The Reading League Journal
SEPTEMBER/OCTOBER 2024

Guidelines for Determining if a Reading Intervention Is Consistent With the Science of Reading

by Matthew K. Burns and Valentina A. Contesse

How do you know if an intervention is consistent with research? Many of the posts on social media pages dedicated to the science of reading ask if a particular curriculum, instructional approach, or intervention is aligned with the science. Given that the science of reading is a vast, interdisciplinary body of research about reading, the consequences of using interventions that are not aligned are potentially tragic. For example, reading interventions for children with learning difficulties in the 1960s, 1970s, and early 1980s focused on various approaches to perceptual training, improving sequential memory, and matching the modality of instruction with a supposed preferred learning style, all of which led to small effects and poor student outcomes (Kavale, 2001).

The Reading League has provided a set of guidelines to examine the extent to which reading curricula are inconsistent with research or may include practices that can result in zero or even a negative effect (The Reading League, 2023), but those guidelines focus on curricula rather than specific interventions. An intervention is "a planned modification of the environment made for the purpose of changing behavior in a prespecified way" (Tilly, 2008, p. 21). An intervention is fundamentally different from instruction because it is a planned modification to the environment and needs criteria to support practical decision making. An intervention can be considered evidence based if multiple rigorous studies have shown a reliably positive effect for a given population of students. Reading teachers and other groups can see if there is an evidence base for any practice by searching Google Scholar (www.scholar.google.com) to determine if there are published studies about the effectiveness of the practice with at least moderate effect sizes (e.g., d or g = 0.30 to 0.50). There may be interventions for which studies have yet to be conducted, but they still should be consistent with previous findings from the science of reading.

Summaries of Research

The first step to determine if a specific intervention is aligned with the science of reading is to examine the relevant research literature. There are multiple sources of summaries of research regarding specific interventions, including practice guides from the Institute of Education Sciences of the United States Department of Education (https://ies.ed.gov/ncee/wwc/practiceguides). Practitioners can also go to the National Center on Intensive Intervention (NCII; www.intensiveintervention.org) to find summaries that evaluate both the quality of the research and the size of the effects of 36 reading interventions.

Individual meta-analyses are comprehensive reviews that provide an overall estimate of effect across the research literature and can be especially useful in identifying effective interventions. Burns et al. (2014) found 22 meta-analyses about reading interventions, which resulted in a list of interventions that led to moderate to large effects (e.g., repeated reading, reciprocal teaching, and concept maps) or small effects (e.g., Whole Language approaches, working memory training, and computer-assisted instruction).

Components of an Effective Reading Intervention

Burns et al. (2014) also identified five essential attributes that all of the interventions with large effects had in common and that seemed to differentiate them from interventions with small effects. Those attributes (a) included explicit instruction in the skill, (b) were targeted to students' needs, (c) provided high opportunities to respond (OTR), (d) offered an appropriate level of challenge, and (e) delivered immediate corrective feedback. Next, we will review the research for the five research-supported components of effective reading interventions.
Explicit Instruction
Explicit instruction is a systematic method of teaching through small obtainable steps that include frequent checks for understanding and active and successful participation by all students (Archer & Hughes, 2011). Rosenshine (1986) outlined six steps for explicit instruction: (a) review, (b) model, (c) provide guided practice, (d) give corrective feedback, (e) provide independent practice, and (f) monitor student progress. Effective interventions must present information in a meaningful sequence of skills and must include some aspect of modeling (I do), guided practice (we do), and independent practice (you do).
Explicit instruction is perhaps the most researched approach to teaching reading that exists. Stockard et al. (2018) reviewed 328 studies of explicit instruction and found an effect size of 0.51 (226 studies) for reading and 0.66 (52 studies) for spelling. Training teachers how to explicitly teach reading comprehension strategies led to more use of modeling and of guided and independent practice, both of which had significantly positive effects (g = 0.85 and g = 1.43, respectively) on reading comprehension among low-achieving students (Okkinga et al., 2018). Modeling is a crucial aspect of explicit instruction.
Targeted to Student Needs
Selecting an evidence-based intervention that targets a specific skill (e.g., phonemic awareness, phonics, fluency, or comprehension) is the starting point for building an effective intervention system (NCII, 2018). The first step in delivering a reading intervention should be to identify gaps in a student's core reading skills and to focus intervention efforts on those skills. Burns and colleagues (2023) found positive effects for analyzing data from the areas outlined by the National Reading Panel (NRP, 2000) to identify the most fundamental skill that a student has yet to master and to target intervention efforts on that skill. For example, students who lack proficiency in both comprehension and fluency, but possess adequate decoding skills, should receive fluency interventions. In contrast, those with low skills in comprehension, fluency, and decoding, but sufficient phonemic awareness, should receive an intervention that focuses on decoding.
Students who receive reading interventions targeted in this manner demonstrate significantly more progress when compared to those receiving comprehensive multicomponent interventions or to students without initial reading difficulties (Burns et al., 2023). Hall and Burns (2018) examined 26 studies on small-group reading interventions for elementary and middle school students and found a moderate overall effect size of g = 0.54, but interventions that targeted specific skills (g = 0.65) were more effective than multicomponent interventions addressing multiple skills (g = 0.35).
Targeting an intervention does not imply exclusive concentration on one area. Rather, it signifies that the primary emphasis is on the specified target, while other facets of reading may also be incorporated. For instance, a reading fluency intervention may include asking the student comprehension questions, a decoding intervention should include reading connected text and could begin with a phonemic awareness warm-up, and phonemic awareness interventions often incorporate letter manipulation techniques to enhance effectiveness (NRP, 2000).
High Opportunities to Respond
Providing practice while learning a new skill is one of the most important components of effective instruction and intervention, but not all practice is good practice. In order to lead to retention and generalization, practice must involve repetition, generation, and interleaving of review items.
Repetition
The first criterion for effective practice is repetition. Simply reading a word or grapheme-phoneme correspondence only once does little for building retention. We have known conclusively for 40 years that increasing the number of presentations while rehearsing new items, called opportunities to respond (OTR), leads to improved academic engagement, learning, and retention of the newly learned items. A comparison of various interventions found that increased OTR was the causal mechanism, with a large correlation of r = .82 between the number of repetitions and retention (Szadokierski & Burns, 2008).
Generation
Students remember something better if they have to self-generate a response rather than simply read an answer. For example, students will remember the sound for /ch/ better if they generate three words from memory that start with the sound than if they say the words represented by three pictures of objects that all start with the /ch/ sound. Retention is also better if students generate a verbal or written response than if they just think the answer to themselves (McCurdy et al., 2020). Generation (d = 1.31) was at least as important as the number of repetitions (d = 0.66) while practicing (Zaslofsky et al., 2016).
Interleaving
Interleaving practice is practicing several related skills together (A, B, C, A, B, C, A, B, C as opposed to A, A, A, B, B, B, C, C, C) after initially learning each. Interleaving has led to positive effects on retention (g = 0.65) and generalization (g = 0.66), and the more similar the items were, the better they were retained (Firth et al., 2021). The effects are stronger for smaller, discrete stimuli, such as letters, than for longer tasks, such as words or sentences (Brunmair & Richter, 2019).
Appropriate Level of Challenge
To be successful, the intervention should provide an appropriate level of challenge. Unfortunately, the appropriate level of challenge is often determined by using reading books with a gradient of difficulty, organizing the books according to difficulty level, and placing students into a specific level of the book series based on data from a standardized measure of reading level. However, an appropriate level of challenge is better defined in relation to the percentage of words read correctly in any given passage or book and the amount of information taught. Next, we discuss both aspects of an appropriate level of challenge.
Percentage of Known
Teaching new skills so that they are sequenced to build on each other increases student learning. Students who read 93% to 97% of the words correctly had better reading outcomes than those who read at higher or lower percentages (Burns, 2024). Reading 93% to 97% of the words correctly is a research-based definition of an appropriate level of challenge during intervention. In fact, the correlation between student growth during reading fluency intervention and the frequency with which each student read 93% to 97% of the words correctly was r = .80 (Burns, 2007).
The guidelines for an appropriate level of challenge for other tasks (e.g., grapheme-phoneme correspondences, learning irregular words in isolation, spelling) are less clear than for reading connected text. Meta-analytic research found large effects for interventions focused on isolated skills (e.g., spelling, letter sounds, math facts) if they contained 50% to 60% known items (d = 1.00), 70% to 85% known items (d = 1.17), or 90% known items (d = 1.22), which compared to a small effect if the intervention contained less than 50% known items (d = 0.39; Burns, 2004). Although the specific criteria may not be clear, including a high percentage of known items seems important.
Amount of Information
The amount of new learning that students can process, retain, and use is limited and is referred to as an acquisition rate (AR). Ceraso (1967) famously found that teaching students a number of words that exceeded their individual limit resulted in significantly fewer words being retained and also reduced retention of previously learned words. Ceraso called this phenomenon retroactive cognitive interference, in which pieces of information competed with each other as they overlapped in memory systems. Retroactive interference has been supported by subsequent research (Darby & Sloutsky, 2015).

Interventions should only teach the number of new items that match the student's AR, which can be found by noting when the student starts making frequent mistakes while learning any new item. Teaching a number of items that exceeded a student's AR resulted in learning fewer words (Haegele & Burns, 2015) and increased time off task during reading instruction (Burns et al., 2021). Thus, interventions should purposefully control the size of the instructional set, which is often inversely related to learning (i.e., smaller sets were associated with more learning; Poncy et al., 2015).
Feedback
Learning is enhanced during intervention when students receive feedback based on their responses. Feedback within an instructional process can be broadly defined as sharing and clarifying criteria for student success and providing precise and honest information about progress toward the criteria (Panadero & Lipnevich, 2022). Feedback should focus on the correct response rather than the incorrect student response and should include an opportunity for the student to present the correct response (Barbetta et al., 1993).

The review of meta-analyses conducted by Wisniewski and colleagues (2020) found an effect size of d = 0.46 (238 study effects) for corrective feedback. The effectiveness of feedback is influenced by its frequency, immediacy, and the accuracy of its content. For example, during initial learning, a student's incorrect response should be followed by immediate corrective feedback, but feedback can be delayed when the student is engaged in independent practice after the initial acquisition of the skill.
Using the Criteria
The five components of an effective intervention outlined by Burns et al. (2014) can provide a framework for examining how consistent a reading intervention is with previous research. As shown in Table 1, each criterion can be rated on a scale of 1 to 4, with 4 being highest and suggesting consistency with research on that component. Next, we will demonstrate how to use the criteria to evaluate two interventions.
Incremental Rehearsal
Incremental rehearsal (IR) is an approach used to practice newly learned isolated stimuli to improve retention of them. It is most effective for students who have difficulty remembering stimuli such as grapheme-phoneme correspondences, high-frequency words, or vocabulary words, and it usually begins by assessing items that the student is responding to incorrectly and those that are mastered. The unknown items are then written on index cards or in some other format along with seven to nine known items and practiced incrementally (Tucker & Burns, 2016).

A review of Google Scholar found 19 published studies of IR effects since 2013. IR was also favorably rated by the NCII (2021; i.e., three of five studies were rated as "convincing evidence" for study design, two were rated as "partially convincing evidence," and all had either large effects or "convincing evidence" for an effect from visual analysis of single-case designs).
IR includes all five components of an effective intervention. As shown in Table 2, the intervention addresses multiple criteria for the Explicit Instruction and Correctly Targeted components, but it does not inherently involve progress monitoring or reinforce other areas of reading and would need to be contextualized with applied practice of the skill. The remaining three components (High OTR, Appropriate Level of Challenge, and Feedback) are all inherent to IR and would all receive a rating of 4, including for Appropriate Level of Challenge because set size considerations are included in the procedures (Tucker & Burns, 2016). Because all components received a rating of 3 or 4, the intervention is likely consistent with previous research.
Cover, Copy, and Compare
Much like IR, Cover, Copy, and Compare (CCC) is a well-researched intervention that helps students learn new stimuli such as math facts or spelling words. CCC for spelling involves providing each student with a sheet of paper with at least three columns and a series of spelling words in the first column. Each student then studies the first word in the column, folds the paper so that the first column is covered but the second column is exposed, spells the word from memory in the second column, and then unfolds the paper to see if their spelling is correct. If it is not correct, the student corrects the word in the third column. If it is correct, then the student proceeds to the next word and starts the process over again (Cieslar et al., 2008).
The NCII (2021) reviewed six studies of CCC (five math and one spelling), of which only one was rated as "convincing evidence" for study design, two had "partially convincing evidence" for the study design, and three had "unconvincing evidence" for the study design. The NCII also rated two single-case design studies as providing "partially convincing evidence" of the effectiveness of the intervention, three had "unconvincing evidence" for the effectiveness, and one study reported an effect size of -0.27 on targeted research measures.
Using the components of an effective intervention shown in Table 2, CCC does not rate as favorably as IR. The intervention provides little modeling because the students are shown a model of correct spelling, but the word is not read aloud to the student, and the student could practice the word while mispronouncing it. There is also no review built into the procedure, and CCC would need to be contextualized with applied practice of the skill to address other areas in reading. CCC would likely receive a 1 for High OTR because it only includes one repetition, and attempts to increase the number of repetitions to three did not enhance the effects (Erion et al., 2009), suggesting that even three repetitions were insufficient.

The intervention does result in a permanent written product that teachers can use to evaluate student progress. The intervention would receive a rating of 2 for Explicit Instruction, Correctly Targeted, and Feedback, but only a 1 for High OTR and Appropriate Level of Challenge. Thus, it seems that CCC does not address two components and likely does not adequately address the other three.

Table 1
Guidelines for Evaluating if a Reading Intervention Aligns With Research

Explicit Instruction
• Criteria: uses modeling, guided practice, and independent practice; consistently and regularly monitors student progress; includes frequent reviews.
• Rating 1: students engage in the skill without being provided a model, with no approach to monitor progress and no opportunities to review. Rating 2: one of the criteria is present. Rating 3: two of the criteria are present. Rating 4: all three of the criteria are present.

Correctly Targeted
• Criteria: focuses on one area of reading; focus area is selected based on student need; reinforces other relevant areas of reading.
• Rating 1: attempts to cover all areas of reading OR no clear intervention focus. Rating 2: intervention is focused but not based on student need and does not reinforce other relevant areas of reading. Rating 3: two of the criteria are present. Rating 4: all three of the criteria are present.

High Opportunities to Respond
• Criteria: includes sufficient practice (repetition) to learn the new skill; students have to generate the response; practice intersperses review items while learning new ones.
• Rating 1: little to no practice provided. Rating 2: sufficient practice provided, but no generation of a response and no interspersing of review items. Rating 3: two of the criteria are present. Rating 4: all three of the criteria are present.

Appropriate Level of Challenge
• Criteria: new skills are sequenced so that they build on each other; students read connected text that contains 93% to 97% known words OR can complete isolated skill tasks with at least 85% accuracy; intervention considers set size based on student skill and age.
• Rating 1: new skills are not well sequenced, students read connected text that contains less than 93% known words OR complete most tasks with less than 85% accuracy, and the set size is too large. Rating 2: one of the criteria is present. Rating 3: two of the criteria are present. Rating 4: all three of the criteria are present.

Feedback
• Criteria: interventionist immediately states that each response is correct or incorrect; interventionist asks the student to give the correct response; interventionist's response includes specific information.
• Rating 1: not apparent that feedback is provided. Rating 2: feedback may be provided, but only one of the criteria is present. Rating 3: two of the criteria are present. Rating 4: all three of the criteria are present.

Table 2
Examples of Ratings for Two Common Reading Interventions

Incremental Rehearsal
• Explicit Instruction (Rating = 3): new stimulus is modeled; provides guided practice; frequent review of previously learned items; no inherent progress monitoring.
• Correctly Targeted (Rating = 3): focuses on one area of reading; assesses known and unknown items; does not reinforce other areas of reading.
• High Opportunities to Respond (Rating = 4): considerable repetition; students generate the response; includes a high percentage of known items interleaved.
• Appropriate Level of Challenge (Rating = 4): high percentage of known items; carefully selected items for teaching and review (known items); set size is addressed.
• Feedback (Rating = 4): errors are corrected immediately; correct response is modeled; students are required to state the correct response after errors.

Cover, Copy, and Compare
• Explicit Instruction (Rating = 2): minimal modeling and no modeling of pronunciation; no review built into the procedure; resulting worksheets can be used to monitor progress.
• Correctly Targeted (Rating = 2): focuses on one area of reading (spelling); not matched to student needs; does not reinforce other areas of reading.
• High Opportunities to Respond (Rating = 1): provides only one repetition; students generate a response; no interspersal of review and new items.
• Appropriate Level of Challenge (Rating = 1): sequence depends on sequence of spelling curriculum; no control over difficulty (could be 100% new).
• Feedback (Rating = 2): students immediately correct errors; correct response is modeled; dependent on student correctly understanding the modeled correct response.
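The evaluations of IR and CCC both apply the same decision rule: rate each of the five components from 1 to 4, and treat the intervention as likely consistent with previous research only when every component is rated 3 or 4. As a purely illustrative sketch (not part of the authors' method; the component names simply mirror Table 1 and the ratings mirror Table 2), the rule can be expressed in a few lines of Python:

```python
# Hypothetical sketch of the Table 1 rubric: an intervention is flagged as
# likely consistent with research only if all five components rate 3 or 4.

COMPONENTS = (
    "Explicit Instruction",
    "Correctly Targeted",
    "High Opportunities to Respond",
    "Appropriate Level of Challenge",
    "Feedback",
)

def is_likely_consistent(ratings: dict[str, int]) -> bool:
    """Return True when every component is rated 3 or 4."""
    for component in COMPONENTS:
        rating = ratings[component]
        if not 1 <= rating <= 4:
            raise ValueError(f"{component}: rating must be 1-4, got {rating}")
        if rating < 3:
            return False
    return True

# Ratings for the two worked examples, as reported in Table 2.
incremental_rehearsal = {
    "Explicit Instruction": 3,
    "Correctly Targeted": 3,
    "High Opportunities to Respond": 4,
    "Appropriate Level of Challenge": 4,
    "Feedback": 4,
}
cover_copy_compare = {
    "Explicit Instruction": 2,
    "Correctly Targeted": 2,
    "High Opportunities to Respond": 1,
    "Appropriate Level of Challenge": 1,
    "Feedback": 2,
}

print(is_likely_consistent(incremental_rehearsal))  # True
print(is_likely_consistent(cover_copy_compare))     # False
```

The sketch only encodes the pass/fail logic; the substantive work, of course, is assigning the ratings themselves against the criteria in Table 1.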
Conclusion
It is important for new interventions to align with previous research, and the criteria presented here could help practitioners evaluate how well individual reading interventions align with the science. The two examples shown here, IR and CCC, are well researched, and practitioners could simply rely on the previous research. However, the guidelines should also be applied and studied when evaluating interventions that have not been frequently studied. Additional research is needed to examine the reliability of decisions made with these guidelines across multiple raters and the validity of the resulting decisions. Moreover, the examples presented here used letter sounds and spelling, which are only two potential targets for reading interventions.

Practitioners should select the intervention with the strongest research base, but in the absence of such data, selecting interventions that are most aligned with previous research could improve student outcomes. Given that school personnel have previously attempted reading interventions that are based on fads rather than science (Kavale, 2001), a method to evaluate consistency with research may be a key component in selecting interventions.
References
Archer, A., & Hughes, C. (2011). Explicit instruction: Effective and efficient teaching. Guilford.

Barbetta, P. M., Heron, T. E., & Heward, W. L. (1993). Effects of active student response during error correction on the acquisition, maintenance, and generalization of sight words by students with developmental disabilities. Journal of Applied Behavior Analysis, 26(1), 111-119.

Brunmair, M., & Richter, T. (2019). Similarity matters: A meta-analysis of interleaved learning and its moderators. Psychological Bulletin, 145(11), 1029-1052.

Burns, M. K. (2004). Empirical analysis of drill ratio research: Refining the instructional level for drill tasks. Remedial and Special Education, 25, 167-175.

Burns, M. K. (2007). Reading at the instructional level with children identified as learning disabled: Potential implications for response-to-intervention. School Psychology Quarterly, 22, 297-313.

Burns, M. K. (2024). Assessing an instructional level during reading fluency interventions: A meta-analysis of the effects on reading. Assessment for Effective Intervention.

Burns, M. K., Aguilar, L. N., Warmbold-Brann, K., Preast, J. L., & Taylor, C. N. (2021). Effect of acquisition rates on off-task behavior of kindergarten students while learning sight words. Psychology in the Schools, 58(1), 5-17.

Burns, M., Duesenberg-Marshall, M., Sussman-Dawson, K., Romero, M., Wilson, D., & Felten, M. (2023). Effects of targeting reading interventions: Testing a skill-by-treatment interaction in an applied setting. Preventing School Failure: Alternative Education for Children and Youth, 68(4), 1-9.

Burns, M. K., VanDerHeyden, A. M., & Zaslofsky, A. F. (2014). Best practices in delivering intensive academic interventions with a skill-by-treatment interaction. In P. L. Harrison & A. Thomas (Eds.), Best practices in school psychology: Student-level services (6th ed.; pp. 129-141). National Association of School Psychologists.

Ceraso, J. (1967). The interference theory of forgetting. Scientific American, 217(4), 117-124.

Cieslar, W., McLaughlin, T. F., & Derby, K. M. (2008). Effects of the copy, cover, and compare procedure on the math and spelling performance of a high school student with behavioral disorder: A case report. Preventing School Failure, 52(4), 45-51.

Darby, K. P., & Sloutsky, V. M. (2015). The cost of learning: Interference effects in memory development. Journal of Experimental Psychology: General, 144(2), 410-431.

Erion, J., Davenport, C., Rodax, N., Scholl, B., & Hardy, J. (2009). Cover-copy-compare and spelling: One versus three repetitions. Journal of Behavioral Education, 18, 319-330. https://doi.org/10.1007/s10864-009-9095-4

Firth, J., Rivers, I., & Boyle, J. (2021). A systematic review of interleaving as a concept learning strategy. Review of Education, 9(2), 642-684.

Haegele, K., & Burns, M. K. (2015). Effect of modifying intervention set size with acquisition rate data among students identified with a learning disability. Journal of Behavioral Education, 24, 33-50.

Hall, M. S., & Burns, M. K. (2018). Meta-analysis of targeted small-group reading interventions. Journal of School Psychology, 66, 54-66.

Kavale, K. A. (2001). Decision making in special education: The function of meta-analysis. Exceptionality, 9, 245-268.

McCurdy, M. P., Viechtbauer, W., Sklenar, A. M., Frankenstein, A. N., & Leshikar, E. D. (2020). Theories of the generation effect and the impact of generation constraint: A meta-analytic review. Psychonomic Bulletin & Review, 27, 1139-1165.

National Center on Intensive Intervention. (2018). Breaking down the DBI process: Questions & considerations. Office of Special Education Programs, U.S. Department of Education.

National Center on Intensive Intervention. (2018). What is data-based individualization. https://intensiveintervention.org/data-based-individualization

National Center on Intensive Intervention. (2021). Academic intervention tools chart. https://charts.intensiveintervention.org/aintervention

National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. National Institute of Child Health and Human Development, National Institutes of Health.

Okkinga, M., van Steensel, R., van Gelderen, A. J., & Sleegers, P. J. (2018). Effects of reciprocal teaching on reading comprehension of low-achieving adolescents: The importance of specific teacher skills. Journal of Research in Reading, 41(1), 20-41.

Panadero, E., & Lipnevich, A. A. (2022). A review of feedback models and typologies: Towards an integrative model of feedback elements. Educational Research Review, 35, 100416.

Poncy, B. C., Solomon, G. E., Duhon, G. J., Skinner, C. H., Moore, K., & Simons, S. (2015). An analysis of learning rate and curricular scope: Caution when choosing academic interventions based on aggregated outcomes. School Psychology Review, 44(3), 289-305.

Rosenshine, B. V. (1986). Synthesis of research on explicit teaching. Educational Leadership, 43, 60-69.

Stockard, J., Wood, T. W., Coughlin, C., & Rasplica Khoury, C. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), 479-507.

Szadokierski, I., & Burns, M. K. (2008). Analogue evaluation of the effects of opportunities to respond and ratios of known items within drill rehearsal of Esperanto words. Journal of School Psychology, 46(5), 593-609.

The Reading League. (2023). Curriculum evaluation guidelines. https://www.thereadingleague.org/curriculum-evaluation-guidelines/

Tilly, W. D., III. (2008). The evolution of school psychology to science-based practice: Problem solving and the three-tiered model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (5th ed., pp. 17-36). National Association of School Psychologists.

Tucker, J. A., & Burns, M. K. (2016). Helping students remember what they learn: An intervention for teachers and school psychologists. Communiqué, 44(6), 23.

Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, 3087.

Zaslofsky, A. F., Scholin, S. E., Burns, M. K., & Varma, S. (2016). Comparison of opportunities to respond and generation effect as potential causal mechanisms for incremental rehearsal with multiplication combinations. Journal of School Psychology, 55, 71-78.
Matthew Burns
Dr. Matthew K. Burns is the Fien Professor of Special Education at the University of Florida and an assistant director of the University of Florida Literacy Institute. Dr. Burns has published over 200 articles and book chapters in national publications and has co-authored or co-edited 15 books. Dr. Burns is one of the leading researchers regarding the use of assessment data to determine individual or small-group interventions and has published extensively on response to intervention, academic interventions, and facilitating problem-solving teams. He received the 2020 Senior Scientist Award from Division 16 (School Psychology) of the American Psychological Association.

Valentina Contesse
Dr. Valentina Contesse is a clinical assistant professor of special education at the University of Florida. Dr. Contesse is a University of Florida Literacy Institute (UFLI) faculty member and the co-author of UFLI Foundations: An Explicit and Systematic Phonics Program. Dr. Contesse teaches courses in various UF teacher preparation programs and coordinates professional development for practicing teachers focused on evidence-based reading instruction. Her research focuses on early literacy instruction and intervention, teacher preparation and training in reading, and the effects of performance feedback on the implementation of evidence-based instructional practices.