
Oral Reading Fluency Norms: A Valuable Assessment Tool for Reading Teachers

Abstract

In 1992, the authors collaborated to develop a set of norms for oral reading fluency for grades 2–5. Since then, interest in and awareness of fluency has greatly increased, and Hasbrouck and Tindal have collaborated further to compile an updated and expanded set of norms for grades 1–8. This article discusses the application of these norms to three important assessment activities related to improving students' reading achievement: (a) screening students for possible reading problems, (b) diagnosing deficits in students' fluency, and (c) monitoring the progress of students receiving supplementary instruction or intensive intervention in reading. An overview of the history and purpose for developing measures of oral reading fluency is also presented.
© 2006 International Reading Association (pp. 636–644) doi:10.1598/RT.59.7.3
JAN HASBROUCK
GERALD A. TINDAL
Oral reading fluency norms: A valuable
assessment tool for reading teachers
In this article, fluency norms are reassessed
and updated in light of the findings stated in
the National Reading Panel report.
Teachers have long known that having students
learn to process written text fluently, with ap-
propriate rate, accuracy, and expression—
making reading sound like language (Stahl & Kuhn,
2002)—is important in the overall development of
proficient reading. However, the fundamental link
between reading fluency and comprehension, espe-
cially in students who struggle with reading, may
have been news to some teachers (Pikulski &
Chard, 2005). Following the publication of the
National Reading Panel report (National Institute of
Child Health and Human Development, 2000),
many teachers and reading specialists are now fo-
cusing significant attention on developing their stu-
dents’ fluency skills.
Curriculum-based measurement and
oral reading fluency
Educators looking for a way to assess students’
reading fluency have at times turned to curriculum-
based measurement (CBM). CBM is a set of stan-
dardized and well-researched procedures for
assessing and monitoring students’ progress in
reading, math, spelling, and writing (Fuchs &
Deno, 1991; Shinn, 1989, 1998; Tindal & Marston,
1990). One widely used CBM procedure is the as-
sessment of oral reading fluency (ORF), which fo-
cuses on two of the three components of fluency:
rate and accuracy. A teacher listens to a student
read aloud from an unpracticed passage for one
minute. At the end of the minute each error is sub-
tracted from the total number of words read to cal-
culate the score of words correct per minute
(WCPM). For a full description of the standardized
CBM procedures for assessing oral reading fluen-
cy, see Shinn (1989).
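The WCPM calculation just described is simple arithmetic. The following is an illustrative sketch (the function name is ours, not part of the CBM literature):

```python
def wcpm(total_words_read: int, errors: int) -> int:
    """Words correct per minute for a one-minute oral reading sample.

    Per the CBM procedure described above, each error is subtracted
    from the total number of words read during the minute.
    """
    if errors > total_words_read:
        raise ValueError("errors cannot exceed words read")
    return total_words_read - errors

# A student who reads 112 words in one minute with 5 errors
# scores 107 WCPM.
print(wcpm(112, 5))  # 107
```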
WCPM has been shown, in both theoretical
and empirical research, to serve as an accurate and
powerful indicator of overall reading competence,
especially in its strong correlation with compre-
hension. The validity and reliability of these two
measures have been well established in a body of
research extending over the past 25 years (Fuchs,
Fuchs, Hosp, & Jenkins, 2001; Shinn, 1998). The
relationship between ORF and comprehension has
been found to be stronger with elementary and jun-
ior high students than with older individuals (Fuchs
et al., 2001).
National norms for oral reading
fluency performance
National ORF norms: 1992
In 1992 we published an article that contained
a table of ORF norms that reported percentile
scores for students in grades 2–5 at three times
(fall, winter, and spring) for each grade. These per-
formance norms were created by compiling data
from eight geographically and demographically di-
verse school districts in the United States. These
districts all had used standardized CBM procedures
to collect their ORF data. There were several limi-
tations to the original 1992 ORF norms. For ex-
ample, they contained scores only for grades 2–5.
In addition, the data obtained in that original study
allowed us to compile norms only for the 75th,
50th, and 25th percentiles.
Time to revisit national ORF norms
Over a decade later, the interest in fluency by
teachers and administrators has grown tremen-
dously. By 2005, fluency had made it to both the
“what’s hot” and the “what should be hot” cate-
gories of the annual survey of national reading ex-
perts to determine current key issues (Cassidy &
Cassidy, 2004/2005). Materials designed specifi-
cally to help teachers teach reading fluency have
been developed such as Read Naturally (Ihnot,
1991), QuickReads (Hiebert, 2002), and The Six-
Minute Solution (Adams & Brown, 2003).
Publications designed to help teachers understand
what fluency is and how to teach it (see Osborn &
Lehr, 2004), as well as how to assess reading flu-
ency (see Rasinski, 2004), are now readily avail-
able. Articles about reading fluency frequently
appear in major professional reading journals, in-
cluding The Reading Teacher. Recent examples are
Hudson, Lane, and Pullen (2005); Kuhn (2004/
2005); and Pikulski and Chard (2005).
From kindergarten through grade 3 a common
practice has been to compare fluency scores with
established norms or benchmarks for (a) screening
students to determine if an individual student may
need targeted reading assistance, and (b) monitor-
ing students’ reading progress. Examples of bench-
mark assessments include DIBELS (Good &
Kaminski, 2002), AIMSweb (Edformation, 2004),
the Texas Primary Reading Inventory—TPRI
(Texas Education Agency, 2004), and the Reading
Fluency Monitor (Read Naturally, 2002). With es-
calating interest in assessing and teaching reading
fluency in the past decade, professional educators
must be certain that they have the most current and
accurate information available to them.
National ORF norms: 2005
New national performance norms for oral read-
ing fluency have now been developed. These new
ORF norms were created from a far larger number
of scores, ranging from a low of 3,496 (in the win-
ter assessment period for eighth graders) to a high
of 20,128 scores (in the spring assessment of sec-
ond graders). We collected data from schools and
districts in 23 states and were able to compile more
detailed norms, reporting percentiles from the 90th
through the 10th percentile levels. To ensure that
these new norms represented reasonably current
student performance, we used only ORF data col-
lected from the fall of 2000 through the 2004
school year.
All the ORF data used in this current compila-
tion were collected using traditional CBM proce-
dures that mandate that every student in a
classroom—or a representative sample of students
from all levels of achievement—be assessed.
Following these procedures, reading scores were
collected from the full range of students, from
those identified as gifted or otherwise exceptional-
ly skillful to those diagnosed with reading disabil-
ities such as dyslexia. Students learning to speak
English who receive reading instruction in a regu-
lar classroom also have been represented in this
sample, although the exact proportion of these stu-
dents is unknown. (A complete summary of the
data files used to compile the norms table in this
article is available at the website of Behavioral
Research & Teaching at the University of Oregon:
http://brt.uoregon.edu/techreports/TR_33_NCORF_DescStats.pdf
[Behavioral Research and Teaching, 2005].)
Using ORF norms for making key
decisions
Everyone associated with schools today is
aware of the increasing requirements for data-
driven accountability for student performance. The
federal No Child Left Behind (NCLB) Act of 2001
(NCLB, 2002) mandates that U.S. schools demon-
strate Adequate Yearly Progress (AYP). In turn,
state and local education agencies are requiring
schools to demonstrate that individual students are
meeting specified benchmarks indicated in state
standards. This amplified focus on accountability
necessarily requires increased collection of assess-
ment data, in both special and general education
settings (Linn, 2000; McLaughlin & Thurlow,
2003).
Four categories of reading assessments
Reading assessments have recently been cate-
gorized to match four different decision-making
purposes: screening, diagnostic, progress monitor-
ing, and outcome (Kame’enui, 2002).
Screening measures: Brief assessments that
focus on critical reading skills that predict fu-
ture reading growth and development, con-
ducted at the beginning of the school year to
identify children likely to need extra or alter-
native forms of instruction.
Diagnostic measures: Assessments conducted
at any time during the school year when a
more in-depth analysis of a student’s strengths
and needs is necessary to guide instructional
decisions.
Progress-monitoring measures: Assessments
conducted at a minimum of three times a year
or on a routine basis (e.g., weekly, monthly,
or quarterly) using comparable and multiple
test forms to (a) estimate rates of reading im-
provement, (b) identify students who are not
demonstrating adequate progress and may re-
quire additional or different forms of instruc-
tion, and (c) evaluate the effectiveness of
different forms of instruction for struggling
readers and provide direction for developing
more effective instructional programs for
those challenged learners.
Outcome measures: Assessments for the
purpose of determining whether students
achieved grade-level performance or demon-
strated improvement.
The role of ORF in reading assessment
Fuchs et al. (2001) have suggested that ORF
assessments can play a role in screening and
progress monitoring. Some initial research by Hosp
and Fuchs (2005) also provides support for the use
of traditional CBM measures as a way of diagnos-
ing difficulties in reading subskills. Having cur-
rent norms available can help guide teachers in
using ORF assessment results to make key instruc-
tional decisions for screening, diagnosis, and
progress monitoring.
The ORF norms presented in Table 1 provide
scores for students in grades 1–8 for three differ-
ent time periods across a school year. For each
grade level, scores are presented for five different
percentile rankings: 90th, 75th, 50th, 25th, and
10th. In order to use these norms for making in-
structional or placement decisions about their own
students, teachers must be certain to follow the
CBM procedures carefully to collect ORF scores.
ORF norms for screening decisions
Rationale and support for screening
reading
Screening measures help a teacher quickly iden-
tify which students are likely “on track” to achieve
future success in overall reading competence and
which ones may need extra assistance. Screening
measures are commonly developed from research
examining the capacity of an assessment to predict
future, complex performance based on a current,
simple measure of performance. These assessments
are designed to be time efficient to minimize the im-
pact on instructional time. Research has clearly indi-
cated the critical need to provide high-quality,
intensive instructional interventions to students at
risk for reading difficulty as soon as possible (Snow,
Burns, & Griffin, 1998). Increasingly, teachers are
being required to administer screening measures to
every student, especially those in kindergarten
through grade 3, because of the potential to prevent
future reading difficulties by early identification and
through instructional intervention.
Assessments that measure a student’s accuracy
and speed in performing a skill have long been stud-
ied by researchers. Such fluency-based assessments
have been proven to be efficient, reliable, and valid
indicators of reading proficiency when used as
screening measures (Fuchs et al., 2001; Good,
Simmons, & Kame’enui, 2001). Researchers have
cited a variety of studies that have documented the
ability of these simple and quick measures to accu-
rately identify individual differences in overall read-
ing competence.
Concerns about fluency measures as
screening tools
Some educators have expressed apprehension
about the use of a very short measure of what may
appear as a single, isolated reading skill to make a
determination about a student’s proficiency in the
highly complex set of processes involved in the task
of reading (Hamilton & Shinn, 2003). Although
this concern is understandable, it is important to
TABLE 1
Oral reading fluency norms, grades 1–8

Grade   Percentile   Fall WCPM   Winter WCPM   Spring WCPM
1           90            —            81           111
1           75            —            47            82
1           50            —            23            53
1           25            —            12            28
1           10            —             6            15
            SD            —            32            39
         Count            —        16,950        19,434
2           90          106           125           142
2           75           79           100           117
2           50           51            72            89
2           25           25            42            61
2           10           11            18            31
            SD           37            41            42
         Count       15,896        18,229        20,128
3           90          128           146           162
3           75           99           120           137
3           50           71            92           107
3           25           44            62            78
3           10           21            36            48
            SD           40            43            44
         Count       16,988        17,383        18,372
4           90          145           166           180
4           75          119           139           152
4           50           94           112           123
4           25           68            87            98
4           10           45            61            72
            SD           40            41            43
         Count       16,523        14,572        16,269
5           90          166           182           194
5           75          139           156           168
5           50          110           127           139
5           25           85            99           109
5           10           61            74            83
            SD           45            44            45
         Count       16,212        13,331        15,292
6           90          177           195           204
6           75          153           167           177
6           50          127           140           150
6           25           98           111           122
6           10           68            82            93
            SD           42            45            44
         Count       10,520         9,218        11,290
7           90          180           192           202
7           75          156           165           177
7           50          128           136           150
7           25          102           109           123
7           10           79            88            98
            SD           40            43            41
         Count        6,482         4,058         5,998
8           90          185           199           199
8           75          161           173           177
8           50          133           146           151
8           25          106           115           124
8           10           77            84            97
            SD           43            45            41
         Count        5,546         3,496         5,335

WCPM: Words correct per minute
SD: Standard deviation
Count: Number of student scores
—: Not assessed (grade 1 norms begin in winter)
recognize that when fluency-based reading meas-
ures are used for screening decisions, the results
are not meant to provide a full profile of a student’s
overall reading skill level. These measures serve as
a powerful gauge of proficiency, strongly support-
ed by a convergence of findings from decades of
theoretical and empirical research (Fuchs et al.,
2001; Hosp & Fuchs, 2005). The result of any
screening measure must be viewed as one single
piece of valuable information to be considered
when making important decisions about a student,
such as placement in an instructional program or
possible referral for academic assistance.
ORF as a “thermometer”
Perhaps a helpful way to explain how teachers
can use a student’s WCPM score as a screening tool
would be to provide an analogy. A fluency-based
screener can be viewed as similar to the temperature
reading that a physician obtains from a thermome-
ter when assisting a patient. A thermometer—like
a fluency-based measure—is recognized as a tool
that provides valid (relevant, useful, and important)
and reliable (accurate) information very quickly.
However, as important as a temperature reading is
to a physician, it is only a single indicator of gener-
al health or illness.
A temperature of 98.6 degrees would not result
in your physician pronouncing you “well” if you
have torn a ligament or have recurring headaches. On
the other hand, if the thermometer reads 103 degrees,
the physician is not going to rush you to surgery to
have your gall bladder removed. Body temperature
provides an efficient and accurate way for a doctor
to gauge a patient’s overall health, but it cannot fully
diagnose the cause of the concern. Fluency-based
screening measures can be valuable tools for teachers
to use in the same way that a physician uses a
thermometer—as one reasonably dependable indica-
tor of a student’s academic “health” or “illness.”
No assessment is perfect, and screening meas-
ures may well exemplify the type of measures
sometimes referred to by education professionals
as “quick and dirty.” Screening measures are de-
signed to be administered in a short period of time
(“quick”), and will at times over- or underidentify
students as needing assistance (“dirty”). While
WCPM has been found to be a stable performance
score, some variance can be expected due to
several uncontrollable factors. These include a
student’s familiarity with or interest in the content of the
passages, a lack of precision in the timing of the
passage, or mistakes made in calculating the final
score due to unnoticed student errors. Both human
error and measurement error are involved in every
assessment. Scores from fluency-based screening
measures must be considered as a performance in-
dicator rather than a definitive cut point (Francis
et al., 2005).
Using ORF norms for screening decisions
Having students read for one minute in an un-
practiced grade-level passage yields a rate and ac-
curacy score that can be compared to the new ORF
norms. This method of screening is typically used
no earlier than the middle of first grade, as stu-
dents’ ability to read text is often not adequately de-
veloped until that time. Other fluency-based
screening measures have been created for younger
students who are still developing text-reading skills
(Edformation, 2004; Kaminski & Good, 1998;
Read Naturally, 2002). The ORF norms presented
in this article start in the winter of first grade and
extend up to the spring of eighth grade.
Interpreting screening scores using the ORF
norms: Grade 1. Research by Good, Simmons,
Kame’enui, Kaminski, and Wallin (2002) found that
first-grade students reading 40 or more WCPM on
unpracticed text passages by the end of the year are
at low risk of future reading difficulty, while students
reading below 40 WCPM are at some risk, and those
reading below 20 WCPM are at high risk of failure.
We recommend following these guidelines for
interpreting first-grade scores.
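Those first-grade risk bands can be expressed directly in code. This is an illustrative sketch of the Good et al. (2002) guidelines cited above; the function name and labels are ours:

```python
def grade1_risk(wcpm: int) -> str:
    """Risk band for an end-of-first-grade WCPM screening score,
    following Good et al. (2002): 40+ WCPM = low risk,
    20-39 WCPM = some risk, below 20 WCPM = high risk."""
    if wcpm >= 40:
        return "low risk"
    if wcpm >= 20:
        return "some risk"
    return "high risk"

print(grade1_risk(45))  # low risk
print(grade1_risk(12))  # high risk
```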
Interpreting screening scores using the ORF
norms: Grades 2–8. To determine if a student may
be having difficulties with reading, the teacher
compares the student’s WCPM score to the scores
from that student’s grade level at the closest time
period: fall, winter, or spring. On the basis of our
field experiences with interpreting ORF screening
scores, we recommend that a score falling within
10 words above or below the 50th percentile should
be interpreted as within the normal, expected, and
appropriate range for a student at that grade level at
that time of year, at least for students in grades 2–8.
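The comparison for grades 2–8 amounts to a table lookup plus the 10-word band. The sketch below transcribes the 50th-percentile values from Table 1; the function name and output strings are our own, and the band is the authors' rule of thumb, not a hard cut point:

```python
# 50th-percentile WCPM values transcribed from Table 1,
# keyed by grade: (fall, winter, spring).
ORF_P50 = {
    2: (51, 72, 89),
    3: (71, 92, 107),
    4: (94, 112, 123),
    5: (110, 127, 139),
    6: (127, 140, 150),
    7: (128, 136, 150),
    8: (133, 146, 151),
}

SEASONS = {"fall": 0, "winter": 1, "spring": 2}

def screen(grade: int, season: str, wcpm: int) -> str:
    """Interpret a screening score for grades 2-8: a score within
    10 WCPM of (or above) the 50th percentile for that grade and
    time of year is within the normal, expected range."""
    p50 = ORF_P50[grade][SEASONS[season]]
    if wcpm >= p50 - 10:
        return "within or above expected range"
    return "below expected range; consider further assessment"

# Grade 3, winter (50th percentile = 92 WCPM):
print(screen(3, "winter", 85))  # within or above expected range
print(screen(3, "winter", 60))  # below expected range; consider further assessment
```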
ORF norms for diagnosis
We can continue the medical analogy used pre-
viously with screening decisions to discuss diag-
nosing reading difficulties. When a physician sees
a patient with an elevated body temperature, that
information—along with blood pressure, choles-
terol levels, height/weight ratio, and many other
potential sources of data—serves as a key part of
the physician’s decision about the next steps to take
in the patient’s treatment. Diagnosing illness has
direct parallels to diagnosing the causes for reading
difficulties and planning appropriate instruction.
As we have discussed, if a student has a low
score on a screening measure, that single score
alone cannot provide the guidance we need about
how to develop an instructional plan to help that
student achieve academic “wellness.” A profes-
sional educator looks beyond a low score on a
fluency-based screening measure to examine other
critical components of reading, including oral lan-
guage development, phonological and phonemic
awareness, phonics and decoding skills, vocabulary
knowledge and language development, compre-
hension strategies, and reading fluency. The ORF
norms can play a useful role in diagnosing possi-
ble problems that are primarily related to fluency.
Interpreting scores using the ORF norms
for diagnosing fluency problems
The procedures for using the ORF norms to
diagnose fluency problems are similar to those for
screening, except here the level of materials should
reflect the student’s instructional reading level,
rather than his or her grade level. We define
instructional level as text that is challenging but
manageable for the reader, with no more than ap-
proximately 1 in 10 difficult words. This translates
into 90% success (Partnership for Reading, 2001).
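The accuracy arithmetic behind that definition is straightforward. The sketch below is illustrative only: the 90% boundary comes from the discussion above, but exact band edges vary across published inventories, and the names are ours:

```python
def accuracy(words_read: int, errors: int) -> float:
    """Percentage of words read correctly in a passage."""
    return 100.0 * (words_read - errors) / words_read

def text_level(acc_pct: float) -> str:
    """Rough band implied by the definition above: about 90% word
    accuracy marks instructional level (challenging but manageable);
    below that, the text is likely at the student's frustration
    level. Treat the cutoff as approximate, not definitive."""
    if acc_pct >= 90.0:
        return "instructional (or easier)"
    return "frustration"

# 100 words with 10 errors = 90% accuracy, roughly instructional level.
print(text_level(accuracy(100, 10)))  # instructional (or easier)
```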
A tool sometimes used by reading specialists or
classroom teachers for diagnosing reading problems
is an informal reading inventory (IRI). IRIs are ei-
ther teacher-made or published sets of graded pas-
sages, sometimes with introductions to be read aloud
to students before they read, and typically include a
set of comprehension questions to be answered af-
ter the student reads the entire passage. IRIs are
commonly used to help a teacher determine at what
level a student can read text either independently or
with instruction, or if the text is at that student’s frus-
tration level (less than 90% accuracy with impaired
comprehension). Analysis of miscues made during
the student’s reading can assist in the diagnoses of
decoding or comprehension difficulties. IRI pas-
sages can also be used along with CBM procedures
to assist in diagnosing fluency problems.
To incorporate fluency diagnosis into an IRI
assessment, a teacher would assess a student’s flu-
ency using the standardized CBM procedures dur-
ing the first 60 seconds of reading in text that is
determined to be at the student’s instructional read-
ing level.
ORF norms for monitoring student
progress
A third use for ORF norms is to provide a tool to
monitor a student’s progress in reading. Use of CBM
procedures to assess individual progress in acquiring
reading skills has a long history and strong support
from numerous empirical research studies (Fuchs et
al., 2001; Fuchs & Fuchs, 1998; Shinn, 1989, 1998).
CBM fluency-based measures have been found by
many educators to be better tools for making deci-
sions about student progress than traditional stan-
dardized measures, which can be time-consuming,
expensive, administered infrequently, and of limit-
ed instructional utility (Good, Simmons, &
Kame’enui, 2001; Tindal & Marston, 1990).
Using ORF norms for progress-monitoring
decisions
CBM progress monitoring typically involves
having a student read an unpracticed passage se-
lected from materials at that student’s grade level
(for those reading at or above expected levels) or
at a goal level (for students reading below expected
levels). Progress-monitoring assessments may be
administered weekly, once or twice monthly, or
three to four times per year, depending on the type
of instructional program a student is receiving.
Students at or above grade level in reading.
Students whose reading performance is at or ex-
ceeds the level expected for their grade placement
may need only to have their reading progress mon-
itored a few times per year to determine if they are
meeting the benchmark standards that serve as
predictors of reading success. For these students,
progress monitoring may take the form of simply
repeating the same procedures used in the fall for
screening. Students read aloud from an unpracticed
passage at their grade level, and the resulting
WCPM score is compared to the ORF norms for
the most appropriate comparison time period—fall,
winter, or spring. If a student’s WCPM score is
within plus or minus 10 WCPM of the 50th per-
centile on the ORF table, or is more than 10
WCPM above the 50th percentile, we recommend
that the student be considered as making adequate
progress in reading (unless there are other indica-
tors that would raise concern).
Students below grade level in reading. For stu-
dents who receive supplemental support for their
reading (those reading six months to one year be-
low grade level) or students with more serious
reading problems who are getting more intensive
interventions to improve their reading skills,
progress monitoring may take a different form. For
these students, progress-monitoring assessments
may be administered more frequently, perhaps
once or twice monthly for students receiving sup-
plemental reading support, and as often as once per
week for students reading more than one year be-
low level who are receiving intensive intervention
services, including special education.
Using graphs to interpret progress-
monitoring scores
When monitoring the progress of these lower
performing students, the standard CBM procedures
are used; however, the student’s WCPM scores are
recorded on a graph to facilitate interpretation of the
scores. An individual progress-monitoring graph is
created for each student. A graph may reflect a par-
ticular period of time, perhaps a grading period or a
trimester. An aimline is placed on the graph, which
represents the progress a student will need to make
to achieve a preset fluency goal. Each time the stu-
dent is assessed, that score is placed on the graph. If
three or more consecutive scores fall below the aim-
line, the teacher must consider making some kind
of adjustment to the current instructional program
(Hasbrouck, Woldbeck, Ihnot, & Parker, 1999).
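The aimline and the three-consecutive-scores rule can be sketched in code. This is an illustrative linear aimline; graphing tools and goal-setting conventions vary, and the function names are ours:

```python
def aimline(start_wcpm: float, goal_wcpm: float, n_points: int) -> list[float]:
    """Expected WCPM at each assessment point, rising linearly from
    the student's starting score to the preset fluency goal."""
    step = (goal_wcpm - start_wcpm) / (n_points - 1)
    return [start_wcpm + step * i for i in range(n_points)]

def needs_adjustment(scores: list[float], expected: list[float]) -> bool:
    """Decision rule from Hasbrouck et al. (1999), as described above:
    three or more consecutive scores below the aimline signal that
    the current instructional program should be reconsidered."""
    run = 0
    for observed, aim in zip(scores, expected):
        run = run + 1 if observed < aim else 0
        if run >= 3:
            return True
    return False

# A student starting at 40 WCPM with a goal of 60 WCPM over 5 checks:
aim = aimline(40, 60, 5)          # [40.0, 45.0, 50.0, 55.0, 60.0]
print(needs_adjustment([39, 44, 49, 56, 61], aim))  # True
print(needs_adjustment([41, 44, 51, 56, 61], aim))  # False
```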
CBM progress-monitoring procedures have
been available for many years but have not been
widely used by teachers (Hasbrouck et al., 1999).
With the increased awareness of the importance of
preventing reading difficulties and providing inten-
sive intervention as soon as a concern is noted, this
will likely change. Using fluency norms to set ap-
propriate goals for student improvement and to
measure progress toward those goals is a powerful
and efficient way for educators to make well-
informed and timely decisions about the instruc-
tional needs of their students, particularly the
lowest performing, struggling readers. (For more
resources for progress monitoring, see the website
of the National Center on Student Progress
Monitoring at www.studentprogress.org.)
A cautionary note about reading
fluency
We would like to add one caveat regarding
reading fluency. Although this skill has recently be-
come an increased focus in classroom reading in-
struction, and the awareness of the link between
fluency and comprehension has grown, there ap-
pears to be a tendency among some educators to
believe that raising a student’s fluency score is
“the” main goal of reading instruction. As impor-
tant as fluency is, and as valuable as the informa-
tion obtained from fluency-based assessments can
be for instructional decision making, we caution
teachers and administrators to keep fluency and
fluency-based assessment scores in perspective.
Helping our students become fluent readers is
absolutely critical for proficient and motivated
reading. Nonetheless, fluency is only one of the
essential skills involved in reading. We suggest that
teachers use the 50th percentile as a reasonable
gauge of proficiency for students. Keep in mind
that it is appropriate and expected for students to
adjust their rate when reading text of varying diffi-
culty and for varied purposes. Pushing every stu-
dent to reach the 90th percentile or even the 75th
percentile in their grade level is not a reasonable
or appropriate goal for fluency instruction.
Focus on fluency
Reading is a complex process involving multi-
ple linguistic and cognitive challenges. It is clear
that the ability to read text effortlessly, quickly, ac-
curately, and with expression plays an essential role
in becoming a competent reader. Researchers still
have much work to do to identify fully the features,
mechanisms, and processes involved in reading flu-
ency. However, decades of research have validated
the use of fluency-based measures for making es-
sential decisions about which students may need
assistance in becoming a skilled reader (screening),
an individual student’s strength or need with the
skills of reading fluency (diagnosis), and whether a
student is making adequate progress toward the
goals of improved reading proficiency (progress
monitoring). While we strongly agree with the
premise that accuracy, rate, and quality of oral
reading must be assessed within a context of com-
prehension (Pikulski & Chard, 2005), up-to-date
national oral reading fluency norms can serve as an
important tool to assist educators in developing,
implementing, and evaluating effective instruction-
al programs to help every student become a skilled,
lifelong reader and learner.
Hasbrouck is a consultant and researcher with
JH Consulting, 2100 3rd Avenue #2003,
Seattle, WA 98121, USA. E-mail
reading@jhasbrouck.net. Tindal teaches at the
University of Oregon in Eugene.
References
Adams, G.N., & Brown, S. (2003). The six-minute solution.
Longmont, CO: Sopris West.
Behavioral Research and Teaching. (2005). Oral reading flu-
ency: 90 years of assessment (Tech. Rep. No. 33).
Eugene: University of Oregon.
Cassidy, J., & Cassidy, D. (December 2004/January 2005).
What’s hot, what’s not for 2005. Reading Today, p. 1.
Edformation. (2004). AIMSweb progress monitoring and
assessment system. Retrieved May 17, 2004, from
http://www.edformation.com
Francis, D.J., Fletcher, J.M., Stuebing, K.K., Lyon, G.R.,
Shaywitz, B.A., & Shaywitz, S.E. (2005). Psychometric
approaches to the identification of LD: IQ and achieve-
ment scores are not sufficient. Journal of Learning
Disabilities, 38(2), 98–108.
Fuchs, L.S., & Deno, S.L. (1991). Curriculum-based measure-
ment: Current applications and future directions.
Exceptional Children, 57, 466–501.
Fuchs, L.S., & Fuchs, D. (1998). Monitoring student progress
toward the development of reading competence: A re-
view of three forms of classroom-based assessment.
School Psychology Review, 28, 659–671.
Fuchs, L.S., Fuchs, D., Hosp, M.K., & Jenkins, J.R. (2001). Oral
reading fluency as an indicator of reading competence: A
theoretical, empirical, and historical analysis. Scientific
Studies of Reading, 5, 239–256.
Good, R.H., III, & Kaminski, R.A. (Eds.). (2002). Dynamic in-
dicators of basic early literacy skills (6th ed.). Eugene:
University of Oregon, Institute for the Development of
Educational Achievement.
Good, R.H., Simmons, D.C., & Kame’enui, E.J. (2001). The
importance and decision-making utility of a continuum of
fluency-based indicators of foundational reading skills
for third-grade high-stakes outcomes. Scientific Studies
of Reading, 5, 257–288.
Good, R.H., Simmons, D.S., Kame’enui, E.J., Kaminski, R.A., &
Wallin, J. (2002). Summary of decision rules for inten-
sive, strategic, and benchmark instructional recommen-
dations in kindergarten through third grade (Tech. Rep.
No. 11). Eugene: University of Oregon.
Hamilton, C., & Shinn, M.R. (2003). Characteristics of word
callers: An investigation of the accuracy of teachers’
judgments of reading comprehension and oral reading
skills. School Psychology Review, 32, 228–240.
Hasbrouck, J.E., & Tindal, G. (1992). Curriculum-based oral
reading fluency norms for students in grades 2–5.
Teaching Exceptional Children, 24(3), 41–44.
Hasbrouck, J.E., Woldbeck, T., Ihnot, C., & Parker, R.I. (1999).
One teacher’s use of curriculum-based measurement: A
changed opinion. Learning Disabilities Research &
Practice, 14(2), 118–126.
Hiebert, E.H. (2002). QuickReads. Upper Saddle River, NJ:
Modern Curriculum Press.
Hosp, M.K., & Fuchs, L.S. (2005). Using CBM as an indicator
of decoding, word reading, and comprehension: Do the
relations change with grade? School Psychology Review,
34, 9–26.
Hudson, R.F., Lane, H.B., & Pullen, P.C. (2005). Reading flu-
ency assessment and instruction: What, why, and how?
The Reading Teacher, 58, 702–714.
Ihnot, C. (1991). Read naturally. Minneapolis, MN: Read
Naturally.
Kame’enui, E.J. (2002, May). Final report on the analysis of
reading assessment instruments for K–3. Eugene:
University of Oregon, Institute for the Development of
Educational Achievement.
Kaminski, R.A., & Good, R.H. (1998). Assessing early literacy
skills in a problem-solving model: Dynamic Indicators of
Basic Early Literacy Skills. In M.R. Shinn (Ed.), Advanced
applications of curriculum-based measurement (pp.
113–142). New York: Guilford.
Kuhn, M. (2004/2005). Helping students become accurate,
expressive readers: Fluency instruction for small groups.
The Reading Teacher, 58, 338–345.
Linn, R.L. (2000). Assessments and accountability.
Educational Researcher, 29(2), 4–16.
McLaughlin, M.J., & Thurlow, M. (2003). Educational ac-
countability and students with disabilities: Issues and
challenges. Educational Policy, 17, 431–451.
National Institute of Child Health and Human Development.
(2000). Report of the National Reading Panel. Teaching
children to read: An evidence-based assessment of the sci-
entific research literature on reading and its implications
for reading instruction (NIH Publication No. 00–4769).
Washington, DC: U.S. Government Printing Office.
No Child Left Behind Act of 2001, Pub. L. No. 107–110, 115 Stat.
1425 (2002).
Osborn, J., & Lehr, F. (2004). A focus on fluency. Honolulu,
HI: Pacific Resources for Education and Learning.
Partnership for Reading. (2001). Put reading first: The re-
search building blocks for teaching children to read.
Washington, DC: National Institute for Literacy.
Pikulski, J.J., & Chard, D.J. (2005). Fluency: Bridge between
decoding and comprehension. The Reading Teacher, 58,
510–519.
Rasinski, T.V. (2004). Assessing reading fluency. Honolulu,
HI: Pacific Resources for Education and Learning.
Read Naturally. (2002). Reading fluency monitor. Minneapolis,
MN: Author.
Shinn, M.R. (Ed.). (1989). Curriculum-based measurement:
Assessing special children. New York: Guilford.
Shinn, M.R. (Ed.). (1998). Advanced applications of curricu-
lum-based measurement. New York: Guilford.
Snow, C.E., Burns, M.S., & Griffin, P. (Eds.). (1998). Preventing
reading difficulties in young children. Washington, DC:
National Academy Press.
Stahl, S.A., & Kuhn, M.R. (2002). Making it sound like lan-
guage: Developing fluency. The Reading Teacher, 55,
582–584.
Texas Education Agency. (2004). Texas primary reading
inventory—TPRI. Retrieved May 19, 2005, from
http://www.tpri.org
Tindal, G., & Marston, D. (1990). Classroom-based assess-
ment: Testing for teachers. Columbus, OH: Merrill.
The Reading Teacher Vol. 59, No. 7 April 2006
... Indeed, Chall (1996) placed fluency development in Stage 2 (Grades 2–3) in her developmental model of reading. However, a growing body of research has shown that the development of reading fluency continues beyond grade 2 (Hasbrouck & Tindal, 2006, 2017) and that a significant number of students in the upper elementary, middle, and even secondary grades have not achieved adequate levels of reading fluency to support proficiency in comprehension (Rasinski et al., 2009; Sabatini et al., 2019; Wang et al., 2019). The need for teachers beyond the elementary grades to be aware of reading fluency, its importance, and how it can be assessed and nurtured in students is critical. ...
... The Lexile ranges of the texts we used reflected levels that college-ready secondary-level students would be expected to read proficiently. The assessment protocol we used followed administration protocols that are commonly used to assess oral fluency (Hasbrouck & Tindal, 2006; Shinn, 1989). Participants first completed a brief survey of demographic data. ...
... Our first research question asked: what are the appropriate reading word recognition and automaticity norms for highly proficient adult readers? Hasbrouck and Tindal (2006) suggest that a fluency rate 10 WCPM above or below the 50th percentile should be considered within the normal and expected range of performance. Applying this criterion to the present study where the 50th percentile score is 148 WCPM, automaticity scores between 138 and 158 may be considered normal and expected. ...
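The ±10 WCPM band described in this snippet is simple arithmetic; a minimal sketch, where the function name and default tolerance are illustrative rather than taken from the article:

```python
def normal_wcpm_range(p50_wcpm, tolerance=10):
    """Band of words-correct-per-minute (WCPM) scores treated as normal,
    expected performance: the 50th-percentile score plus or minus a
    tolerance (Hasbrouck & Tindal suggest 10 WCPM)."""
    return (p50_wcpm - tolerance, p50_wcpm + tolerance)

# Applying the criterion to the study's 50th-percentile score of 148 WCPM:
low, high = normal_wcpm_range(148)
print(low, high)  # 138 158
```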
Article
Reading fluency has been identified as a critical competency for reading success. Accurate and automatic word recognition, one component of fluency, is normally assessed through reading rate (Oral Reading Fluency—ORF). While norms for ORF exist through grade 8, norms beyond grade 8 are largely unknown. The present study attempted to establish ORF norms associated with successful adult readers (college graduates), thus establishing a ceiling ORF target range for success in fluency. Results indicate that a word recognition accuracy range of 98–100% and an automaticity range of 138–158 words read correctly per minute on high school level reading material are associated with average performance by college graduates. Results are discussed in terms of implications for monitoring students' progress in fluency in the secondary grades and beyond.
... Trials in this phase that did not meet the specified goal would not result in access to a reinforcer; participants would be given the option of trying again to earn their reinforcer or to take a break. Participants' progress was graphed on an Excel version of the SCC (Harder, 2020), and goals were determined by calculating the celeration necessary to achieve a benchmark of 60 words per minute within 3 months. The primary researcher determined this goal and communicated it to each caregiver, with changes in the goal being relayed during weekly meetings based upon each individual's performance. ...
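The goal-setting step in this snippet (finding the celeration needed to reach 60 words per minute within 3 months) can be sketched as follows; the starting rate and week count below are assumptions for illustration, not values from the study:

```python
def weekly_celeration(current_rate, target_rate, weeks):
    """Multiplicative weekly growth factor ("celeration") needed to move
    from the current rate to the target rate in the given number of weeks,
    as read off a standard celeration chart's semi-log scale."""
    return (target_rate / current_rate) ** (1.0 / weeks)

# E.g., reaching the 60 words-per-minute benchmark from an assumed
# starting rate of 30 wpm in roughly 3 months (13 weeks):
factor = weekly_celeration(30, 60, 13)
print(round(factor, 3))  # 1.055
```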
... After the trials were completed for the day, participants filled out a Google Form, which was sent directly to the primary researcher. These data were transferred to an Excel Template for the SCC, where primary data analysis occurred (Harder, 2020). ...
Article
Frequency building, a method of instruction by which learners perform timed repetitions of the behavior followed by corrective feedback, is often used in conjunction with precision teaching wherein individuals’ performances are continuously measured on a standard celeration chart (SCC) to facilitate data-based decisions. The present case study utilized a synchronous teaching method to teach caregivers to implement frequency building in-home to teach young learners to identify sight words while learner performance was measured on a SCC. Four participants, with little previous exposure to sight words, were taught using precision teaching as implemented by caregivers. Results indicate that synchronous teaching is a viable method to teach caregivers to implement precision teaching, but care should be given to the method of teaching as well as procedural fidelity during precision teaching trials.
... Reading benchmarks in English are well established in relation to words correct per minute, i.e. reading accuracy and reading rate (Hasbrouck and Tindal 2006), but these still need to be established for the African languages. As reading rate has been correlated with reading comprehension (Cummings and Petcher 2016) slow readers might struggle to read for meaning in any language (Spaull, Pretorius, and Mohohlwane 2020). ...
Article
The research reported on in this article examines the attitudes towards student linguistic diversity and multilingual pedagogies of 30 university lecturer participants enrolled for an accredited short course on multilingual pedagogies at a South African institution. The aim of the course is to support lecturers in helping students gain access to their disciplines using multilingual strategies including translation and translanguaging. Staff from a range of disciplines drawn from 8 faculties formed the first cohort of participants. Within a postmodern research paradigm, an interpretive approach was used to understand and analyze data collected from questionnaires, language histories and a language portrait exercise. We discuss findings on staff perceptions of translanguaging in their teaching; their knowledge of and sensitivity towards their students’ linguistic repertoires, their own language backgrounds and the challenges they face in catering for linguistic diversity in their lectures. We also present participants’ examples of multilingual pedagogies based on what they had learned from the MP course.
... Panel & National Institute of Child Health & Human Development, 2000), studies have also established benchmarks using only reading fluency percentile scores. For example, Hasbrouck and Tindal (2006) recommend that student performance around the 50th percentile on a reading fluency assessment can be considered as appropriate for that grade. Other scholars have reported that students scoring below the 25th percentile on a test were considered as performing below average and in need of additional intervention (Cummings, Dewey, et al., 2011;Fletcher et al., 2005;Fuchs & Fuchs, 2005; University of Oregon, 2020). ...
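The percentile cutoffs cited in this snippet amount to a simple screening rule; a hedged sketch, where the function name is invented and the cut point is the 25th percentile mentioned above:

```python
def needs_intervention(percentile, cut=25):
    """Flag a student as performing below average and in need of additional
    intervention when their fluency percentile falls below the cut
    (the passage cites the 25th percentile)."""
    return percentile < cut

print(needs_intervention(20))  # True
print(needs_intervention(50))  # False (around the 50th = grade-appropriate)
```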
Article
The National Education Policy of India in 2020 indicated a goal that all children will achieve foundational literacy by Grade 3 by 2025. In line with this, the Indian government set a target of 30 to 35 correct words per minute (cwpm) at Grade 3. However, there are no supporting data for reading fluency targets in the Indian reading literature. Similarly, international benchmarks are used in India, but there are no data examining their appropriateness for the Indian context. The purpose of this study was to develop data-driven benchmarks for oral reading fluency (ORF) for students in Grade 3 in the Indian context. We compared both international standards and the national targets with our study's benchmarks to assess their suitability for the Indian population. This study assessed ORF in English for Grade 3 students from three schools following three different curricula in India. The primary tool employed was Fluency and Benchmarking for Literacy in Education (FABLe), a digital curriculum-based measurement tool developed specifically for the Indian context. Students were assessed at the beginning and end of the academic year. The performance data collected using FABLe were significantly skewed when compared with the international benchmarks, especially for the end of the year. The national Grade 3 fluency target was found to be even lower than the 5th percentile at the end of the year. Employing international tools and benchmarks may lead to an overidentification of the number of students that need support, and the current Indian government target may lead to an under-identification of the same population of students. This study fills an important research gap by providing schools with a Grade 3 ORF measure developed for the Indian context. The results should be considered a preliminary benchmark in Grade 3 and be used to inform literacy practices.
What is already known about this topic: The Government of India has declared a Grade 3 target of 30 to 35 correct words read per minute, which is aligned to the National Education Policy 2020 goal of achieving foundational literacy for all students by Grade 3. International benchmarks are being employed in India, but there are no data to indicate their suitability for the Indian context. The available Indian benchmarks are based on limited data from Grade 1 and 2 students and from other developing countries.
What this paper adds: This study presents preliminary data-driven benchmarks for English oral reading fluency (ORF) at the beginning and end of Grade 3 in the Indian context. The study compares student performance with respect to the international benchmarks as well as with the Indian national target, highlighting the need to develop specific benchmarks for the Indian population.
Implications for theory, policy or practice: Currently, there are no benchmarks available for the Indian context. Policy-makers can use FABLe benchmarks to set more realistic reading targets. International benchmarks may tend to overidentify students, and the national targets may tend to under-identify students needing support. Indian educators can make use of FABLe benchmarks to more accurately identify students in need of intervention. Using the benchmarks, schools and educators can develop reading profiles of students to guide data-driven instruction and monitor student progress.
... It is emphasized that the effect of decoding fluency is especially pronounced in texts containing a large number of words encountered for the first time, and that readers with weak decoding fluency skills inevitably face difficulties in both fluent reading and reading comprehension (Hudson et al., 2008). More generally, readers are expected to read the words they encounter both accurately and at an appropriate rate, and studies evaluating the fluency of oral reading in particular have taken the number of words read correctly per minute as their basic measure (Hasbrouck & Tindal, 1992; Hasbrouck & Tindal, 2006; Rasinski, 1990). In addition, it is clearly stated that for fluent reading to occur, the words read must be correctly understood (Allington, 2006; Fuchs, Fuchs, Hosp, & Jenkins, 2001). ...
Article
The aim of this study is to develop a Reading Skills Assessment Tool (RSAT) for 2nd, 4th, 6th and 8th grade readers and to assess if there is a significant difference between the decoding and fluent reading skills of good and poor comprehenders. First, validity and reliability studies of the RSAT, which includes the dimensions of decoding, fluent reading, and reading comprehension, were conducted. The RSAT was finalized with the data collected from 840 participants in validity and reliability studies. Following this, the reading comprehension scores of 150 participants at each grade level were divided into lower and upper 27% groups, and groups with good and poor comprehenders were determined. Then, the differences in the decoding and reading fluency results of the good and poor comprehenders were studied. The results revealed that the decoding and fluent reading performances of good comprehenders were considerably superior to those of the group with poor comprehenders, and the results were reviewed in light of the relevant literature. Consequently, good comprehenders performed better than poor comprehenders in decoding (syllable reading pace, real word reading pace and pseudoword reading accuracy) and fluent reading (correct number of words read per minute). With the research, a formal tool for evaluating reading skills has been added to the Turkish literature, and it has contributed to the elimination of an important limitation in this field. RSAT can be used effectively by experts in scientific studies and by teachers in the evaluation of student performance in practice. In addition, it was emphasized that good and poor readers perform differently in all aspects of reading, and that it is crucial to prevent future reading difficulties.
Keywords: good and poor readers; components of reading; assessment of reading; development of reading; reading performances of good and poor readers
Article
Eye movements have been examined as an index of attention and comprehension during reading in the literature for over 30 years. Although eye-movement measurements are acknowledged as reliable indicators of readers' comprehension skill, few studies have analyzed eye-movement patterns using network science. In this study, we offer a new approach to analyze eye-movement data. Specifically, we recorded visual scanpaths when participants were reading expository science text, and used these to construct scanpath networks that reflect readers' processing of the text. Results showed that low ability and high ability readers' scanpath networks exhibited distinctive properties, which are reflected in different network metrics including density, centrality, small-worldness, transitivity, and global efficiency. Such patterns provide a new way to show how skilled readers, as compared with less skilled readers, process information more efficiently. Implications of our analyses are discussed in light of current theories of reading comprehension.
Article
Despite a body of evidence that curriculum-based measurement of reading (R-CBM) is a valid measure of general reading achievement, some school-based professionals remain unconvinced. At the core of their argument is their experience with word callers, students who purportedly can read fluently, but do not understand what they read. No studies have been conducted to determine if teachers' perceptions about these word callers are accurate. This study examined the oral reading and comprehension skills of teacher-identified word callers to test whether they read fluently, but lacked comprehension. Two groups of third graders (N = 66) were examined: (a) teacher-identified word callers (n = 33) and (b) similarly fluent peers (n = 33) who were judged by their teachers to read as fluently as the word caller but who showed comprehension. They were compared on R-CBM, CBM-Maze, an oral question-answering test, and the Passage Comprehension subtest of the Woodcock Reading Mastery Test. Results disconfirmed that word callers and their similarly fluent peers read aloud equally well. Word callers read fewer correct words per minute and earned significantly lower scores on the three comprehension measures. Teachers were not accurate in their predictions of either group's actual reading scores on all measures, but were most inaccurate in their prediction of word callers' oral reading scores. Implications for addressing resistance in using CBM as a measure of general reading achievement are discussed.
Article
A deep, developmental construct and definition of fluency, in which fluency and reading comprehension have a reciprocal relationship, is explicated and contrasted with superficial approaches to that construct. The historical development of fluency is outlined, along with conclusions of the U.S. National Reading Panel, to explore why fluency has moved from being “the neglected aspect of reading” to a popular topic in the field. A practical, developmental instructional program based largely on the theoretical framework and research findings of Linnea Ehri is delineated. The nine essential components of that program include building the graphophonic foundations for fluency; building and extending vocabulary and oral language skills; providing expert instruction and practice in the recognition of high-frequency vocabulary; teaching common word parts and spelling patterns; teaching, modeling, and providing practice in the application of a decoding strategy; using appropriate texts to coach strategic behaviors and to build reading speed; using repeated reading procedures as an intervention approach for struggling readers; extending growing fluency through wide independent reading; and monitoring fluency development through appropriate assessment procedures. The position is taken throughout that teaching, developing, and assessing fluency must always be done in the context of reading comprehension.
Article
Effective approaches to fluency instruction should facilitate automatic and accurate word recognition as well as the ability to read with expression. The study reported in this article focused on instructional approaches that can be used with small groups of learners within a broader literacy curriculum, one that is suitable for flexible grouping. It also explored the relationship between fluent reading and comprehension. Twenty-four struggling second-grade readers were selected to take part in the interventions. The research evaluated two approaches for assisting learners who were making the transition to fluent reading: a modified repeated reading approach, and a scaffolded wide-reading approach in which learners read equivalent amounts of text without the use of repetition. A listening-only group, designed to serve as a Hawthorne control, and a control group were also included. Results indicate that the students in the wide-reading and repeated reading groups demonstrated growth in terms of word recognition in isolation, prosody, and correct words per minute, and that the wide-reading group also demonstrated growth in terms of comprehension. Suggestions for integrating these approaches with the literacy curriculum are discussed.
Article
This article explains the elements of reading fluency and ways to assess and teach them. Fluent reading has three elements: accurate reading of connected text, at a conversational rate with appropriate prosody. Word reading accuracy refers to the ability to recognize or decode words correctly. Reading rate refers to both word-level automaticity and speed in reading text. Prosodic features are variations in pitch, stress patterns, and duration that contribute to expressive reading of a text. To assess reading fluency, including all its aspects, teachers listen to students read aloud. Students' accuracy can be measured by listening to oral reading and counting the number of errors per 100 words or a running record. Measuring reading rate includes both word-reading automaticity and speed in reading connected text using tests of sight-word knowledge and timed readings. A student's reading prosody can be measured using a checklist while listening to the student. To provide instruction in rate and accuracy, variations on the repeated readings technique are useful. To develop prosody, readers can listen to fluent models and engage in activities focused on expression and meaning. Opportunities to develop all areas of reading fluency are important for all readers, but especially for those who struggle.
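The assessment procedure this abstract describes (timing oral reading and counting errors) reduces to two computations; a minimal sketch, with illustrative function names:

```python
def accuracy_percent(words_attempted, errors):
    """Word-reading accuracy: percentage of attempted words read correctly."""
    return 100.0 * (words_attempted - errors) / words_attempted

def wcpm(words_attempted, errors, seconds):
    """Words correct per minute from a timed oral-reading sample."""
    return (words_attempted - errors) * 60.0 / seconds

# A student who reads 110 words in 60 seconds with 4 errors:
print(round(accuracy_percent(110, 4), 1))  # 96.4
print(round(wcpm(110, 4, 60), 1))          # 106.0
```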
Curriculum-based measurement (CBM) is a well-researched and widely discussed tool for making a variety of key school-based decisions, including eligibility for special services, monitoring students' progress, developing and modifying academic interventions, and evaluating program effectiveness. However, there is little evidence that CBM is being used by significant numbers of teachers or school psychologists. This article presents an overview of CBM and a discussion of the general lack of implementation by practitioners. Six case studies illustrate the transition of one teacher from a skeptical opponent of CBM to a strong advocate, who incorporates CBM into her instructional program for low-skilled readers.