Review of Research

Aligning Theory and Assessment of Reading Fluency: Automaticity, Prosody, and Definitions of Fluency

Melanie R. Kuhn, Boston University
Paula J. Schwanenflugel, The University of Georgia
Elizabeth B. Meisinger, The University of Memphis

Reading Research Quarterly, 45(2), pp. 232–253 • dx.doi.org/10.1598/RRQ.45.2.4 • © 2010 International Reading Association

Abstract
Over the past decade, fluent reading has come to be seen as a central component of skilled reading and a driving force
in the literacy curriculum. However, much of this focus has centered on a relatively narrow definition of reading fluency,
one that emphasizes automatic word recognition. This article attempts to expand this understanding by synthesizing sev-
eral key aspects of research on reading fluency, including theoretical perspectives surrounding automaticity and prosody.
It examines four major definitions of reading fluency and their relationship to accuracy, automaticity, and prosody. A
proposed definition is presented. Finally, the implications of these definitions for current assessment and instruction are
considered along with suggestions for reenvisioning fluency’s role within literacy curriculum.
Over the past decade, the field of literacy educa-
tion has seen a major shift in fluency’s role in
the literacy curriculum, moving from a rarely
encountered instructional component to one that is of-
ten responsible for driving major instructional decisions
(e.g., Riedel, 2007; Schilling, Carlisle, Scott, & Zeng,
2007). This shift is due, in part, to the identification
of fluency as one of the areas reviewed by the National
Reading Panel (National Institute of Child Health and
Human Development [NICHD], 2000). It also results
from a broader reconsideration of the role of oral read-
ing in the development of skilled reading (e.g., Rasinski,
2006; Reutzel, Fawson, & Smith, 2008). The recogni-
tion of the importance of fluency that has emerged as
part of our developing understanding of the construct
has led to a corresponding emphasis on fluency assess-
ment and instruction within the literacy curriculum
(e.g., Pikulski & Chard, 2005).
Most literacy educators consider fluency to be a crit-
ical component of reading development (e.g., Rasinski,
Blachowicz, & Lems, 2006; Samuels & Farstrup, 2006).
However, the current implementation of fluency in-
struction in many classrooms is often driven by assess-
ments that build upon an incomplete conceptualization
of the construct and can lead to both inappropriate in-
struction and a serious misconception of this essential
characteristic of skilled reading. Further, despite the
significant amount of attention the construct of reading
fluency has received recently, there are still a number
of questions surrounding our understanding of what
constitutes fluency, its role in the reading process, and
how its assessment and instruction fit into the literacy
curriculum. We plan to use the opportunity presented
in this article to synthesize several key aspects of the
research surrounding reading fluency, from theoretical
perspectives to the role evaluation plays in determin-
ing practice, with an emphasis on the work that has
occurred since the National Reading Panel’s (NICHD,
2000) report.
Although there are a number of definitions of read-
ing fluency, each of which places varying emphasis on
its components, there seems to be a growing consen-
sus that accuracy, automaticity, and prosody all make
a contribution to the construct (e.g., Hudson, Pullen,
Lane, & Torgesen, 2009; Rasinski, Reutzel, Chard, &
Linan-Thompson, in press). Yet the way in which these
components are conceptualized, their role in reading
development, and their function in reading compre-
hension have a significant influence on how they are
taught and assessed. In particular, we plan to consider
automaticity and prosody in greater detail. And, while
we do not focus on the development of accurate word
recognition, per se, we address accuracy as part of the
broader discussion of both automaticity and fluency as-
sessment. This decision was made in part to move away
from the view that reading fluency results from an im-
provement in the ability of students to recognize words
and their component elements with increasing rapidity.
Instead, we refer the reader to elegant discussions of the
development of reading accuracy by Chall (1996), Ehri
(1995), and Perfetti (1992).
Our discussion is divided into several parts; the
first provides theoretical perspectives on reading flu-
ency, particularly the role of automaticity and prosody
in fluency. We then consider four definitions of fluent
reading; each definition places differing emphasis on
fluency’s component parts as well as on the role fluency
plays in the reading curriculum. As part of this section,
we also present our own conception of what consti-
tutes fluent reading. Next, we explore the relationship
between certain conceptualizations of reading fluency,
dominant assessments, and current practice. Finally,
we consider the implications of these definitions for
assessment and instruction and make suggestions for
incorporating a broader understanding of the goals and
purposes of reading fluency within a reenvisioned lit-
eracy curriculum.
Theoretical Perspectives on Reading
Fluency
Automaticity
Automatic word recognition is central to the construct
of fluency and fluency’s role in the comprehension of
text (e.g., Samuels, 2004, 2006). But what are the quali-
ties that make for automaticity as it relates to reading
fluency? According to Logan (1997; see also Moors & De Houwer, 2006), processes are considered to be auto-
matic when they possess four properties: speed, effort-
lessness, autonomy, and lack of conscious awareness.
These properties can be considered together or sepa-
rately when determining whether a skill is automatized
(Moors & De Houwer, 2006).
The first of these properties is speed, which is
thought to emerge concurrently with accuracy as learn-
ers engage in practice (Logan, 1988). As automaticity
develops, whether in terms of reading, perceptual-mo-
tor activities, or another skilled task, the learner’s per-
formance not only becomes accurate, it gets faster.
However, this increase in speed is not limitless. Rather,
the learning curve for these tasks follows what is known
as the power law; this “states that reaction time decreas-
es as a function of practice until some irreducible limit
is reached. Speed increases throughout practice, but
the gains are largest early on and diminish with further
practice” (Logan, 1997, p. 123).
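In its standard mathematical form (our gloss; Logan states the law verbally in the passage quoted above), the power law of practice can be written as

$$RT(N) = a + b\,N^{-c}, \qquad c > 0,$$

where $RT(N)$ is the time needed to perform the task after $N$ practice trials, $a$ is the irreducible limit, $b$ is the amount of speed-up available at the outset, and $c$ governs how quickly the gains diminish with continued practice.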
In terms of connected text, the power law can be
seen in Hasbrouck and Tindal’s (2006) oral reading flu-
ency norms; for example, between winter and spring of the first-grade year, students at the 50th percentile increase their reading rate by approximately 30 correct words per minute, whereas their peers in the eighth grade gain only 18 correct words per minute over the entire school year, and the gains for adult skilled readers, who have reached asymptote, are infinitesimal.
The second attribute of automaticity is effortless-
ness (Logan, 1997). This refers to the sense of ease with
which a task is performed and to the ability to carry
out a second task while carrying out the first, automatic
one. If a person is able to accomplish two tasks at once,
then at least one of those tasks is, by necessity, auto-
matic. In terms of fluency, effortlessness can be seen in
two ways. First, fluent readers lack a sense of struggle
in recognizing most of the words they encounter in text.
This effortlessness in word recognition is derived, in
part, from unitization, a process that involves collaps-
ing some of the sequential steps used to identify words
(Cunningham, Healy, Kanengiser, Chizzick, & Willitts,
1988). Slow, algorithmic sequential word identification
processes are seemingly replaced by a shift toward di-
rect single-step retrieval of larger units (such as words
and phrases) in long-term memory. These retrievals essentially outpace the slower algorithmic word identification processes because they can be completed more
quickly (Logan, 1988). Second, most fluent readers not
only decode text, they also simultaneously comprehend
what they are reading. Inefficient word recognition
hampers comprehension and takes up precious cogni-
tive resources that should be used for understanding.
With automatization of lower level processes, children
can shift their attention from lower level skills to higher
level, integrative aspects of reading such as reading flu-
ently with comprehension. Disfluent readers, on the
other hand, are unable to integrate these lower level
skills with higher level ones, primarily because of the
effort they need to expend on word recognition (e.g.,
LaBerge & Samuels, 1974; Samuels, 2006).
In addition to rate and effortlessness, automatic pro-
cesses are also autonomous; that is, they occur with-
out intention, beginning and running to completion
independent of the direction or intent of the person
undertaking the act (Logan, 1997). In contrast, a non-
autonomous process is deliberate, allowing an individu-
al to maintain control over the act and to decide whether it occurs. In the case of reading, fluent readers have
little choice but to recognize words as they encounter
them whereas beginning readers do not find reading
to be an obligatory act. For example, fluent readers of-
ten find themselves inadvertently reading the text that
runs along the bottom of a news program, although
they are eventually able to use their available cognitive
resources to inhibit it. Disfluent readers, on the other
hand, are either unable to process the text at all or may
find their attentional resources excessively preoccupied
by it (Schwanenflugel & Ruston, 2008). However, au-
tonomous processing of words emerges early in the
development of reading, perhaps even before children
are truly fluent readers (Schwanenflugel, Morris, Kuhn,
Strauss, & Sieczko, 2008; Stanovich, Cunningham,
& West, 1981). Indeed, continued lack of autonomy
of lexical processing is an indicator that the child (or
adult) is not yet a fluent reader (Protopapas, Archonti,
& Skaloumbakas, 2007; Schwanenflugel et al., 2006).
The final characteristic of automaticity is a lack of
conscious awareness (Logan, 1997). Once lower level
word recognition skills become automatic, the con-
scious awareness of the subskills that comprise them
disappears. This lack of conscious awareness in word
recognition differentiates fluent from disfluent read-
ers. Disfluent readers tend to be keenly aware of the
steps they need to undertake to determine the words
in a text and find the process to be slow and deliberate
(e.g., Chall, 1996). However, because word recognition
has become automatic for fluent readers, they are able
to identify nearly every word they encounter without
conscious effort.
Although each of these four properties can be ap-
plied to automatic word recognition, it is important to
remember that these attributes develop on a continu-
um, as well as at different rates, so that readers who
have had “an intermediate amount of practice may be
somewhat fast, somewhat effortful, somewhat autono-
mous, and partially unconscious” (Logan, 1997, p. 128).
Further, as readers gain skill and are exposed to more
texts, automaticity may expand not just at the sublexi-
cal (i.e., phoneme and rime level) and word level, but
also at the phrasal and perhaps even the sentence level.
Developing Automatic Word Recognition
Although the aforementioned discussion indicates the
complexities of automaticity, the issues surrounding
the development of automatic word recognition are still
critical to reading fluency and therefore deserving of
our attention. So how does automatic word recognition
develop? The basic answer is that it occurs through con-
sistent practice (Logan, 1997; Samuels, 2004). However,
what that practice consists of and what it results in are
central determinants in our current understanding of
both reading fluency and its implementation in the
classroom.
When discussing word recognition automatic-
ity, we are talking about comparatively instantaneous
identification. Such rapid word recognition is impor-
tant because readers need to integrate information from
multiple sources (phonemic, semantic, phrasal, textual, and so on). However, because of the cognitive resourc-
es used by word recognition, beginning readers must
switch between these multiple sources rather than pro-
cess them in a unified manner. To move beyond this
serial processing and toward the autonomous word
recognition entailed by fluent reading, learners require
the opportunity for extensive practice in the reading of
connected text (Kuhn et al., 2006; Schwanenflugel et
al., 2009).
According to Logan (1997), every encounter with
a task lays down a trace, or instance representation, in
memory. As the number of encounters, or instances,
increases, learners begin to build their knowledge base.
When individuals first encounter a representation, their
performance is based on an algorithmic computation
that involves thinking or reasoning. However, as their
encounters with a particular task increase, their knowl-
edge base becomes more extensive and their retrievals
begin to be based on past instances, or memories of past
solutions, rather than on the need to formulate a solution
based on slow algorithmic processes (see also, Rawson,
2007; Rawson & Middleton, 2009). When this knowl-
edge base is substantial enough, learners’ performance
can be based entirely on memory retrieval. However,
although it is most likely that automaticity will occur af-
ter numerous exposures to a task, it is conceivable that
it can occur after only one encounter. And, adding one
trace to the initial encounter, or even the first 10 en-
counters, will have greater impact on the reader’s abil-
ity to retrieve that trace quickly, or from memory, than
does adding one trace to the one hundredth encounter
(Logan, 1992; Logan, Taylor, & Etherton, 1999). This
notion has important implications for reading practice.
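To make this retrieval-race account concrete, the brief simulation below (our illustration, not a model reported by Logan; the latency distributions and parameter values are arbitrary assumptions) treats each encounter with a word as a race between a fixed algorithmic route and the fastest of the stored instance traces. Mean latencies drop steeply over the first few encounters and then level off, mirroring the power law discussed earlier.

```python
import random

def trial_latency(n_instances, algorithm_time=2.0):
    """One recognition trial as a race: the slow algorithmic route competes
    with the fastest retrieval among n stored instance traces (sketch)."""
    if n_instances == 0:
        return algorithm_time
    retrievals = [max(0.05, random.gauss(1.0, 0.3)) for _ in range(n_instances)]
    return min(algorithm_time, min(retrievals))

# Average latency after increasing amounts of practice: adding a trace to the
# first few encounters buys far more speed than adding one to the hundredth.
for n in (0, 1, 2, 5, 10, 50, 100):
    mean_rt = sum(trial_latency(n) for _ in range(5000)) / 5000
    print(f"{n:3d} prior instances -> mean latency {mean_rt:.2f} s")
```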
When reading, learners encounter letters, words, and phrases and construct higher order propositional structures, and each reading leaves a trace at each level of
representation (Logan, 1997). Although it is true that
the number of times individuals encounter instances
at these different levels of representation varies fairly
dramatically (for example, readers encounter letters and even high-frequency words far more often than they do a particular higher order structure), there will still be
some benefit for readers from each encounter at every
level.
We contend that this argument has important
implications for practice. To begin with, readers can
benefit from both repetition (e.g., Levy, 2001; Logan,
1997; Samuels, 2006) and the wide reading of texts
(e.g., Schwanenflugel & Ruston, 2008; Stanovich,
1986). Repetition of text allows for the kind of consis-
tent practice that is important to readers. And, drawing
from both the Samuels and the Logan theories of auto-
maticity, it allows for the deepening of traces (Logan,
1997) and the freeing up of attention (Samuels, 2006).
Further, Logan pointed out that, in addition to devel-
oping automatic word recognition, repeated readings
allow learners to establish prosody, identify appropri-
ate phrasing, and determine meaning. Thus, difficulties
encountered in a text can be successfully solved as the
text is read repeatedly and, as a result, similar difficul-
ties are likely to be more readily solved when encoun-
tered in another text.
Another important implication concerns the pow-
er law. Because most of the gains made with repeated
readings, both in terms of accuracy and automatic-
ity, occur between the third and the fifth repetition
(e.g., O’Shea, Sindelar, & O’Shea, 1987; Reutzel, 2003;
Rawson & Middleton, 2009), the power law mentioned
earlier provides a reasonable explanation for decreasing
gains across continued repetitions. Indeed, after some
minimal amount of practice, readers seem to rely on di-
rect retrieval of text meanings rather than on slow algo-
rithmic processing of each word (Rawson & Middleton,
2009).
However, it is important to note that Logan (1997)
also argued that some variability in practice can benefit
learners: “Automaticity transfers to similar stimuli, so
there should be some benefit in exposing readers to dif-
ferent materials” (p. 139). Wide reading provides oppor-
tunities for just such transfer, and research conducted on
students who were asked to read a wide variety of mate-
rials with adequate support (e.g., Kuhn, 2005; Kuhn et
al., 2006; Schwanenflugel et al., 2009; Schwebel, 2007)
indicates that their automaticity does improve. Because
there is a great deal of word overlap in the materials
used for beginning readers (e.g., Adams, 1990), it seems
likely that seeing words in multiple contexts improves
students’ recognition of those words (Mostow & Beck,
2005; Rashotte & Torgesen, 1985). However, the exact
balance of shared versus unique words needs to be de-
termined (e.g., Allington, 2009; Hiebert, 2006).
In addition to the sheer number of words that oc-
cur in multiple contexts, it might also help to have
students read across themes, so that when a new word
is encountered, there is a greater likelihood of it be-
ing seen within a different but supportive context (e.g.,
Logan, 1997). In this way, students are more likely to
build upon and have the opportunity to expand their
conceptual, as well as their orthographic, knowledge.
We consider this understanding to be a complement to the arguments presented by Stanovich (1986) in his article describing the Matthew Effects in reading and demonstrated in research we recently conducted with several colleagues (Kuhn et al., 2006; Schwanenflugel
et al., 2009). Not only do readers who read widely have
more accurate and automatic word recognition, but they
also have a more extensive vocabulary and encounter a
broader range of concepts than do their peers who read
in a more limited way (Stanovich, 1986). Continued
practice on the same words, or same texts, beyond a
certain point may not only be redundant, it may have
the perverse effect of fixing students’ attentional focus
on the lower level aspects of text rather than shifting
their focus toward practicing the integration of higher
level skills. Practice through wide reading would trans-
late into greater fluency, leading to further increases in
readers’ ease and comfort with texts.
We wish to conclude this discussion with a reiteration of what we consider to be a central tenet of automaticity: it is important to stress that, whether de-
veloped through repetition or the wide reading of texts,
automaticity occurs on multiple levels and connects to
comprehension in multiple ways (e.g., Samuels, 2004;
Logan, 1997). We also want to stress that it is this inter-
action, occurring between various levels of processing,
rather than simple speeded word recognition, which is
central to a reader’s construction of meaning from text
(Bredekamp & Pikulski, 2008; Fuchs, Fuchs, Hosp,
& Jenkins, 2001; Hudson et al., 2009; Wolf & Katzir-
Cohn, 2001).
Prosody
Although automaticity is central to children’s develop-
ment as fluent readers, it does not account for all as-
pects of the construct. A second critical component
of reading fluency is the ability to read with prosody;
that is, with appropriate expression or intonation cou-
pled with phrasing that allows for the maintenance of
meaning (Cowie, Douglas-Cowie, & Wichmann, 2002;
Miller & Schwanenflugel, 2006, 2008; Schwanenflugel,
Hamilton, Kuhn, Wisenbaker, & Stahl, 2004).
However, the import of developing expressiveness
in reading, as children proceed from reading in a stacca-
to, flat word-by-word manner to something that sounds
more or less like everyday speech, is not entirely clear.
Is expressiveness merely an epiphenomenon that proceeds of its own accord with little impact on other aspects of reading, or is it some essential ingredient that
benefits (or perhaps enables) other reading processes?
Our question is the following: If the development of ex-
pressiveness is important, what about it is important
and what is it important for? If it is essential to read-
ing, we may, indeed, wish to prioritize prosody in our
instruction. If it is inessential or emerges without in-
struction, then we might decide not to. In recent years,
the evaluation of expressiveness in fluent reading has
become the focus of empirical research to address these
questions. In what we present here, we treat reading with expression and reading prosody as equivalent terms.
Prosody is the music of language. Indeed, some an-
thropologists have claimed that speech prosody served
as the protolinguistic base from which music itself may
have emerged (Simpson, Oliver, & Fragaszy, 2008).
Prosody captures the rises and falls of pitch, rhythm
and stress, the pausing, lengthening, and elision sur-
rounding certain words and phrases that is found in the
pull of linguistic communication (Hirschberg, 2002).
However, there are clear developments in children’s
understanding and use of prosody in their own speech
that are ongoing during the period in which children
are learning to read.
In this section, we begin by considering the spec-
trographic features measured to discern the qualities of
prosody and their import in the development of read-
ing prosody. We outline the psycholinguistic functions
of prosody. We consider the costs and benefits of vari-
ous ways of measuring prosody for reading fluency. We
then describe what we know about where prosody fits
in our conceptions of the development of reading skill.
Prosody Features
The first of these features is fundamental frequency (F0)
or, more simply, pitch. Pitch needs to be considered
relative to a speaker’s voice range and native language.
For example, young children with their high-pitched
voices may not have prosodic “room” to regulate pitch.
Language features, such as the tones in tone-bearing languages like Chinese, will affect measured pitch.
Declarative sentences or statements are usually sig-
naled by an initial rising and then falling pitch (called
pitch declination or, simply, declination). As sentenc-
es become longer, there is a general flattening out of
pitch (Ladd, 1984), so we can expect children to dis-
play smaller sentence-final declinations as they read
complex texts (Benjamin, Schwanenflugel, & Kuhn,
2009). Yes–no questions are usually marked by sus-
tained rising pitch, but this rising pitch is not obliga-
tory for all question types (Miller & Schwanenflugel,
2006). Further, children’s understanding of declarative
question prosody (e.g., He ate a bologna sandwich?) is still
under development until around age 11 (Patel & Grigos,
2006). Consequently, we should not tell children to use
ascending pitch at each and every question mark, as teachers are sometimes advised to do (Hudson, Lane,
& Pullen, 2005).
Pitch can convey pragmatic information as well. A
plateau contour can convey a sense of boredom or reci-
tation effect. A continuation rise can indicate continua-
tion or uncertainty (Hirschberg, 2002). Neither pattern
in children’s readings should necessarily be taken as
indicating a lack of fluency. As children learn to read
with good prosody, they come to display an intonation-
al pitch contour increasingly similar to the one used by
adults when they read. In our studies, this has been a
very consistent pattern associated with good fluency
(Schwanenflugel et al., 2004; Miller & Schwanenflugel,
2006, 2008).
Another prosodic feature is duration. Vowels in
stressed words are usually longer than in unstressed
words (Temperley, 2009) and even longer in phrase-
final position. Stressed syllables tend to also have
greater intensity, or volume (Cooper & Paccia-Cooper,
1980). Duration has to be taken in context with the
speaker’s overall speaking rate. Thus, faster readers
will have shorter segment durations than slower read-
ers. However, syllable duration will become shorter as
speakers proceed over long sentences (Ladd, 1984).
This means that a child who has been told to read
quickly will show less evidence of stress marking and
phrase-final lengthening. Children will not be able to
read both very quickly and with proper prosody, so di-
recting them to read passages quickly and accurately
will have the perverse effect of having them read less
expressively.
Stress is a property in speaking that “makes one
syllable in a word more prominent than its neighbors”
(Himmelmann & Ladd, 2008, p. 248). Knowledge of
a word’s stress seems to be retrieved automatically
when a word is read (Gutiérrez-Palma & Palma-Reyes,
2008). Function or “closed class” words tend to be un-
stressed. However, English favors a regular distribution
of stressed and unstressed syllables, and this will cause
English speakers to add or move stress to keep up a
regular stress pattern (e.g., I gave it to the postman) and
to avoid stress clashes (e.g., She turned thirteen versus
thirteen donuts; Temperley, 2009). Stress can be used
to distinguish grammatical form class in English (e.g.,
permit [noun] versus permit [verb]) with nouns being
more likely to be stressed on the first syllable than verbs
are (Kelly & Bock, 1988). Each language, however, fol-
lows its own rhythmic pattern. Sensitivity to stress
patterns is related to the development of skilled read-
ing (de Bree, Wijnen, & Zonneveld, 2006; Goswami et
al., 2002; Jarmulowicz, Taran, & Hay, 2007; Orsolini,
Fanari, Tosi, de Nigris, & Carrier, 2006; Thomson,
Fryer, Maltby, & Goswami, 2006; Whalley & Hansen,
2006; Wood, 2006). So, in monitoring for prosody in
children’s reading, we should look for the familiar stress
patterns associated with the language that they speak,
keeping in mind that nonnative speakers are unlikely to
show nativelike use of stress (Guion, Harada, & Clark,
2004).
Pausing is noted by a spectrographic silence in oral
reading beyond that invoked by some consonant com-
binations. Slow speakers make more pauses, and people
differ considerably as to whether they make sentence-
internal pauses in speech (Eisler, 1968; Krivokapić,
2007). Regardless, intrasentential pauses tend to be
shorter than intersentential ones (Cooper & Paccia-
Cooper, 1980). Pauses tend to be larger both preced-
ing and following syntactically complex phrases and as
information load increases (Cooper & Paccia-Cooper,
1980; Ferreira, 1991; Zvonik & Cummins, 2003). Still,
we should not expect children to pause midsentence
simply because they have completed a complex noun
phrase. Neither should we consider a pause in mid-
sentence a reading error in long, complex sentences.
Our work has suggested that most midsentence pauses
among young readers are related to decoding abilities
(Miller & Schwanenflugel, 2008).
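As a concrete, deliberately simplified illustration of how two of the features just described, the F0 contour and pausing, can be extracted from a recording of oral reading, the sketch below uses the open-source librosa library. The file name, the F0 search range, and the silence and pause thresholds are illustrative assumptions rather than values from our studies, and serious spectrographic work would involve considerably more care.

```python
import librosa
import numpy as np

# Load an oral reading recording (illustrative file name).
y, sr = librosa.load("child_oral_reading.wav", sr=None)

# Fundamental frequency (F0) track; a wide candidate range is used here
# because children's voices sit higher than adults'.
f0, voiced_flag, _ = librosa.pyin(y, fmin=80, fmax=500, sr=sr)
print("median F0 of voiced frames: %.1f Hz" % np.nanmedian(f0))

# Rough pause detection: frames whose RMS energy falls below a threshold,
# with runs of silence longer than roughly 250 ms counted as pauses.
hop = 512
rms = librosa.feature.rms(y=y, hop_length=hop)[0]
silent = rms < 0.01 * rms.max()  # assumed silence threshold
times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop)

pause_count, run_start = 0, None
for t, is_silent in zip(times, silent):
    if is_silent and run_start is None:
        run_start = t
    elif not is_silent and run_start is not None:
        if t - run_start > 0.25:
            pause_count += 1
        run_start = None
print("pauses longer than 250 ms:", pause_count)
```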
What Are the Psycholinguistic Functions
of Prosody?
Prosody provides a variety of natural breakpoints in
continuous speech. These intonational units provide
distributional “edges” that allow the listener, includ-
ing children, to break up continuous speech for pars-
ing (Ramus, Hauser, Miller, Morris, & Mehler, 2000).
Words at the right edges of these units are likely to pos-
sess boundary tones that indicate the end of the partic-
ular unit, typically word-final lengthening, declination,
or pausing. If speech has these boundary markers in-
serted incorrectly, it is difficult both to understand and
to parse (Sanderman & Collier, 1997; Shukla, Nespor,
& Mehler, 2007); it is possible that the intermittent
pausing found in the disfluent reading of young chil-
dren may have this effect also, but this has yet to be
determined.
As indicated by Wheeldon and Lahiri (1997), “pro-
sodic constituents are derived from syntactic constit-
uents but are not necessarily isomorphic to them” (p. 357). Thus, syntactic bracketing (e.g., [[[The girl]_NP [[I]_NP [[danced with]_V [at the party]_PP]_VP]_S]_NP [tripped]_VP]_S)
is considerably richer than the bracketing that prosody
imposes. So one cannot assume that the positive effects
of syntactic bracketing of text and greater syntactic
awareness on children’s comprehension (e.g., Mokhtari
& Thompson, 2006; Young & Bowers, 1995) will be the
same as those found for reading prosody.
One of the essential functions of prosody is to pro-
vide a basic cognitive skeleton that allows one to hold
an auditory sequence in working memory (Frazier,
Carlson, & Clifton, 2006; Swets, Desmet, Hambrick,
& Ferreira, 2007). By cognitively bracketing key in-
formational units such as phrases, prosody assists in
maintaining an utterance in working memory until a
more complete semantic analysis can be carried out
(Koriat, Greenberg, & Kreiner, 2002). Although there
is no evidence currently that the development of ap-
propriate reading prosody allows this to occur, it has
been shown that people have better memory for poetic
versions of texts that have enhanced prosodic features
(Goldman, Meyerson, & Coté, 2006). It is possible
that the construction of a good prosodic reading (com-
pared with an inappropriate rendering) might improve
comprehension.
Prosody can also serve to disambiguate semantically
and syntactically ambiguous sentences. Because speak-
ers rarely recognize their own ambiguity, they don’t use
prosody reliably to disambiguate their own utterances
(Allbritton, McKoon, & R atcliff, 1996; Beach, 1991;
Snedeker & Trueswell, 2003), but listeners use it when
it’s available (Snedeker & Trueswell, 2003). Children
have a fragile awareness of how prosody relates to dis-
ambiguation (Snedeker & Yuan, 2008). Consequently,
we should not expect children to use this type of disam-
biguating prosody in their oral readings.
Prosody carries more than just syntactic phras-
ing, however. Different prosodic patterns convey dif-
ferent emotions (Banse & Scherer, 1996; Juslin &
Laukka, 2003). For example, happiness is characterized
by fast speech rate, high, rising pitch and variability,
and fast voice onsets; sadness displays nearly the opposite pattern. Uncertainty is signaled by a sustained rise in pitch (Hirschberg, 2002). However, during the period when
children are developing fluency, their concomitant un-
derstanding of emotional prosody is still not fully adult-
like (Fujiki, Spackman, Brinton, & Illig, 2008; Wells &
Peppe, 2003), so we should not expect them to convey
these attitudes fully in their readings.
Prosody also carries discourse information. Higher,
more variable pitch tones and longer pauses are typi-
cally seen at higher levels in the discourse hierarchy,
for example, at topic shifts and the initial position in
a paragraph (Noordman, Dassen, Swerts, & Terken,
1999; Smith, 2004). High pitch tones are used to introduce new topics, and low pitch tones are used to indicate that the topical anaphor is in short-term memory (Wennerstrom, 2001). Pitch and punctuated stress are
also used to dictate informational focus and contrast
(Carlson, Dickey, Frazier, & Clifton, 2009; Couper-
Kuhlen & Selting, 1996). Informationally related ut-
terances are distinguished by short pause durations
between them and faster rates (den Ouden, Noordman,
& Terken, 2009). However, again, children do not have a full understanding of the import of these discourse elements of prosody until possibly adolescence
(Chen, 1998; Wells & Peppe, 2003), so it is unclear
whether they will know to convey this information in
their oral readings. To date, discourse features have
been largely ignored in the study of the development of
reading prosody. We currently do not know whether or
when children come to use these features in their oral
readings as they become fluent readers.
In sum, we see that a tremendous amount of infor-
mation is available for communication in the prosody of
sophisticated readers. However, during the same period
when children are learning to read fluently they are also
developing a general understanding of the various uses
of prosody. At this point, research is unclear on which
attributes could serve as valid, reliable assessments
of children’s ability to read fluently. Further, most of the studies described in this article regarding prosody fo-
cus on English speakers. (Only 26% of the studies used
languages other than English and most of these were
Germanic languages.) Prosody is not identical across
languages, so it is important to understand the limita-
tions of current research with regard to linguistic di-
versity, including bilingual children.
Measuring Prosody: Direct Measures
Versus Ratings
The sine qua non of reading fluency is that children
read in a manner that approximates speech. Yet reading
prosody is not identical to speech prosody. Even in flu-
ent readers, reading prosody has fewer end-of-sentences
rises, fewer very low pitch ranges (as for parenthetical
speech), and possesses generally less variability than
spontaneous speech (Esser & Polomski, 1988). Overall,
adults pause less in read speech, show more consistent
stress placement and generally cleaner speech (Howell
& Kadi-Hanifi, 1991) than in spontaneous speech.
Minor syntactic boundaries are less likely to be marked
in read speech than spontaneous speech (Blaauw,
1994). Indeed, it may be that only professionals, such
as television newscasters, truly read in a way that ap-
proximates speech (Esser & Polomski, 1988), so this
expectation is likely a bar set too high for determining
the achievement of reading fluency. However, where to
set the bar and how to set it empirically is the issue in
question.
There are two basic ways to measure reading pros-
ody: rating scales and spectrographic measures. In the
classroom, rating scales are relied upon for evaluation,
and the NAEP Oral Reading Fluency Scale is the most com-
mon measure (Pinnell et al., 1995). This 4-point scale
distinguishes reading that sounds primarily word-by-
word from reading that occurs in “larger, meaningful
phrase groups” (Pinnell et al., 1995, p. 15). Another
popular rating scale that focuses more on the prosodic
characteristics of oral reading is the Multidimensional
Fluency Scale (Rasinski, Rikli, & Johnston, 2009; Zutell
& Rasinski, 1991). This scale consists of four separate
4-point subscales that distinguish phrasing and ex-
pression, smoothness and accuracy, and pacing. These
subscales are then summed to represent children’s overall
ratings of fluency. Rasinski et al. (2009) have reported
inter-rater agreement within 2 points to be 86%. More
recently, however, Klauda and Guthrie (2008) added to
this scale by including a 4-point rating scale that dis-
tinguished passage-level expressiveness, with a 1 indicating that the child read with no mood or tone and a 4 indicating that the child read “whole or nearly the
whole passage in an expressive manner that created a
mood or tone that seemed in accord with the author’s
intention” (p. 314).
Unfortunately, even after collapsing two points on
the scale, the researchers were only able to achieve 79%
agreement, so it is unclear whether this revised scale
will have sufficient reliability to add a degree of preci-
sion to fluency ratings. Whether rating scales will ever
have the precision necessary for them to add meaning-
fully to our measurement of reading fluency beyond
text reading speed and accuracy (see also, Fuchs et al.,
2001) is a concern, but it is an avenue that needs to
be pursued. Still, we believe that these more complex
scales are the general direction in which rating scales of
prosody need to go.
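For clarity, the reliability figure cited above is computed as "agreement within 2 points" on the summed scale; the minimal sketch below shows the calculation for a pair of raters, using hypothetical scores rather than data from any of the studies discussed.

```python
# Hypothetical summed Multidimensional Fluency Scale scores (range 4-16)
# assigned to the same eight readings by two raters.
rater_a = [12, 9, 15, 7, 11, 14, 10, 8]
rater_b = [13, 9, 12, 8, 11, 16, 9, 8]

within_two = sum(abs(a - b) <= 2 for a, b in zip(rater_a, rater_b))
print(f"agreement within 2 points: {within_two / len(rater_a):.0%}")
```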
In our own work on reading prosody, we have al-
ways employed spectrographic measures. We do recog-
nize that the technical skills related to spectrographic
measurement will be beyond the needs of most teach-
ers, and perhaps reading specialists, although we note
that these tools are becoming increasingly easy to use.
What is needed right now is research that allows us to
relate spectrographic measures directly to various rat-
ing schemes so that informationally valid and reliable
ratings of reading prosody with curricular utility can
be created. We are certain that these measures will need
to include some notation of the complexity of the text
being read (Benjamin et al., 2009) as well as their gen-
eral discourse features because it is tempting to assign
a child a NAEP rating of 4, say, on a simple passage
while the same child might receive only a 1 on a more
complex one. Benjamin and Schwanenflugel (2009)
have shown that prosody measured from simple pas-
sages is simply less predictive of reading skill than is
prosody measured from passages that press the upward
limits of children’s skills. We are also encouraged that
the need to revise prosody rating scales might be pre-
empted by recent advances in artificial intelligence tools
that may allow us to automate the process of identifying
the adultlike extent of children’s reading expressiveness
(Mostow & Duong, 2009).
Where Does Reading Prosody Fit in Our
Conceptions of Development of Reading
Skill?
Prosody is at the heart of the development of reading
skill. Prosody is likely another aspect of the fundamen-
tal phonological representations that drive much of the
development of early reading skill (de Bree et al., 2006;
Goswami et al., 2002; Surányi et al., 2009). However,
because there are distinct prosody features at lexical,
phrasal, sentence, and discourse levels, these may only
be partially related to phonological codes that connect
to basic phonological (segmental) awareness (Whalley
& Hansen, 2006).
Prosody is most certainly related to the develop-
ment of reading fluency. As children become more flu-
ent readers, they also make shorter and less variable
intersentential pauses, shorter and less frequent intra-
sentential pauses, larger pitch declinations, and display
a more adultlike intonation contour (Clay & Imlach,
1971; Cowie et al., 2002; Miller & Schwanenflugel,
2006, 2008). These changes in reading prosody be-
tween first and second grade are predictive longitu-
dinally of later reading fluency, beyond measures of word reading efficiency and text reading rate (Miller & Schwanenflugel, 2008). Pauses seem to be more connected to word reading skill than to fluency itself, but as children read more complex passages, pauses in fluent
readers will more systematically mark the greater syn-
tactic complexity and sheer length of their sentences
that accompany such texts (Benjamin et al., 2009).
Reading prosody also seems to be related to read-
ing comprehension. Our own work has found varying
patterns regarding the relationship between reading
prosody and reading comprehension, some of which
seem to be attributable to passage characteristics. Measurements of prosody from simple texts (relative to the absolute levels of reading skills of the children) do not seem to contribute much to our ability to predict reading comprehension skill (Schwanenflugel et al.,
2004). Measurements of prosody from more complex
texts do predict reading comprehension skills beyond
that accounted for by word reading efficiency or text
reading rate measures (Benjamin et al., 2009; Klauda
& Guthrie, 2008; Miller & Schwanenflugel, 2006).
Indeed, we have found that the reading prosody of sim-
ple texts in first grade predicts children’s reading com-
prehension of more complex texts two years later
(Miller & Schwanenflugel, 2008). Thus, it appears that
having appropriate reading prosody is independently
related to good reading comprehension.
At present we do not know the directionality of this
relationship. That is, does reading with good prosody
help the reader comprehend what is being read or does
comprehending while reading simply promote good
reading prosody? Or is the relationship between read-
ing prosody and reading comprehension reciprocal?
Currently, we know of two studies that have addressed
the directionality issue, and they have come to differ-
ent conclusions using different methods. Using second
and third graders as participants, Schwanenflugel et al.
(2004) evaluated two structural equation models im-
plying different directionality. In one, reading prosody
served as a partial mediator with word reading efficien-
cy to predict reading comprehension score outcomes. In
the other, reading comprehension and word reading ef-
ficiency predicted reading prosody as outcomes. In that
study, only the first model (i.e., that reading prosody
predicted reading comprehension) fit the data. Klauda
and Guthrie (2008) examined the issue of whether
changes in ratings of syntactic prosody were recipro-
cally related to changes in reading comprehension lon-
gitudinally beyond word reading speed over the course
of the fifth grade year. They found evidence for reci-
procity. Whether the differences in outcomes between
these studies could be attributed to differences in the
age of the children or the particulars of the methods is
not clear. Directionality and causality between reading
prosody and comprehension remain to be determined.
Finally, we hypothesize that the development of oral reading prosody will be related to the movement toward what psycholinguists have called “implicit
prosody” (Fodor, 2002), which may develop as children
make the transition from oral reading to silent read-
ing (McCallum, Sharp, Bell, & George, 2004; Prior &
Welling, 2001). According to Fodor (2002), a default
prosodic contour is projected onto the reading materi-
als during silent reading. Several findings support the
existence of this implicit prosody during silent reading.
Among them, in silent reading, adults read words with
multiple stressed syllables more slowly than words with
a single stressed syllable, even though syllable structure
is not actually needed (Ashby, 2006; Ashby & Clifton,
2005). Further, adult readers appear to dwell on com-
mas during silent reading (Hirotani, Frazier, & Rayner,
2006), particularly when they are needed to disambigu-
ate syntactically ambiguous sentences (Kerkhofs, Vonk,
Schriefers, & Chwilla, 2008). Event-related brain po-
tentials seem to be linked to focus during silent read-
ing in adults (Stolterfoht, Friederici, Alter, & Steube,
2007). Whether there will be a one-to-one relationship
between implicit prosody and all the various features
found in oral reading prosody remains to be seen.
We can already identify some places where there
may be differences between skilled and novice oral
readers. Among them, most adults do not pause on each
and every comma when they read aloud (Chafe, 1988);
pausing on commas is a feature of younger, generally less skilled readers (Miller & Schwanenflugel, 2006).
Further, we need to ensure that our theories regarding
the development of implicit prosody do not exceed
what we know about the developing status of children’s
understanding of prosody, particularly during the pe-
riod that they are learning to read, which for some
features extends until age 18 or so (Plante, Holland, &
Schmithorst, 2006).
Still, the implicit prosody hypothesis is an intrigu-
ing one that needs further research. Moreover, it will
be important to understand more about how prosodic
reading is acquired so we can determine how it may
promote, or perhaps enable, the development of implicit
prosody in children’s silent reading. One
study is particularly intriguing with respect to this.
Kleiman, Winograd, and Humphrey (1979) showed that
below-average fourth grade readers had difficulty mark-
ing phrase boundaries in silent reading compared with
sentences that were presented in both spoken and writ-
ten form. This is suggestive, at least, that poor readers
may be having difficulties generating implicit prosody
during silent reading to support their comprehension.
Of course, this is not the only explanation for these
findings, but they fit the pattern anticipated by this
view. Similarly, Rasinski et al. (2009) have found that
oral reading prosody ratings using the multidimension-
al fluency scoring rubric bear a substantial relationship
to silent reading comprehension scores.
In the beginning of our discussion of prosody, we
asked what the role of reading prosody was in the devel-
opment of reading fluency. We asked whether expres-
siveness is merely an epiphenomenon which proceeds
with little impact on other aspects of reading and, if
not, then what is prosody used for. We believe we can
say rather conclusively at this point that good reading
prosody emerges as children develop efficient word and
text oral reading skills. To connect to our earlier discus-
sion regarding automaticity, we can say that children
who develop efficient word and text reading skills seem
to use the newly freed up resources gained from these
automated skills and shift their attention to the integra-
tion of speech prosody with integrative reading skills.
Thus, prosody seems to be integrally related to the de-
velopment of good oral reading fluency and, indeed,
may be a marker of it. If so, prosody should be mea-
sured whenever reading fluency is measured.
We also suggested that good reading prosody may
support reading comprehension, but the directionality
of this has yet to be determined. The directionality issue
is important so that we can determine whether a par-
ticular instructional emphasis on prosody is necessary.
If acquiring good reading prosody supports improved
comprehension (as we think evidence is beginning to
support), then we should emphasize prosody in our in-
struction along with these other skills. If, instead, ac-
quiring good reading prosody is a reflection of efficient
decoding and comprehension skills alone, it may not
make much sense to focus children’s attention instruc-
tionally on developing newscaster-like oral reading be-
cause this by itself would have limited utility. Once we
are certain that developing good reading prosody has
causal value for improved reading comprehension, then
we should shift our research to considering better (and
worse) ways of integrating such instruction in our lit-
eracy practice.
Definitions
Having discussed the automaticity and prosody con-
structs surrounding reading fluency, we turn to the
multiple ways in which fluency is defined. This is not
an exclusively theoretical issue or simply a matter of
semantics. Because classroom instruction develops
around teachers’ perceived understanding of a con-
struct, the way in which they view certain aspects of
the reading process has a decisive role in their teaching
and assessment of those aspects. Further, these concep-
tualizations strongly affect learners’ understanding of
what reading is as well as what it means to be a reader.
It is also important to highlight the commonalities and
differences in these definitions while working toward a
more cohesive understanding of what fluency is, as well
as of what it is not. So, while many definitions of flu-
ency highlight the importance of accuracy, automaticity,
and prosody in relation to the comprehension of text
(e.g., Fuchs et al., 2001; NICHD, 2000; Rasinski et al.,
in press; Samuels, 2006; Torgesen & Hudson, 2006),
which of these elements they emphasize and the role
they are assigned in the development of skilled reading
vary widely.
Fluency as Accuracy and Automaticity
The first definition emphasizes accurate and automatic
word recognition and those components, such as pho-
nemic awareness and letter–sound correspondences,
which allow students to rapidly, and correctly, identify
words (Fletcher, Lyon, Fuchs, & Barnes, 2007; Good,
Kaminski, Simmons, & Kame’enui, 2001). As can be
seen in the earlier discussion of automaticity, there is
little dispute that accurate, automatic word recognition
is a critical component of fluent reading, or that pho-
nemic awareness, letter naming, or other components
contribute to the development and consolidation of
students’ word recognition (e.g., Ehri, 1995; NICHD,
2000). In fact, most fluency researchers (e.g., Rasinski
et al., 2006; Samuels & Farstrup, 2006) agree that ac-
curate and automatic word identification plays a central
role in fluent reading, and that components, such as
phonemic awareness and letter naming, are important
in the process of developing accuracy and automaticity
in their turn (e.g., Chall, 1996; Ehri, 1995).
What needs to be challenged, however, is the em-
phasis that is placed on accuracy and automaticity, to
some extent, simply because they are the most quan-
tifiable elements of fluency (Paris, 2008; Torgesen &
Hudson, 2006) and often at the expense of other as-
pects of fluent reading, such as phrasing, appropriate
pacing, stress, and emphasis (e.g., Kuhn & Stahl, 2003). Although these elements are central to fluent read-
ing, they are by no means the only elements critical
to the process. As a result of the focus on these elements over the past decade, to a large extent through the dominance of Dynamic Indicators of Basic Early Literacy Skills
(DIBELS; Good & Kaminski, 2002), and similar assess-
ments such as AIMSweb (Shinn & Shinn, 2002) and
curriculum-based measurements (CBM; Deno, 1985) in
the classroom, rate measures such as the DIBELS Oral
Reading Fluency have become privileged, driving the
literacy curriculum (e.g., Riedel, 2007; Samuels, 2007).
Given that this perspective presents a limited view of
fluency, it is essential that reading educators consider
a broader definition of the construct, one that places
weight on its less quantifiable elements.
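For readers less familiar with how such rate measures are scored, the sketch below shows the basic words-correct-per-minute calculation that underlies oral reading fluency scores of this kind; it is a generic illustration with made-up numbers, not the scoring rules of DIBELS or any other particular instrument.

```python
def words_correct_per_minute(words_attempted: int, errors: int,
                             seconds: float = 60.0) -> float:
    """Correct words read per minute from a timed oral reading sample."""
    return (words_attempted - errors) * 60.0 / seconds

# A reader who attempts 112 words in one minute with 7 errors scores 105 WCPM.
print(words_correct_per_minute(words_attempted=112, errors=7))
```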
Fluency as Prosody
The National Assessment of Educational Progress
(NAEP; Daane, Campbell, Grigg, Goodman, & Oranje,
2005; Pinnell et al., 1995), on the other hand, views
oral reading performance as an important indicator of
skilled reading. However, while it includes measure-
ments of accuracy and rate as part of its evaluation, it
parcels out fluency as a distinct component, defining
it as “phrasing, adherence to the author’s syntax, and
expressiveness” (Daane et al., 2005, p. v). The result of
this wording is that fluency becomes equated with most
working definitions of prosody (e.g., Kuhn & Stahl,
2003; Schreiber, 1991; Torgesen & Hudson, 2006).
At first, it was unclear why the authors of the NAEP
assessment would make this distinction, but an expla-
nation may be found in the historical context surround-
ing the measure. The 1992 NAEP was one of the first
large-scale evaluations of oral reading performance, un-
dertaken at a time when fluency was largely a neglected
component in the reading process (e.g., Allington, 1983;
Dowhower, 1991). In those few cases where fluency was
considered, it was primarily in terms of rate and accu-
racy and was generally measured as the number of cor-
rect words read in a minute (e.g., Kuhn & Stahl, 2003).
One of the goals of the original NAEP evaluation of oral
reading performance was “to describe those aspects of
oral reading that go beyond accuracy and rate” which
the authors felt “may have wide applicability for reading
educators” (Pinnell et al., 1995, p. 2). By designing the
Oral Reading Fluency Scale, the authors hoped to coun-
terbalance some of the overemphasis on rate and accu-
racy and to integrate oral language elements into the
discussions that surround oral reading performance.
What is interesting is that, when looking across a
range of discussions that have taken place around the
construct of fluency, both before the initial NAEP pub-
lication (Pinnell et al., 1995) and increasingly since
(e.g., Kuhn & Stahl, 2003; Zutell & Rasinski, 1991),
it becomes apparent that there is a recognition of the
importance of prosodic elements in most definitions
of fluency (e.g., Hudson et al., 2009; NICHD, 2000).
Whether that acknowledgment is as robust as we might
want is something we are attempting to address here.
However, we would argue that a definition that sepa-
rates rate and accuracy from prosody reinforces the po-
sition that correct words per minute can be treated as
an isolated measure of oral reading performance. As a
result, we consider it important to keep an integrated
definition that includes accuracy, speed, and prosody.
Fluency as Skilled Reading
A third definition of reading fluency equates it with
skilled reading. According to Samuels (2006), “the most
important characteristic of the fluent reader is the abil-
ity to decode and to comprehend the text at the same
time” (p. 9) and “other characteristics of fluency such
as accuracy of word recognition, speed of reading, and
the ability to read orally with expression” (p. 9) simply
serve as indicators that fluency has been achieved. This
definition initially holds great appeal; by including text
comprehension within the definition of fluent reading, it becomes possible to differentiate two groups of students: word callers, who simply read words, or “bark” at
print (Samuels, 2007), without attending to the mean-
ing, and fluent readers who construct meaning from the
text as they read. Although word callers are not ubiq-
uitous (Meisinger, Bradley, Schwanenflugel, Kuhn, &
Morris, 2009; Meisinger, Bradley, Schwanenflugel, &
Kuhn, in press), their numbers do increase across the
elementary grades. As such, it seems reasonable to pre-
suppose that instruction that focuses on speed and ac-
curacy of word identification with little or no regard to
understanding will further serve to inflate their num-
bers (Applegate, Applegate, & Modla, 2009; Pressley,
Hilden, & Shankland, 2006).
However, this broad definition of fluency gives us
pause. Skilled reading is a complicated act that requires
the coordination of input from multiple sources, in-
cluding syntactic knowledge, background knowledge,
vocabulary knowledge, orthographic knowledge, and
affective factors, among others (e.g., McKenna & Stahl,
2003; RAND Reading Study Group, 2002), that al-
lows the reader to construct meaning from text. Rather
than defining fluency as simultaneously decoding and
comprehending (Samuels, 2007), it can be argued that
fluent reading merely allows comprehension to occur
(e.g., Levy, 2001). Just as readers’ fluency can vary with various texts (e.g., Allington, 2009; Hiebert, 2006), that is, readers may be able to read independent-level texts with good fluency yet be disfluent when reading texts that are challenging in terms of vocabulary or content, so it is also possible for readers’ comprehension of difficult
texts to vary despite their reading of these texts with
adequate fluency.
For example, let’s consider what happens when you
read a complicated text, such as a theoretical paper. As
a reader with a strong background in the subject, you
are likely to read that text fluently, that is, accurately, at
a good rate, and with appropriate parsing and cadence.
But it is also likely that you will have only surface-lev-
el comprehension on the initial reading. However, by
rereading that text and grappling with its meaning,
you will deepen your understanding of the material
(Pressley, 2000). Similarly, even with relatively easy
texts, say The Lion, the Witch and the Wardrobe, you and
another reader with similar levels of fluency may devel-
op highly differing interpretations of the text depending
on your varying background knowledge. At the same
time, we do agree that “fluent” reading without any con-
comitant comprehension would be merely word calling.
So how do we rectify these potentially disparate under-
standings? Although it is reasonable to expect a basic
level of comprehension before considering an individu-
al’s reading fluent, if only to prevent the term from be-
ing equated with surface-level features (accuracy, speed,
and expression), it is critical not to confound the two
constructs given the complexities of both.
Fluency as a Bridge to Comprehension
The final definition we consider here views fluency as
a bridge between decoding and comprehension (Chard,
Pikulski, & McDonagh, 2006; Pikulski & Chard, 2005).
This position indicates that fluency likely has a recipro-
cal relationship with comprehension, both contributing
to and possibly resulting from readers’ understanding
of text (e.g., Klauda & Guthrie, 2008; Stecker, Roser,
& Martinez, 1998). It also accounts for the theoreti-
cal discussions surrounding automaticity and prosody,
indicating that both aspects of the construct facilitate,
and benefit from, comprehension. Further, this defini-
tion moves away from what the authors call a surface
conceptualization of fluency; such an understanding sees the construct primarily as an oral reading phenomenon and, as a result, tends to stress its more concrete elements (accuracy, rate, and prosody) through
both assessment and instruction (Chard et al., 2006;
Pikulski & Chard, 2005). Given that most reading is silent rather than oral, this insight is particularly important.
Indeed, children are thought to make the transition to
predominantly silent reading during late elementary
school (Prior & Welling, 2001), generally around fourth
grade.
Our Definition
Having reviewed multiple ways of conceptualizing
reading fluency, we propose the following definition to
synthesize the information presented thus far:
Fluency combines accuracy, automaticity, and oral reading
prosody, which, taken together, facilitate the reader’s con-
struction of meaning. It is demonstrated during oral read-
ing through ease of word recognition, appropriate pacing,
phrasing, and intonation. It is a factor in both oral and silent
reading that can limit or support comprehension.
Although this definition is clearly influenced by those
presented elsewhere (e.g., Harris & Hodges, 1995;
Pikulski & Chard, 2005; Reutzel, 1996), it attempts to
incorporate several critical points. First, it highlights
the relationship between fluency and comprehension.
Next, it emphasizes prosody along with accurate and
automatic word recognition without privileging any of
these components. Third, it begins to address the un-
derstanding that fluency plays a role in silent as well
as oral reading. Finally, it attempts to reconceptualize
two aspects of the construct that have the potential to
be problematic when taken in isolation from the rest of
the components: rate and expression. We also recognize
that there may be a reciprocal relationship between flu-
ency and comprehension; however, this issue requires
further research. As such, we have chosen not to in-
clude reciprocity in our definition.
In discussions of oral reading fluency that center on assessment and instruction (e.g., Mathson, Allington, & Solic, 2006; Samuels, 2007), there has been a tendency
to focus on decoding speed at the expense of prosody.
This results in students being encouraged to read as fast as possible rather than at a rate that replicates oral language; we hope that using the term appropriate pacing rather than rate will begin to address this misconception. The second aspect that can lead
to problems of interpretation is the use of the term ex-
pressive reading as an equivalent of prosody. Although
expression is an accurate term for many types of texts
(e.g., narratives, plays, poetry), it has been argued that it
is inappropriate for informational text. However, these
texts have their own prosodic indicators (e.g., Carlson
et al., 2009; den Ouden et al., 2009), so intonation may be a more precise term than expression for describing the suprasegmental features that occur as part of the reading of informa-
tional text. It may be that the use of these alternative
terms to describe fluent reading will help to counter
certain misunderstandings that have developed around
the construct.
Current Assessment of Fluency
Given the aforementioned review, it is important to
expand the discussion to the assessment—and prac-
tice—of fluency as it is currently being implemented
in many school districts across the United States (e.g.,
Riedel, 2007). Since the introduction of No Child Left
Behind and Reading First (government-mandated educational reforms), the instructional landscape in the
United States has undergone a major shift (e.g., Cervetti,
Jaynes, & Hiebert, 2009; Garcia & Bauer, 2009). There
has been an attempt to refocus literacy education on five
areas of literacy development reviewed by the National
Reading Panel (NICHD, 2000): phonemic awareness,
phonics, fluency, vocabulary, and comprehension. This
has been coupled with a new emphasis on regular as-
sessment and scientifically based reading research. Our goal here was not to critique Reading First per se (see Connor, Jakobsons, Crowe, & Meadows, 2009; Dubin, 2008; Gamse, Bloom, Kemple, & Jacob, 2008; Teale, Paciga, & Hoffman, 2007, among others, for discussions of the impact of Reading First) but instead to look at the
ways in which fluency assessment and instruction have
been affected by conceptualizations that have domi-
nated educational practice since the inception of this
legislation.
While Reading First focused on five areas of reading
as critical to skilled reading development, two of those
five, vocabulary and comprehension, are significantly
more complex and, therefore, more difficult to measure.
As a result, designing assessments that readily demon-
strate student growth in these areas has been somewhat
problematic (e.g., McKenna & Stahl, 2003; Paris, 2008).
One result of this difficulty has been a greater emphasis
on those areas that are easy to measure (Duffy, 2007;
Paris, 2008): phonological awareness, the alphabetic
principle, and oral reading fluency, or what have been
referred to as the “big ideas” (Good et al., 2001, p. 7) of
beginning reading. In fact, when discussing these con-
cepts, Good and his colleagues title the section of their
paper that focuses on these components “Measuring
what’s important: The foundational skills of beginning
reading” (p. 6).
Although these factors are among the critical un-
derstandings that students must establish if they are
to become successful readers, this list needs to be pur-
posefully expanded to provide a better sense of the
complexities of beginning reading; as such, factors that
emphasize oral language, motivation, extensive oppor-
tunities to read and interact with connected text, and a
range of other skills that contribute to vocabulary and
comprehension development should also be included as
part of a balanced reading curriculum (Bredekamp &
Pikulski, 2008; Pikulski, 2005; Schwanenflugel et al.,
in press; Shanahan, 2005).
So is the reason for the emphasis on three of the five
components simply the result of their ease of assess-
ment? In our opinion, to some degree, yes. According
to Paris (2005, 2008), reading skills can be classified
along a continuum of constrained and unconstrained
skills. Constrained skills develop over a relatively brief
period of time, incorporate a limited set of knowledge
and skills, can be taught directly, and can be readily as-
sessed quantitatively. Further, these skills are important
because they enable the development of unconstrained
skills to occur in relation to text. When placing skills
along this continuum, Paris (2008) argued that phono-
logical awareness, phonics, and oral reading fluency are
constrained, or, in the case of oral reading fluency, somewhat constrained, and that vocabulary and compre-
hension are unconstrained. And it is also the case that
the testing of constrained skills is both uncomplicated
and inexpensive, allowing students to show significant
gains over short periods of time. As a result, it is easy for
them to become the focus of attention.
However, ease of measurement is only one reason
that the “big ideas” (Good et al., 2001, p. 7) of begin-
ning reading have gained dominance in many schools’ reading curricula. A more fundamental reason is
that a number of researchers (e.g., Fletcher et al., 2007;
Kame’enui, Simmons, Good, & Harn, 2001) consider
these skills to be integral to later reading success. In
this line of thought, if learners encounter difficulties with these skills early on, they dramatically increase their likelihood of developing later reading difficulties. According to Good and his colleagues (2001),
“differences in developmental reading trajectories can
be explained, in part, by a predictable and consequen-
tial series of reading-related activities that begin with
difficulty in foundational skills” (p. 6). To circumvent
this problem, it is important to identify any weaknesses
that students are experiencing with these skills early
and provide intensive instruction in the corresponding
areas. And the best way to determine whether students
are making appropriate progress is through regular
assessments. CBM (Deno, 1985), along with its commercially available variants (e.g., DIBELS, Good & Kaminski, 2002; AIMSweb, Shinn & Shinn, 2002), has come to the fore as a means of accomplishing this goal.
Further, these measures have become highly influential
in informing early reading instruction in general and
fluency instruction in particular.
Curriculum-Based Measurement (CBM)
CBM was originally designed to evaluate students’ gen-
eral reading progress by measuring the number of cor-
rect—and incorrect—words read aloud in one minute
(e.g., Deno & Marston, 2006; Madelaine & Wheldall,
1999, 2004; Samuels, 2007). However, these measures have since been adapted for use as a measure of oral reading
fluency as well. The initial drive behind these assess-
ments was to provide teachers with a quick alternative
to norm-referenced standardized tests (Madelaine &
Wheldall, 1999). A number of reasons have been cited
for this decision, including standardized measures’ lack
of technical adequacy (e.g., issues surrounding content
validity), their insensitivity to small changes in learn-
ers’ development, their inappropriateness as a means
of tracking students’ progress or as a basis for instruc-
tional decision making, and a tendency toward the mis-
use of the data that norm-referenced standardized tests
provide.
CBMs, on the other hand, are meant as an alter-
native that incorporates standardized procedures, but
provides teachers with information that is “reliable
and valid, quick and easy to administer repeatedly, in-
expensive, unobtrusive, sensitive to small changes in
progress, and able to be used to make instructional de-
cisions” (Madelaine & Wheldall, 1999, p. 74). Studies
have indicated that the use of these measures as a means
of tracking learners’ reading development can lead to
improvements in reading achievement (Deno, 2003;
Fuchs, Deno, & Mirkin, 1984; Stecker & Fuchs, 2000;
Wayman, Wallace, Wiley, Tichá, & Espin, 2007), and
there is evidence that they correlate highly with stan-
dardized tests of reading comprehension as well (e.g.,
Deno & Marston, 2006; Fuchs, Fuchs, & Maxwell,
1988).
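For readers who want the arithmetic behind these measures spelled out, the sketch below shows one way a words-correct-per-minute score of the kind CBMs report might be computed. This is a minimal illustration of the calculation only; the function name and sample figures are our own and do not come from any of the instruments or studies cited here, whose administration and scoring rules are set out in their manuals.

```python
# Illustrative sketch of the words-correct-per-minute (WCPM) arithmetic that
# underlies CBM-style oral reading measures. The function and sample values
# are hypothetical, not taken from any published protocol.

def words_correct_per_minute(words_attempted: int, errors: int, seconds: float = 60.0) -> float:
    """Return the prorated number of correct words read per minute."""
    correct = max(words_attempted - errors, 0)
    return correct * (60.0 / seconds)

if __name__ == "__main__":
    # A student attempts 112 words in a one-minute timing and makes 7 errors.
    print(words_correct_per_minute(112, 7))            # 105.0 WCPM
    # A passage finished early, in 48 seconds, is prorated to a per-minute rate.
    print(round(words_correct_per_minute(96, 4, 48)))  # 115 WCPM
```

As the sketch makes plain, nothing in the score itself reflects phrasing, intonation, or comprehension, which is precisely the limitation discussed in the remainder of this section.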
Initially, CBMs were developed as a means of evalu-
ating learners’ progress on passages drawn directly from
their curriculum, a procedure that allowed teachers di-
rect insight into their students’ ability with the mate-
rial they were using in the classroom (Deno & Marston,
2006). While researchers note clear advantages to this
approach, they discuss disadvantages as well, including
the amount of time required to identify reading pas-
sages and the variability in difficulty across, and even
within, texts. To rectify these issues, passages identified
at a given reading level, but selected from material out-
side the curriculum, have been used for these measures
as well (Powell-Smith & Bradley-Klug, 2001). This shift
away from specific, classroom-based literacy curricula
also laid the groundwork for commercial versions of
oral reading fluency assessments, the best known of
which is DIBELS (Good & Kaminski, 2002).
Dynamic Indicators of Basic Early
Literacy Skills (DIBELS)
As with CBMs, the DIBELS oral reading fluency mea-
sures the number of correct words students can read in
one minute. According to the DIBELS website (dibels.
uoregon.edu/samples/index.php), the oral reading flu-
ency measure, along with other measures that are part
of the DIBELS data system (Good & Kaminski, 2002),
is in use at over 15,000 schools, making it likely the
most frequently used single assessment of connected-
text reading fluency in the United States today. The
DIBELS tests provide a developmental timeline and
corresponding benchmarks for skills acquisition that
allows teachers to determine a developmental trajectory
for each student. The measures include initial-sound
fluency, letter-naming fluency, phoneme segmentation
fluency, nonsense word fluency, oral reading fluency, retell fluency, and word use fluency. Taken together,
these assessments are meant to be easy and inexpen-
sive to administer, effective at identifying students who
are likely to experience later reading difficulty based
on their progress on a series of constrained skills, and
designed to provide data that can serve as the basis for
instructional decision making.
At this point, it is useful to note that the authors
of the DIBELS (Kame’enui et al., 2001) define fluency
differently from the way we have discussed it thus far.
Rather than employ what they term a traditional defini-
tion of fluency, that of proficient word recognition in the
reading of connected text, they modify the definition to
include “fluency in the component skills and lower-level
processes” (p. 308). This understanding translates into
automaticity in phonemic awareness, letter recognition,
and decoding and accounts for the term being used in
connection with all the DIBELS measures, not just oral
reading fluency of connected text. However, this un-
derstanding of fluency as automaticity is also integral to
the DIBELS oral reading fluency, which is described as
a measure of the accuracy and fluency of connected-text
reading. This results in the DIBELS actually narrowing,
rather than expanding, the understanding of fluency
so that the term becomes a synonym for automaticity,
even as it is applied to a broader range of concepts than
connected-text reading. As Hudson and her colleagues
(2009) succinctly argued, “the concept of automaticity
actually implies more about a response than does the
concept of fluency”; accordingly, they retain the term
automaticity, rather than fluency, to describe a response
that “requires few processing resources, is obligatory,
and outside of conscious control” (p. 9).
The Use of CBMs and DIBELS in Practice
A key premise of both CBM and the DIBELS oral reading
fluency is that students’ reading rate and accuracy are
effective proxies for general reading ability (e.g., Deno
& Marston, 2006; Fuchs et al., 2001; Samuels, 2007).
As such, they are seen as a means of tracking students’
reading development and as the basis for determining
whether students are receiving effective instruction.
In addition, the DIBELS oral reading fluency, along with other versions of CBMs, is seen as an indicator of connected-text fluency (Good et al., 2001). These scales
have established benchmarks designed to determine
learners’ risk level in relation to reading development
(Good & Kaminski, 2002; Shapiro, 2004). And the use
of these measures is seen as a valuable means of helping
students avoid later reading difficulties and the negative
cycle that develops as a result of unsuccessful early ex-
periences with print. As Good and his colleagues (2001)
stated, “few would argue with the concept of preven-
tion and the need for formative assessment to inform
instruction” (p. 9).
Indeed, there is little arguing with the desire to pre-
vent reading difficulties (Snow, Burns, & Griffin, 1998)
or with the importance of using appropriate assessments
to inform instruction (Duffy, 2007). Nor is there any
doubt that ensuring students have extensive experienc-
es with text and appropriate forms of instruction, some
of which focus on the development of constrained skills,
will prevent many students from experiencing later
reading difficulties (e.g., Cunningham & Stanovich,
1998; Shanahan, 2005). However, whether the assess-
ments discussed earlier are the best means for helping
learners meet these goals is significantly more problem-
atic. The answer depends, to a large extent, on how the
assessments measure and define fluency and on the forms of instruction that are used as a result. So, for example,
if the emphasis is on automaticity (e.g., Kame’enui et
al., 2001) or “rapid decoding” (Shinn et al., 1992, cited
in Madelaine & Wheldall, 1999, p. 76), either as part of
fluency’s working definition (e.g., oral reading fluency
or “the oral translation of text with speed and accuracy”; Fuchs et al., 2001, p. 239) or as part of its measurement
(e.g., Fletcher et al., 2007; Torgesen & Hudson, 2006),
there is, almost inevitably, a corresponding privileging
of speeded decoding in its instruction (Applegate et al.,
2009; Pressley et al., 2006). Further, the importance
that administrators, teachers, and other constituents
currently assign to these measures, coupled with their
repeated use over the course of the elementary school
years, has intensified these issues (Paris, 2005, 2008).
This excessive focus on rate can lead to fast, staccato
reading rather than reading with appropriate pacing
and may actually interfere with, rather than promote,
comprehension (Samuels, 2007).
Because excessive rate impedes comprehension, ei-
ther by shifting the focus away from understanding or
by actually interfering with the construction of mean-
ing, most researchers (e.g., Fletcher et al., 2007; Hudson
et al., 2009; Rasinski et al., in press) consider appropri-
ate or conversational pacing, along with other prosodic
features, to be central to their definition of fluency. So why
does this understanding not translate to assessment?
It appears that measuring prosody is considered to be
somewhat problematic (e.g., Fuchs et al., 2001). Three
primary concerns underlie this perception (Torgesen
& Hudson, 2006). The first involves prosody’s am-
biguous relationship with comprehension; estimates of
prosody’s contribution to comprehension beyond that
accounted for by rate measures have ranged from small
to moderate (Schwanenflugel et al., 2004; Miller &
Schwanenflugel, 2006, 2008). Second, those measures
of prosody that are readily implemented in classrooms,
such as the NAEP fluency scale (Pinnell et al., 1995)
or the multidimensional fluency scale (Rasinski et al.,
2009), are far less precise than are measures of correct
words per minute. This means the results from these
measures are less sensitive to small, ongoing changes
in fluency (e.g., Klauda & Guthrie, 2008). Finally, mea-
sures of prosody have the highest levels of reliability
when they include a measure of reading rate as well,
making their implementation somewhat redundant
(Torgesen & Hudson, 2006).
Despite their shortcomings, compelling arguments
can be made for the use of fluency scales. According to
the NAEP analysis of oral reading (Daane et al., 2005;
Pinnell et al., 1995), all three elements of fluency—accuracy, rate, and prosody—are related not only to one
another, but to overall reading comprehension as well.
That is, students with higher NAEP ratings (levels 3 or 4
on the NAEP oral reading fluency scale) not only tended
to read texts with a higher degree of accuracy and at a
faster rate, but they also had a higher score in terms of
their overall reading proficiency. Their peers with lower
NAEP ratings (levels 1 or 2 on the NAEP oral reading
fluency scale), on the other hand, read fewer words per
minute, had a higher percentage of miscues, and had
a lower overall reading proficiency score. In addition,
the appropriate use of prosodic elements appears to re-
flect a reader’s comprehension of a text (Mathson et al.,
2006). And, although fluency scales do involve qualita-
tive judgments, several researchers (e.g., Kuhn, 2005;
McKenna & Stahl, 2003) found high levels of inter-rater
reliability after brief training on these measures; in fact,
they have established levels as high as 100% using the
NAEP scale, perhaps because the NAEP has the broad-
est descriptive categories, increasing the likelihood of
agreement. Although current fluency scales are not as
precise as we might wish, they do provide additional
insight into students’ reading development.
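To make the idea of inter-rater agreement on such a scale concrete, the following sketch computes simple percent agreement between two hypothetical raters assigning NAEP-style levels of 1 through 4. The ratings are invented for illustration only; they are not data from the studies cited above, and published reliability work often uses more refined indices than raw agreement.

```python
# Minimal sketch of percent agreement between two raters on a four-level
# fluency rubric (e.g., NAEP-style levels 1-4). The ratings are invented.

from typing import Sequence

def percent_agreement(rater_a: Sequence[int], rater_b: Sequence[int]) -> float:
    """Proportion of students on whom the two raters assign the same level."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("Both raters must score the same, non-empty set of students.")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

if __name__ == "__main__":
    rater_a = [3, 2, 4, 1, 3, 2, 4, 3]   # hypothetical levels for eight readings
    rater_b = [3, 2, 4, 1, 3, 3, 4, 3]
    print(f"Exact agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 88%
```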
Ultimately, it is essential to expand the way fluency
is measured so that it encompasses more than rate and
accuracy. According to Deno and Marston (2006), the definition of fluency should not be limited to correct
words per minute, because this understanding leaves
out important features of the construct, such as pros-
ody. We would argue that such an emphasis leaves
children in danger of focusing on speed at the expense
of comprehension (see also, Samuels, 2007; Wixson &
Lipson, 2009). In fact, Samuels (2007) correctly argued
that curriculum-based measures were conceived of as a
means of monitoring students’ reading progress, broad-
ly considered, and that their use as a fluency measure
leads to an overemphasis on “speed at the expense of
understanding” (p. 565). Wixson and Lipson (2009)
concurred, arguing that it is the use of these assess-
ments as both screening and progress monitoring mea-
sures that leads to such an instructional focus.
Further, Deno and Marston (2006) found the notion
of benchmarks for reading rates to be highly problematic
because it relies on an oversimplification of the relation-
ship between word recognition and comprehension and
implies there is a point beyond which text comprehen-
sion is guaranteed. In fact, Hudson and her colleagues
(2009) noted that there are times when a slower reading
rate is necessary to ensure the construction of meaning.
While it is true that exceedingly slow word recognition
hinders comprehension and that skilled readers’ word
recognition is automatic, it is also the case that skilled
readers vary their reading pace depending upon the dif-
ficulty of the text and the complexity of the ideas they
are encountering. Given this, if learners are to become
skilled readers, it is important that they learn to be flex-
ible, rather than simply fast, oral readers. Including measures of prosody as part of the evaluation process decreases the likelihood that learners will develop the mistaken notion that fluent reading and fast reading are one and the same. As such, it seems the positives of in-
cluding measures of prosody along with a measure of
rate and accuracy outweigh the negatives.
Implications for Assessment and
Instruction
Implications for Assessment
Given what we know about fluency assessment and
what we hope to see in fluency instruction, what do
we propose? First, it is essential that fluency be seen as
more than simply correct words per minute. Without
the addition of some measure of prosody, there contin-
ues to be too high a risk that oral reading fluency will
be seen only as a measure of quickly decoding a pas-
sage (e.g., Samuels, 2007) and that instruction will con-
tinue to follow suit (Wixson & Lipson, 2009). For now,
prosodic measures such as the NAEP oral reading flu-
ency scale (Pinnell et al., 1995) or the multidimensional
fluency scoring guide (Rasinski et al., 2009; Zutell & Rasinski, 1991) can serve as a rough gauge of how well
students are integrating the suprasegmental features of
language into their oral reading. We further think that
improvements will be made in such rating measures as
we gain a more specific understanding of the linkages
between identifiable spectrographic elements of proso-
dy and comprehension. We believe that research should
go in the direction of creating prosody-rating schemes
that combine the ease and general utility of ratings with the refinement of spectrographic measures.
Second, it remains critical that students not focus on rate at the expense of meaning; to prevent
overemphasizing rapid decoding, a measure of compre-
hension should be used in conjunction with any evalu-
ation of reading fluency (Samuels, 2006). This can be undertaken in several ways, from brief discussions of the passage being read, to answering a range of questions, from factual to inferential, that are related to the material, to student retellings of the text (McKenna & Stahl, 2003). However, although there is a range of
possibilities available for evaluating comprehension,
simply asking students to reiterate as many words as
they can remember after the oral reading of a passage
fails to reflect any evidence of the processes they may be
using to construct meaning (e.g., Pressley et al., 2006;
Samuels, 2007). As such, it is important to encourage
the use of a comprehension measure that allows learn-
ers to demonstrate understanding rather than simply
recite words.
Third, although it is important to evaluate stu-
dents’ oral reading (e.g., Daane et al., 2005; McKenna
& Stahl, 2003), this is only one piece of information in
a reader’s profile. Despite evidence that there are often
high correlations between fluency measures and stan-
dardized comprehension measures (Daane et al., 2005;
Deno & Marston, 2006; Fuchs et al., 2001; Madelaine
&Wheldall, 1999, 2004), the correlations are not per-
fect. In fact, looking at research conducted across a
range of populations and a variety of standardized com-
prehension assessments, we noted correlations ranging
from a low of 0.61 (Sibley, Biwer, & Hesch, 2001) to a
high of 0.91 (Fuchs et al., 1988) for curriculum-based
measures and from 0.45 (Pressley et al., 2006) to 0.80
(Riedel, 2007) for the DIBELS. This can cause a number
of students to be misidentified, either as having reading
difficulties when they do not or as making sufficient
progress in their reading development when, in fact,
they are struggling (e.g., Riedel, 2007).
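The following sketch illustrates, with simulated scores rather than real student data, why a screen that correlates imperfectly with a comprehension measure will classify some students differently than the comprehension measure would at a common benchmark. The correlation, cut point, and sample size are arbitrary assumptions chosen only for illustration; they are not estimates from the studies cited above.

```python
# Illustrative simulation (not real student data) of misclassification at a
# benchmark cut score when a fluency screen and a comprehension measure are
# imperfectly correlated. All parameter values are arbitrary assumptions.

import random

def simulate(n: int = 10_000, r: float = 0.7, cut: float = -0.8, seed: int = 1) -> None:
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(n):
        # Draw a fluency score and a comprehension score with correlation r.
        fluency = rng.gauss(0, 1)
        comprehension = r * fluency + (1 - r**2) ** 0.5 * rng.gauss(0, 1)
        flagged_by_screen = fluency < cut          # "at risk" on the fluency screen
        actually_struggling = comprehension < cut  # "at risk" on comprehension
        if flagged_by_screen != actually_struggling:
            mismatches += 1
    print(f"Students classified differently by the two measures: {mismatches / n:.1%}")

if __name__ == "__main__":
    simulate()
```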
To decrease the likelihood of misidentifying student
achievement levels, students’ fluency results should be
considered as part of a broader range of assessments
(e.g., McKenna & Stahl, 2003) and classroom-based
data (Afflerbach, 2004; Glasswell & Teale, 2007). It also makes sense to look at these measures more qualitatively; that is, what types of miscues are readers making and in what context (e.g., McKenna & Picard, 2006), how does their rate vary with the type of
text and its instructional level (Kuhn, 2007), and how
appropriate is their prosody for the text they are reading
(Rasinski, 2004)?
Implications for Instruction
In line with our suggestions that assessment of read-
ing fluency should be multifaceted, we present alterna-
tives to lessons that stress automaticity as the end goal
of fluent reading (e.g., Mathson et al., 2006; Wixson &
Lipson, 2009). This article began by highlighting the
shift that has taken place around fluency and its role
within the literacy curriculum over the past decade.
During this period, fluency, seen primarily in terms
of rate measures, has become a driving force in read-
ing instruction. Although fluent reading is critical to
later reading success (Kuhn & Stahl, 2003; Rasinski
et al., in press), it is only one component of literacy
learning. Effective instructional approaches for fluency development, such as fluency-oriented reading in-
struction (FORI; Stahl, Heubach, & Holcomb, 2005),
wide fluency-oriented reading instruction (Wide FORI
or Wide Reading; Kuhn et al., 2006), and the fluency
development lesson (FDL; Rasinski, Padak, Linek, &
Sturtevant, 1994) view the comprehension of texts,
rather than an increase in reading rate, as the primary
goal. These approaches all recognize that the develop-
ment of automaticity, prosody, and reading comprehen-
sion occur through the scaffolded reading of a range of
texts. It is the various forms of supported reading (for
example, echo, choral, partner, and repeated reading)
that allow learners to engage with and learn from the
material they are reading.
The types of supported reading that make up effec-
tive fluency practices, such as FORI, Wide FORI, and
FDL, also integrate, and further develop, the compo-
nent skills that lead to automatic word recognition (e.g.,
Kuhn et al., 2006; Samuels, 2006). Although instruc-
tion in constrained skills, such as phonemic aware-
ness and word recognition, provides a critical base on
which to develop automaticity (e.g., Bear & Templeton,
1998; Levy, 2001; Paris, 2008), the practice with con-
nected text provided by these approaches allows learn-
ers to consolidate these components. Further, there is
evidence that an overemphasis on word instruction in
isolation (Allington, 1983, 2009; Chomsky, 1976) can
actually work against students’ development as skilled
readers. It is equally critical to remember that the rela-
tion between children’s basic reading skills (e.g., word
reading and reading fluency) and reading compre-
hension diminishes as children age (Meisinger et al.,
2009; Schwanenflugel et al., 2006; Vellutino, Fletcher,
Snowling, & Scanlon, 2004). As young readers develop
automatic word reading skills, attentional resources are
freed for comprehension processes (LaBerge & Samuels,
1974; Perfetti, 1985). However, at some point the issue
shifts from managing freed resources to using content
knowledge, accessing the meanings of sophisticated
vocabulary, drawing appropriate inferences, and moni-
toring comprehension progress (Chall, 1996; Sweet &
Snow, 2003).
Finally, it is essential that students read substan-
tial amounts of connected text if they are going to be-
come fluent readers (Logan, 1997; Stanovich, 1986).
Although an effective literacy curriculum will include
a wide range of materials, including poetry and other
relatively brief texts, if these are the only texts that stu-
dents are reading, they will not provide learners with
sufficient practice to develop their fluency, regardless of how often they are reread (Schwanenflugel, Kuhn et al., 2008). Nor is it sufficient to read a single longer text,
say a narrative or expository trade book or a selection
from a basal reader or literature anthology designed for
second or third graders, if that text is read only once
over the course of a school week (Hiebert, 2004). And
although independent reading is central to developing
reading fluency, students who are averse to reading are
unlikely to benefit from DEAR or SSSR (Hasbrouck,
2006) unless they are provided with a range of options
such as scaffolded silent reading (Reutzel, Fawson, &
Smith, 2008), partner reading (Meisinger & Bradley,
2008), reading-while-listening (Chomsky, 1976; Pluck,
2006) or other forms of assisted reading along with tra-
ditional independent silent reading (for a more compre-
hensive review of fluency instruction, see Rasinski et
al., in press). What is critical here is that learners are
provided with extensive opportunities to engage with
connected texts, whether they are reading repeatedly or
widely, and that sufficient support is provided to allow
students to succeed given the level of challenge that is
presented by various texts.
Conclusions
The title of this article implies that we need to align
our assessment practices with our theories of reading
fluency. At the basic level, we know that fluency incor-
porates automaticity and prosody (e.g., Erekson, 2003;
Logan, 1997; Samuels, 2004). We also know that fluent reading facilitates comprehension and that comprehension may mediate aspects of fluency such as pacing
(e.g., Hudson et al., 2009; Rasinski et al., in press). And
although there is much research and theory to describe
the multiple ways in which automaticity contributes to
comprehension, our concepts regarding the exact rela-
tionship between prosody and comprehension are still
under development (e.g., Schwanenflugel et al., 2004).
We have suggested throughout this article that
the way fluency is defined, and which elements of the
construct are emphasized in these definitions, influ-
ences how it is both assessed and taught (e.g., Mathson
et al., 2006). We have further argued that, by look-
ing at students’ fluency as part of their overall reading
development, instead of as a proxy for it, educators are
likely to develop the kind of nuanced understanding of
learners’ reading ability that will make effective literacy
instruction possible. It is critical that we establish as-
sessments, and instruction, that assist learners in be-
coming truly fluent readers rather than just fast ones.
References
Adams, M.J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.
Afflerbach, P. (2004). High stakes testing and reading assessment: National Reading Conference brief. Retrieved November 30, 2009, from www.nrconline.org/publications/HighStakesTestingandReadingAssessment.pdf
Allbritton, D.W., McKoon, G., & Ratcliff, R. (1996). Reliability of prosodic codes for resolving syntactic ambiguity. Journal of Experimental Psychology. Learning, Memory, and Cognition, 22(3), 714–735. Medline doi:10.1037/0278-7393.22.3.714
Allington, R.L. (1983). Fluency: The neglected reading goal. The
Reading Teacher, 36(6), 556–561.
Allington, R.L. (2009, February). New challenges for literacy research-
ers. Keynote address given at the annual Internat ional Reading
Association Reading Research Conference, Phoenix, AZ.
Applegate, M.D., Applegate, A.J., & Modla, V.B. (2009). “She’s my
best reader; she just can’t comprehend”: Studying the relationship
between fluency and comprehension. The Reading Teacher, 62(6),
512–521. doi:10.1598/RT.62.6.5
Ashby, J. (2006). Prosody in skilled silent reading: Evidence from
eye movements. Journal of Research in Reading, 29(3), 318–333.
doi:10.1111/j.1467-9817.2006.00311.x
Ashby, J., & Clifton, C. (2005). The prosodic property of lexical stress affects eye movements during silent reading. Cognition, 96(3), B89–B100. doi:10.1016/j.cognition.2004.12.006
Banse, R., & Scherer, K.R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3), 614–636. Medline doi:10.1037/0022-3514.70.3.614
Beach, C.M. (1991). The interpretation of prosodic patterns at points of syntactic structure ambiguity: Evidence for cue trading relations. Journal of Memory and Language, 30(6), 644–663. doi:10.1016/0749-596X(91)90030-N
Bear, D.R., & Templeton, S. (1998). Explorations in developmental
spelling: Foundations for learning and teaching phonics, spell-
ing, and vocabulary. The Reading Teacher, 52(3), 222–242.
Benjamin, R., & Schwanenflugel, P.J. (2009). Text complexity and oral reading prosody in young readers. Unpublished manuscript, University of Georgia, Athens.
Benjamin, R., Schwanenflugel, P.J., & Kuhn, M.R. (2009, May). The
predictive value of prosody: Differences between simple and difficult
texts in the reading of 2nd graders. Presentation to the College of
Education Research Conference, University of Georgia, Athens.
Blaauw, E. (1994). The contribution of prosodic boundary markers to the perceptual difference between read and spontaneous speech. Speech Communication, 14(4), 359–375. doi:10.1016/0167-6393(94)90028-0
Bredekamp, S., & Pikulski, J. (2008). Preventing reading difficulties in young children: Cognitive factors. Keynote address presented at the International Reading Association Preconference Institute #8, Atlanta, GA.
Carlson, K., Dickey, M.W., Frazier, L., & Clifton, C., Jr. (2009). Information structure expectations in sentence comprehension. Quarterly Journal of Experimental Psychology, 62(1), 114–139. doi:10.1080/17470210701880171
Cervetti, G.N., Jaynes, C.A., & Hiebert, E.H. (2009). Increasing opportunities to acquire knowledge through reading. In E.H. Hiebert (Ed.), Reading more, reading better (pp. 79–100). New York: Guilford.
Chafe, W. (1988). Punctuation and the prosody of written language. Written Communication, 5(4), 395–426. doi:10.1177/0741088388005004001
Chall, J.S. (1996). Stages of reading development (2nd ed.). Fort Worth,
TX: Harcourt-Brace.
Chard, D.J., Pikulski, J.J., & McDonagh, S.H. (2006). Fluency: The link between decoding and comprehension for struggling readers. In T.V. Rasinski, C. Blachowicz, & K. Lems (Eds.), Fluency instruction: Research-based best practices (pp. 39–61). New York: Guilford.
Chen, S.-H.E. (1998). Surface cues and the development of given/new interpretation. Applied Psycholinguistics, 19(4), 553–582. doi:10.1017/S0142716400010365
Chomsky, C. (1976). After decoding: What? Language Arts, 53(3),
288–296, 314.
Clay, M.M., & Imlach, R.H. (1971). Juncture, pitch, and stress as reading behavior variables. Journal of Verbal Learning and Verbal Behavior, 10(2), 133–139. doi:10.1016/S0022-5371(71)80004-X
Connor, C.M., Jakobsons, L.J., Crowe, E.C., & Meadows, J.G. (2009). Instruction, student engagement, and reading skill growth in Reading First classrooms. The Elementary School Journal, 109(3), 221–250. doi:10.1086/592305
Cooper, W.E., & Paccia-Cooper, J. (1980). Syntax and speech. Cambridge, MA: Harvard University Press.
Couper-Kuhlen, E., & Selting, M. (1996). Towards an interactional perspective on prosody and a prosodic perspective on interaction. In E. Couper-Kuhlen & M. Selting (Eds.), Prosody in conversation: Interactional studies (pp. 11–56). Cambridge, UK: Cambridge University Press.
Cowie, R., Douglas-Cowie, E., & Wichmann, A. (2002). Prosodic characteristics of skilled reading: Fluency and expressiveness in 8–10-year-old readers. Language and Speech, 45(1), 47–82. Medline doi:10.1177/00238309020450010301
Cunningham, A.E., & Stanovich, K.E. (1998). What reading does for the mind. American Educator, 22(1–2), 8–15.
Cunningham, T.F., Healy, A.F., Kanengiser, N., Chizzick, L., & Willitts, R.L. (1988). Investigating the boundaries of reading units across ages and reading levels. Journal of Experimental Child Psychology, 45(2), 175–208. Medline doi:10.1016/0022-0965(88)90029-X
Daane, M.C., Campbell, J.R., Grigg, W.S., Goodman, M.J., & Oranje, A. (2005). Fourth-Grade Students Reading Aloud: NAEP 2002 Special Study of Oral Reading. The Nation’s Report Card (NCES 2006-469). Washington, DC: U.S. Department of Education, Institute of Education Sciences.
de Bree, E., Wijnen, F., & Zonneveld, W. (2006). Word stress produc-
tion in 3-year-old children at risk for dyslexia. Journal of Research
in Reading, 29(3), 304–317. doi:10.1111/j.1467-9817.2006.00310.x
den Ouden, H., Noordman, L., & Terken, J. (2009). Prosodic real-
izations of global and local structure and rhetorical relations in
read aloud news reports. Speech Communication, 51(2), 116–129.
doi:10.1016/j.specom.2008.06.003
Deno, S.L. (1985). Curriculum-based measurement: The emerging
alternative. Exceptional Children, 52(3), 219–232. Medline
Deno, S.L. (2003). Developments in curriculum-based measurement. Remedial and Special Education, 37(3), 184–192.
Deno, S.L., & Marston, D. (2006). Curriculum-based measurement of oral reading: An indicator of growth in fluency. In S.J. Samuels & A.E. Farstrup (Eds.), What research has to say about fluency instruction (pp. 179–203). Newark, DE: International Reading Association.
Dowhower, S.L. (1991). Speaking of prosody: Fluency’s unattended bedfellow. Theory Into Practice, 30(3), 165–175. doi:10.1080/00405849109543497
Dubin, J. (2008). Reading Richmond: How scientifically based read-
ing instruction is dramatically increasing achievement. American
Educator, 32(3), 28–34, 36.
Duffy, G.G. (2007). Thriving in a high-stakes testing environment. Journal of Curriculum and Instruction, 1(1), 7–13. doi:10.3776/joci.2007.v1n1p7-13
Ehri, L.C. (1995). Phases of development in learning to read words by sight. Journal of Research in Reading, 18(2), 116–125. doi:10.1111/j.1467-9817.1995.tb00077.x
Eisler, F.G. (1968). Psycholinguistics: Experiments in spontaneous speech. New York: Academic.
Erekson, J. (2003, May). Prosody: The problem of expression in fluency. Paper presented at the annual meeting of the International Reading Association, Orlando, FL.
Esser, J., & Polomski, A. (1988). Comparing reading and speaking into-
nation. Amsterdam: Rodopi.
Ferreira, F. (1991). Effects of length and syntactic complexity on initiation times for prepared utterances. Journal of Memory and Language, 30(2), 210–233. doi:10.1016/0749-596X(91)90004-4
Fletcher, J.M., Lyon, G.R., Fuchs, L.S., & Barnes, M.A. (2007).
Learning disabilities: From identification to intervention. New York:
Guilford.
Fodor, J.D. (2002, April). Psycholinguistics cannot escape prosody. Speech Prosody 2002 International Conference, Aix-en-Provence, France.
Frazier, L., Carlson, K., & Clifton, C. (2006). Prosodic phrasing is
central to language comprehension. Trends in Cognitive Sciences,
10(6), 244–249. Medline doi:10.1016/j.tics.2006.04.002
Fuchs, L.S., Deno, S.L., & Mirkin, P. (1984). Effects of frequent cur-
riculum-based measurement and evaluation on pedagogy, stu-
dent achievement, and student awareness of learning. American
Educational Research Journal, 21(2), 449–460.
Fuchs, L.S., Fuchs, D., Hosp, M.K., & Jenkins, J.R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239–256. doi:10.1207/S1532799XSSR0503_3
Fuchs, L.S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9(2), 20–28. doi:10.1177/074193258800900206
Fujiki, M., Spackman, M.P., Brinton, B., & Illig, T. (2008). Ability of children with language impairment to understand emotion conveyed by prosody in a narrative passage. International Journal of Language & Communication Disorders, 43(3), 330–345. doi:10.1080/13682820701507377
Gamse, B.C., Bloom, H.S., Kemple, J.J., & Jacob, R.T. (2008). Reading First impact study: Interim report (NCEE 2008-4016). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Garcia, G.E., & Bauer, E.B. (2009). Assessing student progress in
the time of No Child Left Behind. In L.M. Morrow, R. Rueda, &
D. Lapp (Eds.), Handbook of research on literacy and diversity (pp.
233–253). New York: Guilford.
Glasswell, K., & Teale, W.H. (2007). Authentic assessment of au-
thentic student work in urban classrooms. In J.R. Paratore & R.L.
McCormack (Eds.), Classroom literacy assessment: Making sense of
what students know and do (pp. 262–279). New York: Guilford.
Goldman, S.R., Meyerson, P.M., & Coté, N. (2006). Poetry as a mnemonic prompt in children’s stories. Reading Psychology, 27(4), 345–376. doi:10.1080/02702710600846894
Good, R.H., III, & Kaminski, R.A. (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement. Retrieved November 30, 2009, from dibels.uoregon.edu
Good, R.H., III, Kaminski, R.A., Simmons, D., & Kame’enui, E.J. (2001). Using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model: Steps to reading outcomes. Oregon School Study Council, 44(1), 6–24.
Goswami, U., Thomson, J., Richardson, U., Stainthorp, R., Hughes, D., Rosen, S., et al. (2002). Amplitude envelope onsets and developmental dyslexia: A new hypothesis. Proceedings of the National Academy of Sciences of the United States of America, 99(16), 10911–10916. Medline doi:10.1073/pnas.122368599
Guion, S.G., Harada, T., & Clark, J.J. (2004). Early and late Spanish–English bilinguals’ acquisition of English word stress patterns. Bilingualism: Language and Cognition, 7(3), 207–226. doi:10.1017/S1366728904001592
Gutiérrez-Palma, N., & Palma-Reyes, A. (2008). On the use of lexi-
cal stress in reading Spanish. Reading and Writing, 21(6), 645–
660. doi:10.1007/s11145-007-9082-x
Harris, T.L., & Hodges, R.E. (Eds.). (1995). The literacy dictionary:
The vocabulary of reading and writing. Newark, DE: International
Reading Association.
Hasbrouck, J. (2006, Summer). Drop everything and read—but how? American Educator. Retrieved November 30, 2009, from www.aft.org/pubs-reports/american_educator/issues/summer06/fluency.htm
Hasbrouck, J., & Tindal, G.A. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59(7), 636–644. doi:10.1598/RT.59.7.3
Hiebert, E.H. (2004, April). Teaching children to become fluent read-
ers—Year 2. Presented at the meeting of the American Educational
Research Association, San Diego, CA.
Hiebert, E.H. (2006). Becoming fluent: Repeated reading with scaf-
folded texts. In S.J. Samuels & A.E. Farstrup (Eds.), What research
has to say about fluency instruction (pp. 204 –226). Newark, DE:
International Reading Association.
Himmelmann, N.P., & Ladd, D.R. (2008). Prosodic description: An introduction for field workers. Language Documentation & Conservation, 2(2), 244–274.
Hirotani, M., Frazier, L., & Rayner, K. (2006). Punctuation and intonation effects on clause and sentence wrap-up: Evidence from eye movements. Journal of Memory and Language, 54(3), 425–443. doi:10.1016/j.jml.2005.12.001
Hirschberg, J. (2002). Communication and prosody: Functional aspects of prosody. Speech Communication, 36(1–2), 31–43. doi:10.1016/S0167-6393(01)00024-3
Howell, P., & Kadi-Hanifi, K. (1991). Comparison of prosodic properties between read and spontaneous speech material. Speech Communication, 10(2), 163–169. doi:10.1016/0167-6393(91)90039-V
Hudson, R.F., Lane, H.B., & Pullen, P.C. (2005). Reading fluency assessment and instruction: What, why, and how? The Reading Teacher, 58(8), 702–714. doi:10.1598/RT.58.8.1
Hudson, R.F., Pullen, P.C., Lane, H.B., & Torgesen, J.K. (2009). The complex nature of reading fluency: A multidimensional view. Reading & Writing Quarterly, 25(1), 4–32. doi:10.1080/10573560802491208
Jarmulowicz, L., Taran, V.L., & Hay, S.E. (2007). Third graders’ metalinguistic skills, reading skills, and stress production in derived English words. Journal of Speech, Language, and Hearing Research, 50(6), 1593–1605. doi:10.1044/1092-4388(2007/107)
Juslin, P.N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770–814. Medline doi:10.1037/0033-2909.129.5.770
Kame’enui, E.J., Simmons, D.C., Good, R.H., III, & Harn, B.A. (2001). The use of fluency-based measures in early identification and evaluation of intervention efficacy in schools. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 307–331). Timonium, MD: York.
Kelly, M.H., & Bock, J.K. (1988). Stress in time. Journal of Experimental Psychology. Human Perception and Performance, 14(3), 389–403. Medline doi:10.1037/0096-1523.14.3.389
Kerkhofs, R., Vonk, W., Schriefers, H., & Chwilla, D.J. (2008). Sentence processing in the visual and auditory modality: Do comma and prosodic break have parallel functions? Brain Research, 1224, 102–118. Medline doi:10.1016/j.brainres.2008.05.034
Klauda, S.L., & Guthrie, J.T. (2008). Relationships of three components of reading fluency to reading comprehension. Journal of Educational Psychology, 100(2), 310–321. doi:10.1037/0022-0663.100.2.310
Kleiman, G.M., Winograd, P.N., & Humphrey, M.M. (1979). Prosody
and children’s parsing of sentences (Tech. Rep. No. 123). Champaign,
IL: Center for the Study of Reading.
Koriat, A., Greenberg, S.N., & Kreiner, H. (2002). The extraction of structure during reading: Evidence from reading prosody. Memory & Cognition, 30(2), 270–280. Medline
Krivokapić, J. (2007). Prosodic planning: Effects of phrasal length
and complexity on pause duration. Journal of Phonetics, 35(2),
162–179. Medline doi:10.1016/j.wocn.2006.04.001
Kuhn, M.R. (2005). A comparative study of small group fluency instruction. Reading Psychology, 26(2), 127–146. doi:10.1080/02702710590930492
Kuhn, M.R. (2007). Effective oral reading assessment (or why round
robin reading doesn’t cut it). In J.R. Paratore & R.L. McCormack
(Eds.), Classroom literacy assessment: Making sense of what students
know and do (pp. 101–112). New York: Guilford.
Kuhn, M.R., Schwanenflugel, P.J., Morris, R.D., Morrow, L.M., Woo,
D., Meisinger, B., et al. (2006). Teaching children to become flu-
ent and automatic readers. Journal of Literacy Research, 38(4), 357–
387. doi:10.1207/s15548430jlr3804_1
Kuhn, M.R., & Stahl, S.A. (2003). Fluency: A review of developmen-
tal and remedial practices. Journal of Educational Psychology, 95(1),
3–21. doi:10.1037/0022-0663.95.1.3
LaBerge, D., & Samuels, S.J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6(2), 293–323. doi:10.1016/0010-0285(74)90015-2
Ladd, D.R. (1984). Declination: A review and some hypotheses. Phonology Yearbook, 1, 53–74. doi:10.1017/S0952675700000294
Levy, B.A. (2001). Moving the bottom: Improving reading fluency. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 357–379). Timonium, MD: York.
Logan, G.D. (1988). Toward an instance theory of automatization. Psychological Review, 95(4), 492–527. doi:10.1037/0033-295X.95.4.492
Logan, G.D. (1992). Shapes of reaction-time distributions and shapes of learning curves: A test of the instance theory of automaticity. Journal of Experimental Psychology. Learning, Memory, and Cognition, 18(5), 883–914. Medline doi:10.1037/0278-7393.18.5.883
Logan, G.D. (1997). Automaticity and reading: Perspectives from the
instance theory of automatization. Reading & Writing Quarterly,
13(2), 123–146. doi:10.1080/1057356970130203
Logan, G.D., Taylor, S.E., & Etherton, J.L. (1999). Attention and automaticity: Toward a theoretical integration. Psychological Research, 62(2–3), 165–181. doi:10.1007/s004260050049
Madelaine, A., & Wheldall, K. (1999). Curriculum-based measurement of reading: A critical review. International Journal of Disability, Development and Education, 46(1), 71–85. doi:10.1080/103491299100731
Madelaine, A., & Wheldall, K. (2004). Curriculum-based measurement of reading: Recent advances. International Journal of Disability, Development and Education, 51(1), 57–82. doi:10.1080/1034912042000182201
Mathson, D.V., Allington, R.L., & Solic, K.L. (2006). Hijacking fluency and instructionally informative assessments. In T. Rasinski, C. Blachowicz, & K. Lems (Eds.), Fluency instruction: Research-based best practices (pp. 106–119). New York: Guilford.
McCallum, R.S., Sharp, S., Bell, S.M., & George, T. (2004). Silent
versus oral reading comprehension and efficiency. Psychology in
the Schools, 41(2), 241–246. doi:10.1002/pits.10152
McKenna, M.C., & Picard, M.C. (2006). Revisiting the role of mis-
cue analysis in effective teaching. The Reading Teacher, 60(4), 378–
380. doi:10.1598/RT.60.4.8
McKenna, M.C., & Stahl, S.A. (2003). Assessment for reading instruc-
tion. New York: Guilford.
Meisinger, E.B., & Bradley, B.A. (2008). Classroom practices for supporting fluency development. In M.R. Kuhn & P.J. Schwanenflugel (Eds.), Fluency in the classroom (pp. 36–54). New York: Guilford.
Meisinger, E.B., Bradley, B.A., Schwanenflugel, P.J., & Kuhn, M. (in press). Teachers’ perceptions of word callers and related literacy concepts. School Psychology Review.
Meisinger, E.B., Bradley, B.A., Schwanenflugel, P.J., Kuhn, M., & Morris, R. (2009). Myth and reality of the word caller: The relationship between teacher nominations and prevalence among elementary school children. School Psychology Quarterly, 24, 147–159.
Miller, J., & Schwanenflugel, P.J. (2006). Prosody of syntactically complex sentences in the oral reading of young children. Journal of Educational Psychology, 98(4), 839–853. Medline doi:10.1037/0022-0663.98.4.839
Miller, J., & Schwanenflugel, P.J. (2008). A longitudinal study of the development of reading prosody as a dimension of oral reading fluency in early elementary school children. Reading Research Quarterly, 43(4), 336–354. doi:10.1598/RRQ.43.4.2
Mokhtari, K., & Thompson, H.B. (2006). How problems of reading fluency and comprehension are related to difficulties in syntactic awareness skills among fifth graders. Reading Research and Instruction, 46(1), 73–94.
Mostow, J., & Beck, J. (2005, June). Micro-analysis of fluency gains in
a reading tutor that listens. Paper presented at the Society for the
Scientific Study of Reading, Toronto, Canada.
Mostow, J., & Duong, M. (2009, July). Automated assessment of oral reading expressiveness. Proceedings of the 14th International Conference on Artificial Intelligence in Education, Brighton, UK.
National Institute of Child Health and Human Development. (2000).
Report of the National Reading Panel. Teaching children to read: An
evidence-based assessment of the scientific research literature on read-
ing and its implications for reading instruction (NIH Publication No.
00-4769). Washington, DC: U.S. Government Printing Office.
Noordman, L., Dassen, I., Swerts, M., & Terken, J. (1999). Prosodic markers of text structure. In K. van Hoek, A. Kibrik, & L. Noordman (Eds.), Discourse studies in cognitive linguistics: Selected papers from the 5th International Cognitive Linguistics Conference (pp. 133–148). Amsterdam: John Benjamins.
O’Shea, L.J., Sindelar, P.T., & O’Shea, D. (1987). The effects of re-
peated readings and attentional cues on the reading fluency and
comprehension of learning disabled readers. Learning Disabilities
Research, 2(2), 103–109.
Orsolini, M., Fanari, R., Tosi, V., de Nigris, B., & Carrier, R. (2006). From phonological recoding to lexical reading: A longitudinal study on reading development in Italian. Language and Cognitive Processes, 21(5), 576–607. doi:10.1080/01690960500139355
Paris, S.G. (2005). Reinterpreting the development of reading skills. Reading Research Quarterly, 40(2), 184–202. doi:10.1598/RRQ.40.2.3
Paris, S.G. (2008, December). Constrained skills—so what? Oscar Causey address presented at the National Reading Conference, Orlando, FL.
Patel, R., & Grigos, M.I. (2006). Acoustic characterization of the question-statement contrast in 4, 7, and 11 year-old
children. Speech Communication, 48(10), 1308–1318. doi:10.1016/j.
specom.2006.06.007
Perfetti, C. A. (1985). Reading ability. New York: Oxford University
Press.
Perfetti, C.A. (1992). The representation problem in reading acqui-
sition. In P.B. Gough, L.C. Ehri, & R. Treiman (Eds.), Reading
acquisition (pp. 145–174). Hillsdale, NJ: Erlbaum.
Pikulski, J. (2005, May). The critical nature of building vocabulary in early literacy. Keynote presented at the International Reading Association Preconference Institute #8, San Antonio, TX.
Pikulski, J.J., & Chard, D.J. (2005). Fluency: Bridge between decoding and reading comprehension. The Reading Teacher, 58(6), 510–519. doi:10.1598/RT.58.6.2
Pinnell, G.S., Pikulski, J.J., Wixson, K.K., Campbell, J.R., Gough,
P.B., & Beatty, A.S. (1995). Listening to children read aloud: Data
from NAEP’s integrated reading performance record (IRPR) at Grade
4. The Nation’s Report Card. Report No. 23-FR-04. Washington,
DC: Office of Educational Research and Improvement, U.S.
Department of Education.
Plante, E., Holland, S.K., & Schmithorst, V.J. (2006). Prosodic pro-
cessing by children: An fMRI study. Brain and Language, 97(3),
332–342. Medline doi:10.1016/j.bandl.2005.12.004
Pluck, M. (2006). “Jonathon is 11 but reads like a struggling 7-year old”: Providing assistance for struggling readers with a tape-assisted reading program. In T. Rasinski, C. Blachowicz, & K. Lems (Eds.), Fluency instruction: Research-based best practices (pp. 192–208). New York: Guilford.
Powell-Smith, K.A., & Bradley-Klug, K.L. (2001). Another look at
the “C” in CBM: Does it really matter if curriculum-based mea-
surement reading probes are curriculum-based? Psychology in the
Schools, 38(4), 299–312. doi:10.1002/pits.1020
Pressley, M. (2000). What should comprehension instruction be the
instruct ion of ? In M.L. Kamil, P.B. Mosenth al, P.D. Pearson, &
R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 545–561).
Mahwah, NJ: Erlbaum.
Pressley, M., Hilden, K.R., & Shankland, R.K. (2006). An evalua-
tion of end-grade-3 Dynamic Indicators of Basic Early Literacy
Skills (DIBELS): Speed reading without comprehension, predict-
ing little. East Lansing, MI: State University College of Education,
Literacy Achievement Research Center (LARC).
Prior, S.M., & Welling, K.A. (2001). “Read in your head”: A
Vygotskian analysis of the transition from oral to silent reading.
Reading Psychology, 22(1), 1–15. doi:10.1080/02702710151130172
Protopapas, A., Archonti, A., & Skaloumbakas, C. (2007).
Reading ability is negatively related to Stroop interference.
Cognitive Psychology, 54(3), 251–282. doi:10.1016/j.
cogpsych.2006.07.003
Ramus, F., Hauser, M., Miller, C., Morris, P., & Mehler, J. (2000).
Language discrimination by human newborns and by cotton-
top tamarin monkeys. Science, 288(5464), 349–351.
doi:10.1126/science.288.5464.349
RAND Reading Study Group. (2002). Reading for understanding:
Toward an R&D program in reading comprehension. Santa Monica,
CA: RAND Corporation.
Rashotte, C.A., & Torgesen, J.K. (1985). Repeated reading and
reading fluency in learning disabled children. Reading Research
Quarterly, 20(2), 180–188. doi:10.1598/RRQ.20.2.4
Rasinski, T.V. (2004). Assessing reading fluency. Honolulu, HI: Pacific
Resources for Education and Learning.
Rasinski, T.V. (2006). A brief history of reading fluency. In S.J.
Samuels & A.E. Farstrup (Eds.), What research has to say about
fluency instruction (pp. 4–23). Newark, DE: International Reading
Association.
Rasinski, T.V., Blachowicz, C., & Lems, K. (Eds.). (2006). Fluency
instruction: Research-based best practices. New York: Guilford.
Rasinski, T.V., Padak, N.D., Linek, W.L., & Sturtevant, E. (1994).
Effects of fluency development on urban second-grade readers.
Journal of Educational Research, 87(3), 158–165.
Rasinski, T.V., Reutzel, R., Chard, D., & Linan-Thompson, S. (in
press). Reading fluency. In M.L. Kamil, P.D. Pearson, E.B. Moje,
& P. Afflerbach (Eds.), Handbook of reading research (Vol. 4).
Mahwah, NJ: Erlbaum.
Rasinski, T.V., Rikli, A., & Johnston, S. (2009). Reading fluency:
More than automaticity? More than a concern for the prima-
ry grades? Literacy Research and Instruction, 48(4), 350–361.
doi:10.1080/19388070802468715
Rawson, K.A. (2007). Testing the shared resource assumption in
theories of text processing. Cognitive Psychology, 54(2), 155–183.
Rawson, K.A., & Middleton, E.L. (2009). Memory-based processing
as a mechanism of automaticity in text comprehension. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 35(2),
353–369. doi:10.1037/a0014733
Reutzel, D.R. (1996). Developing at-risk readers’ oral reading flu-
ency. In L.R. Putnam (Ed.), How to become a better reading
teacher: Strategies for assessment and intervention (pp. 241–254).
Englewood Cliffs, NJ: Merrill.
Reutzel, D.R. (2003, May). Fluency: What is it? How to assess it? How
to develop it! Paper presented at Reading Research 2003, Orlando,
FL.
Reutzel, D.R., Fawson, P.C., & Smith, J.A. (2008). Reconsidering
silent sustained reading: An exploratory study of scaffolded
silent reading. Journal of Educational Research, 102(1), 37–50.
doi:10.3200/JOER.102.1.37-50
Riedel, B.W. (2007). The relation between DIBELS, reading compre-
hension, and vocabulary in urban first-grade students. Reading
Research Quarterly, 42(4), 546–567. doi:10.1598/RRQ.42.4.5
Samuels, S.J. (2004). Toward a theory of automatic information pro-
cessing in reading, revisited. In R.B. Ruddell & N.J. Unrau (Eds.),
Theoretical models and processes (pp. 1127–1148). Newark, DE:
International Reading Association.
Samuels, S.J. (2006). Reading fluency: Its past, present, and future.
In T. Rasinski, C. Blachowicz, & K. Lems (Eds.), Fluency instruc-
tion: Research-based best practices (pp. 7–20). New York: Guilford.
Samuels, S.J. (2007). The DIBELS tests: Is speed of barking at print
what we mean by reading fluency? Reading Research Quarterly,
42(4), 563–566.
Samuels, S.J., & Farstrup, A.E. (Eds.). (2006). What research has to
say about fluency instruction. Newark, DE: International Reading
Association.
Sanderman, A.A., & Collier, R. (1997). Prosodic phrasing and com-
prehension. Language and Speech, 40(4), 391–409.
Schilling, S.G., Carlisle, J.F., Scott, S.E., & Zeng, J. (2007). Are flu-
ency measures accurate predictors of reading achievement? The
Elementary School Journal, 107(5), 429–448. doi:10.1086/518622
Schreiber, P.A. (1991). Understanding prosody’s role in read-
ing acquisition. Theory Into Practice, 30(3), 158–164.
doi:10.1080/00405849109543496
Schwanenflugel, P.J., Hamilton, A.M., Kuhn, M.R., Wisenbaker,
J.M., & Stahl, S.A. (2004). Becoming a fluent reader: Reading
skill and prosodic features in the oral reading of young read-
ers. Journal of Educational Psychology, 96(1), 119–129.
doi:10.1037/0022-0663.96.1.119
Schwanenflugel, P.J., Hamilton, C.E., Neuharth-Pritchett, S.,
Restrepo, M.A., Bradley, B.A., & Webb, M.-Y. (in press). PAVEd
for success: An evaluation of a comprehensive literacy program
for 4-year-old children. Journal of Literacy Research.
Schwanenflugel, P.J., Kuhn, M.R., Meisinger, E.B., Morris, R.D.,
Foels, P., Woo, D.G., et al. (2008, March). A longitudinal study of
the development of reading fluency and comprehension in the early
elementary school years. Poster session presented at the annual
meeting of the American Educational Research Association, New
York, NY.
Schwanenflugel, P.J., Kuhn, M.R., Morris, R.D., Morrow, L.M.,
Meisinger, E.B., Woo, D.G., et al. (2009). Insights into flu-
ency instruction: Short- and long-term effects of two reading
programs. Literacy Research and Instruction, 48(4), 318–336.
doi:10.1080/19388070802422415
Schwanenflugel, P.J., Meisinger, E., Wisenbaker, J.M., Kuhn, M.R.,
Strauss, G.P., & Morris, R.D. (2006). Becoming a fluent and
automatic reader in the early elementary school years. Reading
Research Quarterly, 41(4), 496–522. doi:10.1598/RRQ.41.4.4
Schwanenflugel, P.J., Morris, R.K., Kuhn, M.R., Strauss, G.P., &
Sieczko, J.M. (2008). The influence of word unit size on the
development of Stroop interference in early word decoding.
Reading and Writing: An Interdisciplinary Journal, 21(3), 177–203.
doi:10.1007/s11145-007-9061-2
Schwanenflugel, P.J., & Ruston, H.P. (2008). Becoming a fluent read-
er: From theory to practice. In M.R. Kuhn & P.J. Schwanenflugel
(Eds.), Fluency in the classroom (pp. 1–16). New York: Guilford.
Schwebel, E.A. (2007). A comparative study of small group fluency
instruction—A replication and extension of Kuhn’s (2005) study.
Unpublished master’s thesis, Kean University, Union, NJ.
Shanahan, T. (2005, May). Improving instruction for young children:
Making sense of the National Literacy Panel. Paper presented at the
International Reading Association Preconference Institute #8,
San Antonio, TX.
Shapiro, E.S. (2004). Academic skills problems: Direct assessment and
intervention (3rd ed.). New York: Guilford.
Shinn, M.R., & Shinn, M.M. (2002). AIMSweb training workbook:
Administration and scoring of reading maze for use in general outcome
measurement. Eden Prairie, MN: Edformation.
Shukla, M., Nespor, M., & Mehler, J. (2007). An interaction be-
tween prosody and statistics in the segmentation of fluent
speech. Cognitive Psychology, 54(1), 1–32. doi:10.1016/j.
cogpsych.2006.04.002
Sibley, D., Biwer, D., & Hesch, A. (2001). Establishing curriculum-
based measurement oral reading fluency performance standards
to predict success on local and state tests of reading achievement.
(ERIC Document Reproduction Service No. ED453527)
Simpson, E.A., Oliver, W.T., & Fragaszy, D. (2008). Super-
expressive voices: Music to my ears? Behavioral and Brain Sciences,
31(5), 596–597.
Smith, C.L. (2004). Topic transitions and durational prosody in
reading aloud: Production and modeling. Speech Communication,
42(3–4), 247–270. doi:10.1016/j.specom.2003.09.004
Snedeker, J., & Trueswell, J. (2003). Using prosody to avoid am-
biguity: Effects of speaker awareness and referential context.
Journal of Memory and Language, 48(1), 103–130. doi:10.1016/
S0749-596X(02)00519-3
Snedeker, J., & Yuan, S. (2008). Effects of prosodic and lexical
constraints on parsing in young children (and adults). Journal
of Memory and Language, 58(2), 574–608. doi:10.1016/j.
jml.2007.08.001
Snow, C.E., Burns, M.S., & Griffin, P. (1998). Preventing reading dif-
ficulties in young children. Washington, DC: National Academy
Press.
Stahl, S.A., Heubach, K., & Holcomb, A. (2005). Fluency-oriented
reading instruction. Journal of Literacy Research, 37(1), 25 –60.
doi:10.1207/s15548430jlr3701_2
Stanovich, K.E. (1986). Matthew effects in reading: Some con-
sequences of individual differences in the acquisition of lit-
eracy. Reading Research Quarterly, 21(4), 360–407. doi:10.1598/
RRQ.21.4.1
Stanovich, K.E., Cunningham, A.E., & West, R.F. (1981). A longitu-
dinal study of the development of automatic recognition skills in
first graders. Journal of Reading Behavior, 13(1), 57–74.
Stecker, P.M., & Fuchs, L.S. (2000). Effecting superior achieve-
ment using curriculum-based measurement: The importance of
individual progress monitoring. Learning Disabilities Research &
Practice, 15(3), 128–135. doi:10.1207/SLDRP1503_2
Stecker, S.K., Roser, N.L., & Martinez, M.G. (1998). Understanding
oral reading fluency. In T. Shanahan & F.V. Rodriguez-Brown
(Eds.), 47th yearbook of the National Reading Conference (pp. 295–
310). Chicago: National Reading Conference.
Stolterfoht, B., Friederici, A.D., Alter, K., & Steube, A. (2007).
Processing focus structure and implicit prosody during silent
reading: Differential ERP effects. Cognition, 104(3), 565–590.
doi:10.1016/j.cognition.2006.08.001
Surányi, Z., Csépe, V., Richardson, U., Thompson, J.M., Honbolygó,
F., & Goswami, U. (2009). Sensitivity to rhythmic parameters in
dyslexic children: A comparison of Hungarian & English. Reading
and Writing, 22(1), 41–56. doi:10.1007/s11145-007-9102-x
Sweet, A.P., & Snow, C.E. (Eds.). (2003). Rethinking reading compre-
hension. New York: Guilford.
Swets, B., Desmet, T., Hambrick, D.Z., & Ferreira, F. (2007). The
role of working memory in syntactic ambiguity resolution: A psy-
chometric approach. Journal of Experimental Psychology: General,
136(1), 64–81. doi:10.1037/0096-3445.136.1.64
Teale, W.H., Paciga, K.A., & Hoffman, J.L. (2007). Beginning read-
ing instruction in urban schools: The curriculum gap ensures a
continuing achievement gap. The Reading Teacher, 61(4), 344–348.
doi:10.1598/RT.61.4.8
Temperley, D. (2009). Distributional stress regularity: A corpus
study. Journal of Psycholinguistic Research, 38(1), 75–92.
doi:10.1007/s10936-008-9084-0
Thomson, J.M., Fryer, B., Maltby, J., & Goswami, U. (2006).
Auditory and motor rhythm awareness in adults with dys-
lexia. Journal of Research in Reading, 29(3), 334–348.
doi:10.1111/j.1467-9817.2006.00312.x
Torgesen, J.K., & Hudson, R.F. (2006). Reading fluency: Critical
issues for struggling readers. In S.J. Samuels & A.E. Farstrup
(Eds.), What research has to say about fluency instruction (pp. 130–
158). Newark, DE: International Reading Association.
Vellutino, F.R., Fletcher, J.M., Snowling, M.J., & Scanlon, D.M.
(2004). Specific reading disability (dyslexia): What have we
learned in the past four decades? Journal of Child Psychology
and Psychiatry, and Allied Disciplines, 45(1), 2–40.
doi:10.1046/j.0021-9630.2003.00305.x
Wayman, M.M., Wallace, T., Wiley, H.I., Tichá, R., & Espin, C.A.
(2007). Literature synthesis on curriculum-based measurement
in reading. The Journal of Special Education, 41(2), 85–120. doi:10.
1177/00224669070410020401
Wells, B., & Peppe, S. (2003). Intonation abilities of children with
speech and language impairments. Journal of Speech, Language, and
Hearing Research, 46(1), 5–20. doi:10.1044/1092-4388(2003/001)
Wennerstrom, A. (2001). The music of everyday speech: Prosody and
discourse analysis. London: Oxford University Press.
Whalley, K., & Hansen, J. (2006). The role of prosodic sensitivity
in children’s reading development. Journal of Research in Reading,
29(3), 288–303. doi:10.1111/j.1467-9817.2006.00309.x
Wheeldon, L., & Lahiri, A. (1997). Prosodic units in speech pro-
duction. Journal of Memory and Language, 37(3), 356–381.
doi:10.1006/jmla.1997.2517
Wixson, K.K., & Lipson, M.Y. (2009, May). Response to interven-
tion: Promises, possibilities, and potential problems for reading pro-
fessionals. Paper presented at the Reading Research Conference,
Minneapolis, MN.
Wolf, M., & Katzir-Cohn, T. (2001). Reading fluency and its inter-
vention. Scientific Studies of Reading, 5(3), 211–229. doi:10.1207/
S1532799XSSR0503_ 2
Wood, C. (2006). Metrical stress sensitivity in young chil-
dren and its relationship to phonological awareness and
reading. Journal of Research in Reading, 29(3), 270–287.
doi:10.1111/j.1467-9817.2006.00308.x
Young, A., & Bowers, P.G. (1995). Individual difference and text
difficulty determinants of reading fluency and expressive-
ness. Journal of Experimental Child Psychology, 60(3),
428–454.
Zutell, J., & Rasinski, T.V. (1991). Training teachers to attend to
their students’ oral reading fluency. Theory Into Practice, 30(3),
211–217. doi:10.1080/00405849109543502
Zvonik, E., & Cummins, F. (2003). The effect of surrounding phrase
lengths on pause duration. Retrieved December 16, 2009, from
www.isca-speech.org/archive/eurospeech_2003/e03_0777.html
Melanie R. Kuhn is an associate professor in literacy
education at Boston University, Massachusetts, USA; e-mail
melaniek@bu.edu.
Paula J. Schwanenflugel is a professor of educational
psychology, psychology, linguistics, and cognitive science at
The University of Georgia, Athens, USA; e-mail pschwan@
uga.edu.
Elizabeth B. Meisinger is an assistant professor of school
psychology at The University of Memphis, Tennessee, USA;
e-mail bmsinger@memphis.edu.