Scientific Studies of Reading
ISSN: 1088-8438 (Print) 1532-799X (Online) Journal homepage: https://www.tandfonline.com/loi/hssr20
How Individual Differences Interact With Task
Demands in Text Processing
Zuowei Wang, John Sabatini, Tenaha O’Reilly & Gary Feng
To cite this article: Zuowei Wang, John Sabatini, Tenaha O’Reilly & Gary Feng (2017) How
Individual Differences Interact With Task Demands in Text Processing, Scientific Studies of
Reading, 21:2, 165-178, DOI: 10.1080/10888438.2016.1276184
To link to this article: https://doi.org/10.1080/10888438.2016.1276184
Published online: 25 Jan 2017.
How Individual Differences Interact With Task Demands in Text Processing
Zuowei Wang , John Sabatini, Tenaha O’Reilly, and Gary Feng
Educational Testing Service
Reading is affected by both situational requirements and one’s cognitive
skills. The current study investigated how individual differences interacted
with task requirements to determine reading behavior and outcome. We
recorded the eye movements of college students, who differed in reading
efficiency, while they completed a multiple-choice (MC) comprehension test
in two within-subject conditions: one in which they read passages and
answered MC questions as in a typical reading test and one in which they
wrote a summary before answering the MC questions. We found that students spent more time reading the text in the summary-writing condition, resulting in a time benefit when they answered the MC questions.
This time benefit was larger for students who had relatively low reading
efficiency. These results demonstrate that task requirements and individual differences interact to affect reading behavior and performance. Implications for reading practice and assessment are discussed.
Reading is a goal-driven, task-oriented activity that draws upon a set of cognitive processes, skills, and
knowledge (Bohn-Gettler & Kendeou, 2014; McCrudden, Magliano, & Schraw, 2010; McCrudden &
Schraw, 2007). Reading goals influence and interact with cognitive processing (e.g., standards of
coherence), thus affecting how one strategically allocates one's cognitive resources to build understanding from text sources (Van Den Broek, Bohn-Gettler, Kendeou, Carlson, & White, 2011).
One important setting where goals and tasks are relatively well defined is during a reading
comprehension test (Hornof, 2008; Santman, 2002). In the prototypical test design, a student
reads a passage and answers a set of comprehension questions. One would predict that in such a
context, students will apply the set of processes that they perceive as sufficient to accomplish the task
(i.e., answer all questions correctly). It is far less likely that they will linger over the passage, ask
themselves other questions about it, or engage in other kinds of tasks that one might perform when
reading outside a test situation. If the questions are all multiple choice (MC), other strategic choices
are less likely to be called upon than if one is asked to construct or produce a response (Rupp, Ferne,
& Choi, 2006). Thus, a reading comprehension test situation is a relatively natural context for
investigating goal/task-oriented reading.
We were curious about the following question: How might different students adjust their
cognitive processes when they themselves have different underlying skill profiles? That is, how do
individual differences interact with different goals and task demands? Examining the interaction
between individual differences and the task demands of reading may provide insights for both reading
practice and reading assessment. Specifically, we used reading efficiency as an individual difference
measure that reflected both reading speed and accuracy, and we manipulated the commonly used
MC reading comprehension test by adding a summary writing task.
CONTACT Zuowei Wang email@example.com Educational Testing Service, 660 Rosedale Road, MS 13E, Princeton, NJ 08541.
© 2017 Educational Testing Service
SCIENTIFIC STUDIES OF READING
2017, VOL. 21, NO. 2, 165–178
In the following section we review the relevant literature that contextualized and motivated
the current study. We discuss how task goals impact text processing, how strategies such as
summary writing encourage deeper processing, how students process MC-format tests, and how
individual differences such as reading efficiency may interact with task goals to impact text processing.
Task goal impacts text processing
Reading comprehension involves the integration of the text base with related prior knowledge to
form a coherent situation model of the text (Kintsch, 1998). During this process, one’s reading goals
play an important role by setting up the context for making appropriate inferences (Kintsch & Van
Dijk, 1978). In support of this view, research has revealed that certain instructions change readers’
goal-focusing process, which in turn affects how the reading material is processed. In a review,
McCrudden and Schraw (2007) categorized four types of “relevance” instructions that can play a role in readers’ goals and text processing: (a) using questions to focus readers’ attention on targeted segments of the text, (b) asking readers to explain reasons or mechanisms based on their understanding, (c) reading from a particular perspective, or (d) reading for a specific purpose (e.g., study
vs. entertainment). Following this categorization, McCrudden et al. (2010) designed an experiment
and manipulated the instructions given to participants to create two types of perspectives to
approach the reading material. Results showed that information that was relevant to the assigned
perspective was read slower and remembered better. In addition, interviews performed after the
experiment indicated that the instructions changed readers’ goals, which accounted for different
strategy usage during reading.
In a related study, Van Den Broek, Lorch, Linderholm, and Gustafson (2001) asked college
students to read expository texts under two conditions. In one condition, students read the texts
under the impression that they were studying these materials; in the other condition, they read the
texts for entertainment. The participants’ thinking process was evaluated via both thinking aloud
(online) and free recall (offline). Results showed that students with a study goal produced more
coherence-building inferences, whereas students having an entertainment goal had more associations
and evaluations of the materials. Furthermore, students having a study goal remembered the
materials better. In short, readers’ goals affect reading strategies, which in turn result in differential
recruitment of cognitive processes during reading.
Using goals to encourage deep comprehension
One of the relevance instructions proposed by McCrudden and Schraw (2007) was elaborative
interrogation, a common reading strategy. Such strategies are likely to promote deeper text processing
by impacting reading goals. For example, generating questions, a method shown to improve reading
performance (Rosenshine, Meister, & Chapman, 1996), helps readers form clear reading goals (i.e., to
answer these questions) and thus promotes deeper processing (Graesser & Lehman, 2011).
Among the efforts to improve comprehension, summarization is perhaps one of the most studied
reading strategies (Head, Readence, & Buss, 1989). The process to summarize a text shares great
similarities with comprehension itself. In forming a summary, one needs to omit trivial and
redundant information, substitute lower level information with superordinate concepts, and find
or create topic sentences (Brown, Campione, & Day, 1981). These procedures resemble the formation of a “macrostructure,” a key component of comprehension in which one disregards unnecessary information and makes inferences from texts (Kintsch, 1998; Kintsch & Van Dijk, 1978). This
macrostructure may help encourage deeper processing by forcing students to go beyond the
disconnected details of text and focus on how the text is organized at a global level, that is, how
the parts of the text fit together.
Whereas the basic propositional elements of comprehension are often considered automatic
(Kintsch, 1998), summarization may require deliberate effort in condensing the contents to get the
gist of text (Brown, Day, & Jones, 1983). The extra effort required in a summarization task
encourages deep processing and thus improves both comprehension itself (Bean & Steenwyk,
1984; Doctorow, Wittrock, & Marks, 1978) and metacomprehension—the monitoring of one’s
comprehension level (Thiede & Anderson, 2003).
Clearly, summarization is an effective reading strategy, as it may encourage students to process a
text in a more global fashion. More specifically, from the goal-focusing perspective (McCrudden &
Schraw, 2007), summarization may force readers to focus on the relevant information in order to
develop a coherent mental model of the text.
Goals and processing in MC reading tests
Compared to a summary writing task, the widely used MC reading comprehension test may
encourage a different type of goal structure and processing in readers’ minds. In a typical MC
reading test, comprehension is assessed by answering questions that are related to information from
the text. This is based on the assumption that the quality of comprehension should be highly
correlated with how well readers answer these questions. One critique of this approach is that
MC questions target only a subsample of the full text and they do not directly measure global text
coherence and the mental model of readers (Kintsch, 1998; Kintsch & Van Dijk, 1978).
In line with this thinking, Rupp et al. (2006) tested 10 adult readers (ages 18–34) with a
standardized MC reading comprehension test and used semistructured interviews to investigate
the thinking process of these subjects when they worked on the MC questions. Results showed that
these subjects did not read the text in a linear way. They treated the questions as local problems to be
solved by looking for answers through reading the text. The authors concluded that “the construct of
reading comprehension is assessment specific” (p. 441). In other words, a traditional-style reading
comprehension test may encourage local processing at the expense of global processing, because the
default aim in such situations is to answer each question correctly. In the absence of a more global processing goal, this strategy is not surprising, because readers are trying to be as efficient as possible.
However, one important question is whether specific task goals affect different readers in the
same way. In the next section we review prior studies on how individual differences may interact
with task requirements in reading.
Individual differences in text processing and their interaction with task characteristics
People differ not only in how well they read (product) but also in how they read (process), and the
latter is often related to the former. In an eye-tracking study of college students reading expository
texts, Hyönä, Lorch, and Kaakinen (2002) identified four types of readers: fast linear readers, who
read the texts rapidly in sequence; slow linear readers, who read linearly but often repeatedly read
the same sentence; nonselective reviewers, who often went back and read previously read sentences;
and topic structure processors, who paid special attention to headings. Among the four types of
readers, the topic structure processors had the highest working memory capacity and they showed
the highest level of comprehension in a summary writing task. This study shows a possible advantage
for style of text processing but also suggests that the processing style may be sensitive to individual
differences (in this case, working memory capacity).
Even in an MC reading test, readers show differences in how they approach the reading materials.
In one experiment, Vidal-Abarca, Salmerón, and Mañá (2011) demonstrated a difference among
high school students. Whereas some students read through the passages before looking at the
corresponding MC questions, other students chose to start from the questions. Performance data
showed that the group who chose to read the passages first had an advantage over the other group.
In another two experiments, Vidal-Abarca et al. (2011) showed that higher skilled readers better monitored their comprehension and performed a more efficient search for relevant information. The individual differences in text processing (Hyönä et al., 2002; Vidal-Abarca et al., 2011) indicate that
if one manipulates the task requirement of reading assessments, these individual differences could
lead to (dis)advantages for a particular group of readers in certain conditions.
Some recent studies have directly investigated how individual differences and task characteristics
interacted to affect reading behavior. Bohn-Gettler and Kendeou (2014) measured individual
differences in readers’working memory capacity and manipulated both the context of reading and
the structure of text. Readers’cognitive processes during the task were evaluated with a think-aloud
paradigm; readers’memory for the reading materials was assessed by a summary writing task.
Results revealed that an interaction between reading context and working memory affected readers’
cognitive processes during reading. For example, high working memory readers engaged in more
paraphrasing when they read in a study context than in an entertainment context, but low working
memory readers did not show significant differences in these two reading contexts. In addition, the
context of reading and the structure of the text (descriptive vs. problem-response) also affected
readers’memory of the materials.
The study by Bohn-Gettler and Kendeou (2014) is one of the few that investigated the interaction
between individual difference and task characteristics. In this study, we seek to extend this line of
research. First, we use an eye-tracking methodology, rather than think-aloud, to have a cleaner
measure of processing. The think-aloud paradigm requires that participants pause after reading each
sentence and report their thoughts. This interrupts readers’ normal reading, and thus the interpretation of results may be limited because of this methodology. Second, we look at reading efficiency as
the individual difference measure, rather than working memory. Reading efficiency or rate is a
commonly used metric across grade levels as an indicator of the quality of foundational text
processing skills and has been found to have a moderate relationship to reading comprehension,
as reviewed next.
Individual differences in reading efficiency
One way to distinguish individual differences in reading ability is via reading fluency or efficiency.
Skilled adult oral reading falls in the range of about 165 to 177 words per minute (WPM) on
moderately easy passages (readability in third to eighth grade; Baer, Kutner, Sabatini, & White,
Silent reading in skilled adults is from 20 to 50 WPM faster than oral reading, but the two are
highly correlated (Carver, 1990; Rayner, Pollatsek, Ashby, & Clifton, 2012). Silent reading is more
natural for adult readers. This might be one reason why historically comprehension has been found
to be better in silent versus oral reading (Mead, 1917; Pintner, 1913).
In the current experiment, we used a cloze procedure (used in curriculum-based measurement
and sometimes referred to as a Maze task) that has been found to be a strong indicator of reading efficiency and sometimes a proxy for basic reading comprehension (Wayman, Wallace, Wiley, Ticha,
& Espin, 2007). In this task design, words from complete sentences of a passage were omitted,
resulting in blanks distributed across the passage. The reader must fill in these blanks by choosing
from several easy options.
We prefer the term reading efficiency over reading fluency in this study, because we used the cloze
procedure with both oral and silent reading conditions. The shared variance between reading
efficiency and reading fluency is likely very high (Eason, Sabatini, Goldberg, Bruce, & Cutting,
2013). Reading fluency has been most often defined and operationalized by some combination of
automatized word recognition (LaBerge & Samuels, 1974; Torgesen, Wagner, & Rashotte, 1999),
along with aspects related to continuous text reading such as prosody or expression (Daane,
Campbell, Grigg, Goodman, & Oranje, 2005; Fuchs, Fuchs, Hosp, & Jenkins, 2001; Klauda &
Guthrie, 2008; Samuels, 2006). Reading fluency, therefore, is typically measured by having individuals read aloud, so that accuracy, speed, and prosody can be observed, noting miscues such as word
errors, skips, pauses, regressions, and so on. In contrast, our cloze procedure included both silent
and oral reading, and it evaluated basic understanding by requiring students to monitor passage
meaning, minimally at the sentence level, to respond accurately to sentence completion items.
Because our subjects achieved a near-perfect accuracy score on the basic comprehension aspect of
the task (i.e., sentence completion items), we used the term reading efficiency to describe their speed
to read and complete these sentences with basic comprehension.
We reasoned that even college students would show speed differences in completing this low-level
reading task and that those differences in efficiency or rate could be associated with their processing
style. We would not predict significant variation in accuracy, given that the type of comprehension
task used in this study was originally intended for middle school students and therefore should be
relatively easy for a college-level reader. However, even with a simpler comprehension task, less efficient reading could be a symptom of underlying foundational skill difficulties (e.g., nonautomatized word recognition; National Research Council, 2012) or less reading experience more
generally (Stanovich, West, Cunningham, Cipielewski, & Siddequi, 1996). Whatever the cause,
inefficient reading may impact processing and behavior while answering more traditional MC
questions. Thus, our goal was to investigate the impact and interaction of individual differences of
low- and high-efficiency readers when confronted with different task demands during a comprehension test.
In the current study, we compared college students’ reading behavior and subsequent performance
in a regular MC reading task and a modified version that required the same students to write a
summary before answering MC questions. Specifically, we evaluated how students who differed in
their basic reading efficiency were affected by the addition of a summary writing task for some of the
passages in an MC reading test. We hypothesized that (a) students would engage in more active
reading behavior when asked to write a summary of a text, evidenced by spending more time reading
the text while writing the summary and less time rereading when asked subsequent MC questions
about the text, (b) students would be faster in finding the answers to subsequent reading compre-
hension questions after writing a summary, and (c) this benefit of summary writing would be larger
for less efficient readers.
Method
Participants
Sixty undergraduate students (14 male) from a large public university in the southwest United States
participated in the study. They were recruited via distribution of fliers in large undergraduate
sociology and human development courses. No history of vision or reading difficulties was reported. Participants received $15 in compensation for their participation. Due to computer
error, one student’s behavioral data and another student’s eye-tracking data were lost.
Efficiency of Basic Reading Comprehension
The Efficiency of Basic Reading Comprehension (EBRC) task (Sabatini, Bruce, Steinberg, & Weeks,
2015) was used as a measure of efficiency of basic reading comprehension. EBRC was developed for
students ranging from Grades 5–10, and the reliability for Grade 10 (as reflected by Cronbach’s
alpha) was .95. In the task, expository passages were presented on a computer screen, building one
sentence at a time. Students read a sentence and pressed the space bar on their keyboard once they
were ready to view the next sentence. After pressing the space bar, the current sentence would turn
gray (making it harder to read) and the next sentence would build onto the passage. For 75% of the
sentences, one word (never in the first sentence) was missing, resulting in a blank in these sentences.
Students needed to fill these blanks by selecting one of three options using their keyboard. The time
between each key press was recorded by the computer. In total, the EBRC included four passages
ranging from 234 to 345 words in length and approximately 13 to 23 sentences in each passage. We
calculated the average WPM for each passage by dividing the number of words of a passage by the
time needed to finish reading that passage. Students read two of the passages silently and the other
two aloud. The order of the passages and conditions was counterbalanced.
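The per-passage WPM computation described above — a passage's word count divided by the time (in minutes) needed to finish it — can be sketched as follows. This is an illustrative snippet, not the study's analysis code; the function name and sample values are ours:

```python
def words_per_minute(word_count, reading_time_seconds):
    """Average reading speed for one passage, in words per minute."""
    return word_count * 60.0 / reading_time_seconds

# Hypothetical example: a 300-word passage finished in 100 seconds.
wpm = words_per_minute(300, 100)  # 300 words / (100/60) min = 180.0 WPM
```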
Reading comprehension test
The four passages used in the EBRC task were recycled in the reading comprehension test, but the
two tasks did not share any questions (Sabatini et al., 2015). In other words, students saw the same
passage twice, once in the efficiency task and once in the comprehension task. The reliability of the
comprehension test was .83 for Grade 10 students. During the test, students read four passages with
six to eight MC questions for each passage. Each MC question had three or four options. The
questions asked students to identify important details and main ideas and to generate minor
inferences (e.g., anaphora resolution). However, the comprehension questions did not require
students to make more demanding, knowledge-based inferences that involve the integration of topical background knowledge and textual information.
The four passages were divided into two within-participant conditions. In one condition, students
were asked to write a summary before they worked on MC questions. For summary writing, students
were instructed that they should (a) include the main ideas and only the main ideas from the
passage, (b) use their own words, and (c) not include their opinions or information from outside the
passage. In the other condition, students directly worked on MC questions without a summary, as is
typical in reading comprehension tests. That is, every student saw four passages, two of which also
included a summary writing task before answering MC questions and two with only MC questions.
No other instructions were provided.
Each passage remained within the left half of the computer screen (no scrolling was needed) when
students worked on summarization or MC questions, which were presented on the right half of the
screen. In other words, the passage was available while answering the summary and MC questions.
Throughout the reading comprehension task, students were told to complete the task at their own
pace, although they were aware that their eye movements during the task were tracked.
To calculate the time that students spent looking at various contents of the reading task (e.g., texts of
passages, reading comprehension questions), we used eye tracking to record students’ eye movements with the Tobii T60 eye-tracking system (Tobii Technology, Falls Church, VA). The system
consists of a 17-in. LCD screen on which the aforementioned test materials were presented and
infrared cameras below the screen to capture subjects’ pupil location and movement. The infrared cameras have a sampling rate of 60 Hz to identify subjects’ fixation location on the LCD screen. In
our study, students sat in front of the screen with their eyes about 60 cm from the eye tracker. Before
the eye tracking, we performed a 9-point calibration, a function provided by Tobii Studio so that the
eye-tracking program can map the location of subjects’ pupils to where they are looking on the screen.
To facilitate the analysis of eye-tracking data, we defined three types of areas of interest: passages,
MC questions (including question stems and options), and summary writing areas (including the summary writing instructions and the text box where students typed their summary responses).
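As a rough illustration of how such area-of-interest (AOI) looking times can be derived from gaze data, the sketch below credits each fixation's duration to the rectangular AOI that contains it. The coordinates and AOI layout here are hypothetical, not the study's actual screen geometry or Tobii Studio's implementation:

```python
# Hypothetical AOI rectangles as (left, top, right, bottom) pixel bounds,
# loosely following the passage-left / tasks-right layout described above.
AOIS = {
    "passage": (0, 0, 640, 1024),
    "mc_questions": (640, 0, 1280, 512),
    "summary": (640, 512, 1280, 1024),
}

def dwell_times(fixations):
    """Sum fixation durations (in seconds) per AOI.

    fixations: iterable of (x, y, duration) tuples derived from gaze samples.
    """
    totals = {name: 0.0 for name in AOIS}
    for x, y, duration in fixations:
        for name, (left, top, right, bottom) in AOIS.items():
            if left <= x < right and top <= y < bottom:
                totals[name] += duration
                break  # each fixation belongs to at most one AOI
    return totals

# Two fixations land on the passage area, one on the MC question area.
totals = dwell_times([(100, 200, 0.25), (120, 210, 0.5), (700, 100, 0.2)])
```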
Procedure
After signing the consent form, students started with the EBRC task. They then performed the
reading comprehension test with their eye movements tracked. Two of the passages required
students to write a summary before answering MC questions, and the other two did not. The
order of this summary condition was counterbalanced.
Students’ summary responses were scored manually by two human coders based on a holistic rubric that took into consideration (a) the number of key ideas from the passages; (b) the quality of paraphrasing (i.e., not directly copied from the passages); (c) accuracy of information; and (d) objectivity, that is, whether the response contained personal opinions that were not included in the original texts. The two coders discussed each summary response until they reached agreement on its quality, rated on a scale of 0 to 3.
Results
Students performed the EBRC task in two modality conditions: They read half of the passages
silently and the other half aloud. Students achieved near-perfect scores on word selection (the basic comprehension part of the efficiency task), with an average accuracy of 99%, which confirms that
they read for understanding in both conditions.
For the WPM measure, the two modality conditions showed some differences. Silent reading (M = 192, SD = 53) was faster than oral reading (M = 165, SD = 32), t(57) = 5.77, p < .001. Because the correlation between these two measures was high, r(56) = .764, p < .001, we combined the two conditions and used the average WPM across the four passages as the performance score on the EBRC task. Using a median split based on this combined score (M = 178, SD = 40), students were divided into high- and low-efficiency groups for the analyses that follow (Table 1).
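The median-split grouping can be sketched as follows (illustrative values, not the study's data; here ties at the median go to the high group, one of several possible conventions):

```python
def median_split(scores):
    """Divide scores into low and high groups at the median."""
    ordered = sorted(scores)
    n = len(ordered)
    if n % 2 == 0:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    else:
        median = ordered[n // 2]
    low = [s for s in scores if s < median]
    high = [s for s in scores if s >= median]
    return low, high, median

# Hypothetical combined WPM scores for six students.
low, high, med = median_split([150, 160, 170, 190, 200, 210])
# med = 180.0; the low group holds the three slower readers.
```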
We ran a two-way analysis of variance to see how efficiency group and modality condition were related to the dependent variable WPM. There was a main effect of efficiency group, F(1, 112) = 35.15, p < .01; the high-efficiency group read about 63 words per minute faster than the low-efficiency group (Table 1). The effect of modality condition was not significant, F(1, 112) = 1.69, p = .20. However, the interaction between reading efficiency and modality condition was significant, F(1, 112) = 9.39, p < .01. As shown in Table 1, this interaction arose because the difference between oral and silent reading was much larger in the high-efficiency group (44 WPM) than in the low-efficiency group (10 WPM).
Table 1. Descriptive statistics for performance on tasks by efficiency group.
                                      Low, M (SD)     High, M (SD)    All, M (SD)
WPM (oral & silent)                   147 (19)        210 (30)        179 (40)
  Oral                                142 (20)        188 (24)        165 (32)
  Silent                              152 (21)        232 (46)        192 (53)
First reading time
  Summary                             104.3 (59.0)    75.7 (47.0)     90.0 (54.8)
  No summary                          30.1 (33.4)     14.9 (16.0)     22.5 (27.1)
Rereading time
  Summary                             33.6 (24.7)     31.6 (23.2)     32.6 (23.8)
  No summary                          65.2 (42.8)     38.8 (31.3)     52.0 (39.5)
MC time (time after first reading)
  Summary                             182.3 (58.2)    154.1 (48.8)    168.4 (55.2)
  No summary                          228.5 (90.3)    164.8 (55.8)    196.7 (81.0)
MC accuracy
  Summary                             .87 (.12)       .91 (.10)       .89 (.11)
  No summary                          .88 (.13)       .93 (.06)       .91 (.11)
Summary score                         2.26 (.84)      2.29 (.76)      2.28 (.79)
Note. All time measures are in seconds. WPM = words per minute; MC = multiple-choice.
Reading comprehension processes
An analysis of variance on participants’ first reading time of the comprehension passages (i.e., time spent reading the text before students completed the first MC question) showed that, overall, the high-efficiency group read the text significantly faster than the low-efficiency group, F(1, 56) = 6.156, p < .01, and that reading time in the summary writing condition was longer than in the no summary condition, F(1, 56) = 103.77, p < .01 (Table 1). The increased reading time in the summary condition is expected, as the extra time was presumably spent reading the text closely enough to write the summary. Note that this effect is not due to the extra time spent writing the summary, as the measure included here reflects only the time spent looking at the passage (a benefit of eye tracking). The interaction between summary condition and efficiency group was not significant, F(1, 56) = 1.03, p = .31 (Figure 1).
For rereading time (i.e., time spent rereading the text after answering the first question), in contrast, there was a significant interaction between summary condition and efficiency group, F(1, 55) = 9.16, p < .01. As shown in Figure 2, after writing a summary, the low-efficiency group spent much less time rereading the passage to answer the MC questions than when no summary was written; in contrast, the effect of summary writing was smaller for the high-efficiency group. The effect of efficiency group was not significant, F(1, 55) = 3.43, p = .07; the effect of summary condition was significant, F(1, 55) = 26.2, p < .01.
Figure 1. First reading time of passages in the summary and no summary conditions by group.
Figure 2. Rereading time of passages in the summary and no summary conditions by group.
Figure 3 shows the total time that participants spent reading and answering the comprehension questions as a function of summary condition and efficiency group. The analyses revealed that the effect of efficiency group was significant, F(1, 55) = 7.54, p < .01; the effect of summary condition was significant, F(1, 55) = 25.03, p < .01; and the interaction between efficiency group and summary condition was significant, F(1, 55) = 7.41, p < .01. As can be seen from the figure, when students
were asked to summarize the text before answering the MC questions, the low-efficiency group
showed a benefit in reduced time to answer subsequent MC questions. In other words, the time to
complete comprehension questions for the low-efficiency group was shorter after a summary than
when there was no summary.
In terms of the students’ accuracy on the MC questions, the accuracy rate was generally high, averaging 91% (SD = 11%) for the no summary passages and 89% (SD = 11%) for the summary passages, indicating that a speed–accuracy trade-off is unlikely. To test this, we examined the effects of efficiency group and summary condition on response accuracy. As expected, none of these factors was significantly related to accuracy: efficiency group, F(1, 56) = 3.61, p = .062; summary condition, F(1, 56) = 1.435, p = .236; and their interaction, F(1, 56) = .046, p = .83.
In addition, because students went through both the summary and no summary conditions in a
repeated measure, within-subject design, the order of summary versus no summary conditions might
affect their behavior (i.e., Passages 1 and 3 required summary writing vs. Passages 2 and 4). For example,
students who took the summary condition first may or may not change their reading strategies later in a
condition that did not require summary writing (i.e., a transfer effect to subsequent conditions). To test
this, we looked at whether the order of the summary writing condition affected students’ reading behavior. For first reading time, no order effect was identified, F(1, 54) = .03, p = .86, nor did order interact with efficiency group, F(1, 54) = .55, p = .46. In other words, the low-efficiency participants who first wrote a summary did not carry over or transfer any processing benefit to a later condition in which no summary was required. In that case, the low-efficiency participants reverted to the minimal processing that cost them more rereading time when answering the comprehension questions.
Finally, the quality of the summary responses of the two groups was compared. As shown in Table 1,
the two groups produced summaries of similar quality; the high-efficiency group on average had a score of
2.29 and the low-efficiency group 2.26 (out of 3).
A growing body of research has underscored the importance of task goals and relevance processing on
reading comprehension (McCrudden et al., 2010; Van Den Broek et al., 2011; Vidal-Abarca et al., 2011).
This body of research indicates that goals or tasks when reading texts can impact what information is
attended to, remembered, and comprehended. Other researchers have suggested that the impact of task
goals may be modulated by individual differences (Bohn-Gettler & Kendeou, 2014; Hyönä et al., 2002).
In the current study, we were interested in determining whether one potential individual difference,
reading efficiency, would modulate the impact of reading goals on comprehension.
Figure 3. Total time (after first reading) needed to answer reading comprehension questions in summary and no summary
conditions by group. Note. MC = multiple-choice.
More specifically, we selected a common reading situation, a traditional-style reading comprehension
test, as the context for reading passages. In such situations, students may adopt different reading
processes than in nontesting situations. In fact, as shown in Figure 1, students spent much less time
reading the two passages in the no summary condition, knowing that they were to answer
only MC questions, compared to the two passages in the summary condition. In a testing
context, students appear to spend their time “searching for answers” rather than reading for coherence, model
building, and global understanding. As noted, this is understandable behavior when a minimalist
processing strategy is sufficient for accurately responding to questions.
Given this finding, we also wanted to determine whether summary writing would help change
students’ default test-taking strategies and encourage them to read the entire text rather than search
for answers. We also hypothesized that this strategy effect (providing the summary task) might
be modulated by an individual difference, namely, the student’s level of reading efficiency. Students
who are more efficient have the resources to process the text faster and may be more strategic.
In contrast, for less skilled readers (i.e., low efficiency), providing a strategy such as summarization before
reading may demand deeper processing and more attention to the text, and possibly help structure a
global representation of the text. With a more organized structure, low-efficiency students might be
faster when answering questions than when they were not required to summarize the text.
The results of our experiment were consistent with this interpretation. When less efficient readers were asked to
provide a written summary before reading two of the passages, they spent more time initially reading
them (Figure 1), as compared to the two texts in the condition in which no summary was required.
Moreover, the low-efficiency students who wrote the summaries also spent less time rereading those
two texts when answering the comprehension questions (Figures 2 and 3). We take this as evidence
that these students must have built up memory representations for each text that they drew upon
when answering the comprehension questions.
In contrast, a different pattern occurred for the low-efficiency students when they did not write a
summary. They spent less time initially reading the two texts and more time rereading each when
answering the comprehension questions. Presumably, these students did not have a good memory
for the text material and consequently they had to refer back to the texts to locate the answers.
It should be noted that the experiment incorporated a within-participant design. In other words,
each student had an opportunity to read two texts in a summary and two texts in a no summary
condition. Interestingly, but perhaps unfortunately, the benefits of the summary condition, when
presented first, did not transfer to the subsequent condition in which no summary was required.
In other words, low-efficiency students reverted to their usual searching strategy rather than
reading the text more thoroughly in the first place. In this way, the results seem to support a
minimalist interpretation (McKoon & Ratcliff, 1992) in that students were doing the minimum
amount of processing to get the task done. This also confirms the importance of task instructions
in affecting readers’ behavior (McCrudden & Schraw, 2007).
Although the results of the low-efficiency group are interesting, the results of the high-efficiency
group are also noteworthy. High-efficiency students were faster at reading the text and answering the
questions. This could be because of their raw processing power, their faster reading ability, or their
automatic use of efficient reading strategies. Although this experiment was not designed to tease
apart these explanations, it is interesting to note that the summary condition had almost no effect on
rereading (Figure 2) or question answering time (Figure 3). That is, these students probably formed a
mental model rapidly via their initial reading of each text and that was held in memory regardless of
the summary condition. Does this mean that this initial mental model was sufficient for all task
demands? We doubt this interpretation, because the high-efficiency students did take some extra
initial reading time (Figure 1) when asked to write summaries (Table 1). More likely, it is the case
that the high-efficiency students were faster processors and were more efficient at organizing and
storing the text in memory by default, independently of our summary manipulations.
Because the study design required multiple readings of the same text by all students, the results can be
interpreted with respect to studies of the rereading effect, that is, more rapid reading of words or phrases
when one is rereading the same or a similarly themed text (Dowhower, 1987; Levy, Nicholls, & Kohen,
1993; Millis, Simon, & Tenbroek, 1998; Raney, 2003; Raney & Rayner, 1995). For example, the context-
dependent representation (CDR) model by Raney (2003) posits that the facilitation of rereading can stem
from any of the three levels of comprehension processing, namely, surface form, textbase, or situation
model (Kintsch, 1998), resulting in a continuum of facilitation. Further, in the CDR model, differences in
task demand and individual abilities also influence the nature and magnitude of the facilitation effect.
In the current study, for high-efficiency readers—who read each text very quickly the first time in
the EBRC cloze task—we see no additional reduction in the time required to answer MC questions
accurately with or without the demand of writing a summary (Figures 2 and 3). The only added time
was the additional time needed to reread the passage in order to compose a quality summary. This
additional time rereading indicates that the students did not perceive the quality of the situation
model they formed based only on the initial cloze task as adequate to the task demands of writing a
summary. They needed to reread for depth, detail, and likely coherence building in writing the
summary, that is, a more demanding task required more text processing. This resulted in longer text
reading times in the two texts that required a summary than in the two where no summary was required.
For low-efficiency readers, however, there was a reduction in the time to answer subsequent MC
questions in the two passages preceded by summary writing. This suggests that the deeper, situation
model processing in the summary writing task facilitated a richer memory model, which yielded
more efficient search (of memory or text) when answering the MC questions. We interpret the time
reduction in answering the MC questions as a benefit of deeper, situation model processing, when
low-efficiency readers were required to complete the more demanding summary writing task. We
note that any rereading effect would have been the same for all participants in the study, as all of the
students saw the same text passages twice (once in the EBRC task and again in the reading
comprehension test). The only manipulation was that each student had to write a summary for two
of the four passages, in different counterbalanced orders. In short, these results confirmed that
individual differences and task demands affect how the facilitation happens, in concordance with this
interpretation of the CDR model.
However, one should keep in mind that the typical measures used in the rereading studies are
differences in mean time to process individual words and phrases on the second reading, not the
sum total time of reading a passage, comprehending, and answering MC questions or writing
summaries. Thus, there is more individual choice and control over the level and depth of processing
necessary to achieve the task demands, with more interruptions of text processing than in typical
rereading studies. The interpretation of the CDR model we offer here is more of an application of the
theories emanating from that research rather than an extension of that research itself. In any event,
the facilitative effects of rereading would be evident in both conditions of the current study, and as
such, the summary effect cannot be entirely explained by rereading.
The interaction between summary writing and reading efficiency has implications for reading assess-
ment. By adding a summary writing task to the regular MC test, we changed the relative performance
difference between high- and low-efficiency groups in several ways. For first reading time, the summary
writing task increased the difference between the two groups; that is, the low-efficiency group spent
more time reading passages in order to write the summary than did the high-efficiency group (Figure 1).
If the time limits for completing the total test required highly efficient reading rates, then the addition of
a task that required closer reading of passage content would differentially impact the low-efficiency
group, because this task requires some additional time to complete adequately. However, if closer,
deeper reading for formation of a coherent, situation model is of construct importance, then the
summary writing task seems to encourage such global processing.
In contrast, for the time to answer MC questions, the summary writing task shrank the difference
between high- and low-efficiency groups (Figure 3). Specifically, the low-efficiency group needed much
less time to answer MC questions after they had written summaries of the related passages, which helped
them catch up with the high-efficiency group. In other words, more time spent reading early on saves
more time later when answering questions. Given more difficult tests, this time difference could translate
into performance differences in accuracy. In such situations, if the test does not have high time pressure,
low-efficiency readers will probably benefit from writing a summary of the text, even if it is not required
by the test. In short, these results support the notion that the way reading comprehension is evaluated has
consequences for the construct of reading comprehension (Rupp et al., 2006). Providing a summary may
induce behavior that is consistent with reading the passage and constructing a global memory representation
of the text. These are construct-relevant behaviors that are consistent with reading theory.
The interaction effect between task requirement and individual differences in reading efficiency may
also have implications for students’ daily reading activities. Low-efficiency readers generally need more
time to cover the same content of a passage. Because their first reading time during the comprehension
test was similar to that of high-efficiency readers, they may have processed less information during this
first reading of the text content. As a result, once they saw the MC questions, they had to compensate
with more rereading. One can imagine that if these low-efficiency readers were not aware of the need
to expend extra effort (e.g., when reading a textbook for study purposes), they might not realize
that they had a shallow understanding of the study material. This study shows that a summary writing
strategy might be helpful in such situations. After writing a summary, the low-efficiency readers seemed
to have a better mental model from which to answer MC questions, as evidenced by the
reduced time they required to go back to the text to search for (or confirm) answers.
Although the data are encouraging, there are a number of limitations. First, the sample size is small
and not representative of the general population. Thus, the results may not generalize to other
students. Second, the comprehension test used in this study was designed for middle school
students and, consequently, was easy for these college students. This resulted in near-ceiling effects
both on the quality of the summaries and for the MC scores. Although this increased our ability to
interpret differences in response time, future studies should examine the impact on accuracy as well,
with a more age-appropriate comprehension test. If the summary manipulation is truly of benefit,
then low-efficiency students might also score higher on the MC comprehension questions if they
formed a more coherent memory representation of the text (Bråten, Gil, & Strømsø, 2011).
Third, the present study included multiple readings of the text, one for the efficiency test and the other
for the comprehension test. Previous studies have shown that repeated reading improves young students’
fluency (Dowhower, 1987; Levy et al., 1993) and it changes the resource allocation of adult readers from
word identification to text-level integration (Millis et al., 1998; Raney, 2003; Raney & Rayner, 1995).
However, because we used a within-subject design, this should have increased the memory representation
and worked against finding an effect for the summary condition, as four texts were read, two in each
condition. Thus, the rereading effect cannot explain the results (i.e., summary manipulation).
Fourth, although we discovered a time benefit of summary writing for low-efficiency readers, we did not
have an experimental manipulation that would allow us to identify the exact mechanism underlying this
benefit. Future studies can manipulate the summary-writing task to explore this mechanism. Fifth, the
texts used to measure efficiency were the same as the texts used to measure
comprehension. Future studies should employ separate measures and texts to assess the two constructs
to determine whether the findings replicate.
In short, this article underscores the importance of task goals on reading comprehension. In particular,
it suggests that students’ anticipation of writing a summary before they read may lead them to pay closer
attention to the text on the initial read and help them form a mental model that is held in memory. Less
efficient readers can then draw on this mental model, gaining some time savings when answering MC
questions later. The results of this study also suggest that the benefit of summary writing is specific to
those students who need it, the less skilled readers.
We thank Jennifer Lentini, Don Powers, Jesse Sparks, and Jingyuan Huang.
The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education,
through Grant R305G04065; grant R305F100005 to the Educational Testing Service as part of the Reading for
Understanding Research Initiative; and in partnership with Arizona State University. The opinions expressed are
those of the authors and do not represent views of Educational Testing Service or the Institute or the U.S. Department of Education.
Zuowei Wang http://orcid.org/0000-0002-9832-6193
Baer, J., Kutner, M., Sabatini, J., & White, S. (2009). Basic reading skills and the literacy of America’s least literate adults:
Results from the 2003 National Assessment of Adult Literacy (NAAL) supplemental studies. Washington, DC: U.S.
Department of Education.
Bean, T. W., & Steenwyk, F. L. (1984). The effect of three forms of summarization instruction on sixth graders’
summary writing and comprehension. Journal of Literacy Research, 16(4), 297–306. doi:10.1080/
Bohn-Gettler, C. M., & Kendeou, P. (2014). The interplay of reader goals, working memory, and text structure during
reading. Contemporary Educational Psychology, 39(3), 206–219. doi:10.1016/j.cedpsych.2014.05.003
Bråten, I., Gil, L., & Strømsø, H. I. (2011). The role of different task instructions and reader characteristics when
learning from multiple expository texts. In M. T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance
and learning from text (pp. 95–122). Greenwich, CT: Information Age.
Brown, A. L., Campione, J. C., & Day, J. D. (1981). Learning to learn: On training students to learn from texts.
Educational Researcher, 10(2), 14–21. doi:10.3102/0013189X010002014
Brown, A. L., Day, J. D., & Jones, R. S. (1983). The development of plans for summarizing texts. Child Development,
54(4), 968–979. doi:10.2307/1129901
Carver, R. P. (1990). Reading rate: A review of research and theory. San Diego, CA: Academic Press.
Daane, M. C., Campbell, J. R., Grigg, W. S., Goodman, M. J., & Oranje, A. (2005). Fourth-grade students reading aloud:
NAEP 2002 special study of oral reading. Washington, DC: U.S. Department of Education.
Doctorow, M., Wittrock, M. C., & Marks, C. (1978). Generative processes in reading comprehension. Journal of
Educational Psychology, 70(2), 109–118. doi:10.1037/0022-0663.70.2.109
Dowhower, S. L. (1987). Effects of repeated reading on second-grade transitional readers’ fluency and comprehension.
Reading Research Quarterly, 22(4), 389–406. doi:10.2307/747699
Eason, S. H., Sabatini, J., Goldberg, L., Bruce, K., & Cutting, L. E. (2013). Examining the relationship between word
reading efficiency and oral reading rate in predicting comprehension among different types of readers. Scientific
Studies of Reading,17(3), 199–223. doi:10.1080/10888438.2011.652722
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading
competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading,5(3), 239–256.
Graesser, A., & Lehman, B. (2011). Questions drive comprehension of text and multimedia. In M. T. McCrudden, J. P.
Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp. 53–74). Charlotte, NC: Information Age.
Head, M. H., Readence, J. E., & Buss, R. R. (1989). An examination of summary writing as a measure of reading
comprehension. Literacy Research and Instruction,28(4), 1–11. doi:10.1080/19388078909557982
Hornof, M. (2008). Reading tests as a genre study. The Reading Teacher, 62(1), 69–73. doi:10.1598/RT.62.1.8
Hyönä, J., Lorch, R. F., Jr., & Kaakinen, J. K. (2002). Individual differences in reading to summarize expository text:
Evidence from eye fixation patterns. Journal of Educational Psychology, 94(1), 44–55. doi:10.1037/0022-0663.94.1.44
Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York, NY: Cambridge University Press.
Kintsch, W., & Van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review,
85(5), 363–394. doi:10.1037/0033-295X.85.5.363
Klauda, S. L., & Guthrie, J. T. (2008). Relationships of three components of reading fluency to reading comprehension.
Journal of Educational Psychology, 100(2), 310–321. doi:10.1037/0022-0663.100.2.310
LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive
Psychology, 6(2), 293–323. doi:10.1016/0010-0285(74)90015-2
Levy, B. A., Nicholls, A., & Kohen, D. (1993). Repeated readings: Process benefits for good and poor readers. Journal of
Experimental Child Psychology, 56(3), 303–327. doi:10.1006/jecp.1993.1037
McCrudden, M. T., Magliano, J. P., & Schraw, G. (2010). Exploring how relevance instructions affect personal reading
intentions, reading goals and text processing: A mixed methods study. Contemporary Educational Psychology, 35(4),
McCrudden, M. T., & Schraw, G. (2007). Relevance and goal-focusing in text processing. Educational Psychology
Review, 19(2), 113–139. doi:10.1007/s10648-006-9010-7
McKoon, G., & Ratcliff, R. (1992). Inference during reading. Psychological Review, 99(3), 440–466. doi:10.1037/0033-
Mead, C. D. (1917). Results in silent versus oral reading. Journal of Educational Psychology, 8(6), 367–368. doi:10.1037/
Millis, K. K., Simon, S., & Tenbroek, N. S. (1998). Resource allocation during the rereading of scientific texts. Memory
& Cognition, 26(2), 232–246. doi:10.3758/BF03201136
National Research Council. (2012). Improving adult literacy instruction: Options for practice and research. Washington,
DC: National Academies Press.
Pintner, R. (1913). Oral and silent reading of fourth grade pupils. Journal of Educational Psychology, 4(6), 333–337.
Raney, G. E. (2003). A context-dependent representation model for explaining text repetition effects. Psychonomic
Bulletin & Review, 10(1), 15–28. doi:10.3758/BF03196466
Raney, G. E., & Rayner, K. (1995). Word frequency effects and eye movements during two readings of a text. Canadian Journal
of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 49(2), 151–173. doi:10.1037/1196-
Rayner, K., Pollatsek, A., Ashby, J., & Clifton, J. C. (2012). Psychology of reading. New York, NY: Psychology Press.
Rosenshine, B., Meister, C., & Chapman, S. (1996). Teaching students to generate questions: A review of the
intervention studies. Review of Educational Research, 66(2), 181–221. doi:10.3102/00346543066002181
Rupp, A. A., Ferne, T., & Choi, H. (2006). How assessing reading comprehension with multiple-choice questions shapes the
construct: A cognitive processing perspective. Language Testing, 23(4), 441–474. doi:10.1191/0265532206lt337oa
Sabatini, J., Bruce, K., Steinberg, J., & Weeks, J. (2015). SARA reading components tests, RISE forms: Technical
adequacy and test design. ETS Research Report Series, 2015(2), 1–20. doi:10.1002/ets2.12076
Samuels, S. J. (2006). Toward a model of reading fluency. In S. J. Samuels & A. E. Farstrup (Eds.), What research has to
say about fluency instruction (pp. 24–46). Newark, DE: International Reading Association.
Santman, D. (2002). Teaching to the test?: Test preparation in the reading workshop. Language Arts, 79(3), 203–211.
Stanovich, K. E., West, R. F., Cunningham, A. E., Cipielewski, J., & Siddequi, S. (1996). The role of inadequate print
exposure as a determinant of reading comprehension problems. In C. Cornoldi & J. Oakhill (Eds.), Reading
comprehension difficulties: Processes and intervention (pp. 15–32). Hillsdale, NJ: Erlbaum.
Thiede, K. W., & Anderson, M. C. (2003). Summarizing can improve metacomprehension accuracy. Contemporary
Educational Psychology, 28(2), 129–160. doi:10.1016/S0361-476X(02)00011-5
Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1999). Test of word reading efficiency. Austin, TX: Pro-Ed.
Van Den Broek, P., Bohn-Gettler, C., Kendeou, P., Carlson, S., & White, M. (2011). When a reader meets a text: The
role of standards of coherence in reading comprehension. In M. T. McCrudden, J. P. Magliano, & G. Schraw (Eds.),
Text relevance and learning from text (pp. 123–140). Charlotte, NC: Information Age.
Van Den Broek, P., Lorch, R. F., Linderholm, T., & Gustafson, M. (2001). The effects of readers’ goals on inference
generation and memory for texts. Memory & Cognition, 29(8), 1081–1087. doi:10.3758/BF03206376
Vidal-Abarca, E., Salmerón, L., & Mañá, A. (2011). Individual differences in task-oriented reading. In M. T.
McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp. 267–294).
Greenwich, CT: Information Age.
Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based
measurement in reading. The Journal of Special Education, 41(2), 85–120. doi:10.1177/00224669070410020401