Review of Educational Research
Month 201X, Vol. XX, No. X, pp. 1–35
DOI: 10.3102/0034654317722961
© 2017 AERA. http://rer.aera.net
Reading on Paper and Digitally: What the Past
Decades of Empirical Research Reveal
Lauren M. Singer and Patricia A. Alexander
University of Maryland
This systematic literature review was undertaken primarily to examine the
role that print and digital mediums play in text comprehension. Overall,
results suggest that medium plays an influential role under certain text or
task conditions or for certain readers. Additional goals were to identify
how researchers defined and measured comprehension, and the various
trends that have emerged over the past 25 years, since Dillon’s review.
Analysis showed that relatively few researchers defined either reading or
digital reading, and that the majority of studies relied on researcher-devel-
oped measures. Three types of trends were identified in this body of work:
incremental (significant increase; e.g., number of studies conducted, vari-
ety of digital devices used), stationary (relative stability; e.g., research
setting, choice of participants), and iterative (wide fluctuation; e.g., text
length, text manipulations). The review concludes by considering the sig-
nificance of these findings for future empirical research on reading in print
or digital mediums.
Keywords: reading, comprehension, digital reading, medium
Nobody is going to sit down and read a novel on a twitchy little screen. Ever.
—Annie Proulx, fiction writer (1994)
Although this may come as a surprise to Proulx (1994), it would appear that the
world is digitally at one’s fingertips. Open 24 hours a day, 365 days a year, the
digital world has become a one-stop text source, be it for news, recreational read-
ing, or information sharing via Facebook, blogs, or tweets (DiMaggio & Hargittai,
2001). Humans live in a society that is constantly plugged into the Internet
whether by computer or by handheld device. Although it goes without saying that
the digital age has come with many benefits, including rapid and expanded access
to information and untold networking capabilities (Castells, 2011; Labrecque, vor
dem Esche, Mathwick, Novak, & Hofacker, 2013; Usluel, 2016), questions
remain about the implications of such digital access and the many digital devices
(e.g., computers, tablets, and smartphones) that allow for that access for reading
and learning from text (Underwood, Underwood, & Farrington-Flint, 2015).
More specifically, the use of digital devices as reading tools has garnered
increased importance as schools move to paperless classrooms across the globe
(Giebelhausen, 2015; Shishkovskaya, Sokolova, & Chernaya, 2015). These paper-
less classrooms allow the reader to alter the size of the text, highlight important
passages, and search related terms outside of the text with the click of a button. Not
surprisingly in light of these developments, by 2009, 97% of students had access to
a computer in their classroom (National Center for Education Statistics, 2013).
Moreover, even outside the classroom context, more and more individuals are
engaged in online reading. For instance, and contrary to Proulx’s (1994) prognos-
tication, Zickuhr, Rainie, Purcell, Madden, and Brenner (2012) found that 43% of
Americans and 48% of those between the ages of 18 and 29 read lengthy texts,
such as newspapers or books, digitally—a number expected to increase exponen-
tially (Stephens, 2014). These figures raise the fundamental question of how the
use of such digital reading materials might potentially alter perceptions of what it
means to read and the comprehension that results, for better or for worse.
In fact, such a fundamental question has been posed in years past. For example,
in 1992, Dillon conducted a review of the literature intended to examine differ-
ences that might exist when reading from a printed source versus an electronic
source. To our knowledge, this was one of the only reviews that examined print
reading vis-à-vis digital reading. However, although that review can serve as a
starting point in the conversation about print and digital reading, a more contempo-
rary analysis of the extant literature is clearly warranted. We see this review as
warranted not solely because of any shortcomings that might be ascribed to Dillon’s
review but also because much has changed technologically since the early 1990s.
Moreover, as we suggested at the outset, all signs point to a growing presence
of digital reading in the lives of students and their teachers. One reason for our
conviction regarding this trend is that there are now a plethora of devices to
employ when reading digitally, from computers to other mobile devices such as
iPads, Kindles, and even smart watches. Thus, in this systematic review, we set
out to examine the empirical literature published since 1992—the year of Dillon’s
review—that pertained to the mediums in which reading occurred (i.e., in print
and digitally). Our overarching goals were to more richly describe the state of the
research encompassing print and digital reading and to better ascertain how the
affordances of print or digital mediums relate to what students understand from
those textual encounters.
One justification for this systematic review is the limited understanding of
how particular attributes of the learner, the text, or the context might interact
with the medium to enhance or inhibit comprehension. The theoretical or empir-
ical models that currently inform research, including this inquiry, deal more
with the nature of single-text comprehension (Kintsch, 1988) or with the effects
of multimedia within a given text (Mayer, 1997). There are also well-articulated
models of learning and performance when multiple texts are implicated (Bråten
& Strømsø, 2011; Rouet & Britt, 2011). However, to our knowledge, the medium
of text delivery in these single-text or multiple-text models remains underinves-
tigated as a pertinent factor.
For example, perhaps the most cited model of comprehension is Kintsch’s
(1988) construction-integration model of comprehension, which serves to
explicate how comprehension results from the interaction of the textual content
and readers’ knowledge and experiences by means of the text base and the situ-
ation models, respectively. This model, however, does not consider whether
texts are presented in print or digitally. Furthermore, Mayer’s (1997, 2011)
cognitive theory of multimedia learning (CTML) has been built on decades of
empirical research addressing processing demands of texts that incorporate
both linguistic and visuographic content. The CTML model outlines 12 prin-
ciples of text and visual integration, such as the coherence principle or the
redundancy principle, intended to guide the design of multimedia materials.
Although informative to those invested in multimedia documents, the CTML
does not consider the nesting of such materials in print versus digital mediums.
Finally, from the emerging models of multiple source use, such as the Multiple
Documents-Task-Based Relevance Assessment and content extraction model
(Rouet, 2006; Rouet & Britt, 2011), there is a growing awareness that several
factors, such as individuals’ epistemic beliefs, task directives, or the provoca-
tive nature of the topic, influence comprehension. Yet the fact that such multi-
ple documents may be conveyed in print or digitally has not been directly
incorporated in emergent models.
Although these well-studied theoretical models can shed light on what
occurs when individuals process texts either in print or digitally, they have not
addressed questions about when, for whom, and for what purposes one mode
of delivery (i.e., print or digital) might prove more beneficial than another. As
articulated by Jenkins (1974) in his tetrahedral model of learning, and as argued
more contemporarily by Alexander, Schallert, and Reynolds (2009), research-
ers concerned with learning and performance must consider the interactions
among the who, what, and where dimensions of a given situation. Thus, this
systematic review was undertaken to explore such dimensions with the goal of
informing ensuing efforts to construct an evidence-based model of reading in
print and digitally.
The specific research questions that guided this study were as follows:
Research Question 1: Within the literature addressing both print and digital
reading, how has comprehension been defined?
Although the conceptions of print and digital reading articulated by research-
ers were empirical questions we sought to investigate, our literature search was
informed by a conceptualization of reading reflected in the Reading Framework
for the National Assessment of Educational Progress (NAEP; National Assessment
Governing Board [NAGB], 2008). Specifically, for the purpose of this systematic
review, we broadly defined reading as the dynamic process of understanding and
drawing meaning from written text. We regarded this general conception as rele-
vant whether the process of reading occurred in print or digitally.
There is strong justification for our decision to look expressly at researchers’
conceptions of reading and digital reading. For one, there are those who have
decried the lack of conceptual clarity and specificity within the educational
literature (e.g., Alexander & Dochy, 1995; Dinsmore, Alexander, & Loughlin,
2008; Murphy & Alexander, 2000). There is also evidence that even foundational
concepts (e.g., knowledge, beliefs, learning, motivation, or self-regulation), includ-
ing those related to reading (Alexander, Schallert, & Hare, 1991), are often vari-
ably, vaguely, and inconsistently defined within the empirical literature. Therefore,
as a first step in trying to disentangle the findings of the literature related to reading
in print and digitally, it seemed wise to ascertain whether those engaged in that
research were operating from an explicit or consistent conceptual base.
Research Question 2: Within the literature addressing both print and digital
reading, how has comprehension been assessed?
When examining the literature that encompassed the process of reading in both
print and digital mediums, we did not wish to overlook the product of those undertak-
ings. In effect, we wanted to determine how participants’ comprehension was gauged.
Our intention was to chart the levels of comprehension (i.e., locate and recall, inte-
grate and interpret, and critique and evaluate; NAGB, 2008) assessed within each
study. This decision was informed by the assumption that medium may play a more
influential role when comprehension questions move beyond gist understanding
(Singer & Alexander, 2017). In the same vein, we wanted to document the form of
those comprehension assessments (e.g., multiple-choice or constructed-response)
because the literature suggests that question type may also influence comprehension
outcomes (Pearson & Hamm, 2005; Sarroub & Pearson, 1998).
Research Question 3: What trends pertaining to participants, additional (non-
comprehension) measures, text types, and digital devices can be identified
within the literature on print and digital reading?
As part of this systematic review of the literature, we wanted to incorporate a
trend analysis. In 1993, Alexander and Knight argued that educational trends, or
distinguishable patterns of events in learning and instruction, generally fall into
three categories: incremental (upward developments), stationary (little or no
change over time), and iterative (repetitive change). Since then, researchers have
found these trend designations to be informative (e.g., Alexander, Murphy, &
Greene, 2012; Alexander, Murphy, & Woods, 1996; Dumas, Alexander, & Singer,
2015). Thus, for the present analysis, we attempted to ascertain whether there
were clear upward, stable, or repetitive trends in the empirical research on reading
in print and digitally.
Research Question 4: How do the current trends in the print and digital read-
ing literature compare to the trends reported by Dillon in 1992?
Every systematic review requires some starting date in the search for pertinent
literature. Despite whatever shortcomings can be ascribed to Dillon’s (1992)
review of print and digital reading, it does represent a valid starting point for the
current investigation and a basis for comparison. Among the shortcomings that
must be acknowledged, Dillon’s survey of the literature was neither systematic
nor “best-evidence” in form (Slavin, 1986). Consequently, it was not evident how
the studies that formed the basis for his findings were chosen. In addition, Dillon’s
review did not consider the definitions of reading or digital reading that guided
researchers’ inquiries. This is concerning because without ascertaining research-
ers’ meaning of core constructs, the consistency of reported outcomes remains at
issue. Finally, for the most part, Dillon forwarded rather tenuous conclusions
regarding print and digital reading that demand reexamination. It was his position
that there were too many factors at work to more definitively conclude what dif-
ferences, if any, manifested across the mediums. Despite these obvious issues, we
nonetheless found Dillon’s review to be a useful starting point for our own analy-
sis of the research literature pertaining to reading in print and digitally.
Method
Search Procedures
When establishing our search parameters, we were influenced by formative
works that examined the nature of reading comprehension (e.g., Kintsch, 1988)
and digital reading (e.g., Leu, Kinzer, Coiro, & Cammack, 2004). We began this
project knowing that such influential pieces would not be included in the system-
atic review due to their theoretical nature. Nonetheless, we considered these works
informative in framing our questions, establishing our search parameters, and sug-
gesting relevant researchers and publications. With such formative pieces and our
prior empirical studies as guides (Singer & Alexander, 2017; Singer, Alexander, &
Berkowitz, 2017), we initiated a systematic search of the literature.
All literature searches were conducted using the ERIC, PsycINFO, and Web of
Science databases and a title and abstract search procedure. Furthermore, these
database searches were limited to peer-reviewed publications in English and
to the last quarter century (i.e., 1992 to May 2017). We chose this time frame since
we were specifically interested in the literature that was not critically examined
during Dillon’s (1992) review. We also considered this time frame as justified
given the rapidly changing nature of digital reading (Underwood et al., 2015). In
an effort to address our research questions, we used the terms “reading digitally”
(n = 129), “reading online” (n = 111), “digital reading” (n = 221), “computer read-
ing” (n = 101), “ereading” (n = 189), “learning on computer” (n = 58), and “learn-
ing digitally” (n = 57) to conduct the initial search. This initial phase of the
systematic search resulted in 859 documents. Although these various terms were
used to search databases, we will consistently refer to reading or reading in print
when describing reading that occurs off-line and to digital reading or reading
online when speaking about reading involving hypermedia technology.
In addition to the aforementioned procedure, we physically examined the table
of contents for journals appearing two or more times in the initial search results. For
pragmatic reasons, we limited this facet of the search to the past 5 years of journal
volumes. The list of those physically searched journals appears in Table 1. We also
perused the publication lists for specific authors who contributed two or more stud-
ies to our initial database as a way to locate any potential studies that might fit our
inclusion parameters (see Table 2). Finally, we used a backward-snowballing
method to expand the search. This process entails reviewing the reference lists of
identified articles to unearth any previously overlooked documents that appear to fit
the search parameters. Collectively, these subsequent search procedures contributed
an additional 19 documents to the research pool, bringing the total number of
works to be further scrutinized to 878 (see Figure 1).
Inclusion Criteria
Beyond the initial search parameters, several specific criteria were established
to ascertain which documents in the initial pool should be retained for final analy-
sis. Specifically, studies were included in the final analysis if they fulfilled the
following criteria:
1. Involved both print and digital reading
2. Were empirical studies
3. Entailed more than self-report measures
4. Included a measure of comprehension as an outcome
TABLE 1
Journals hand searched for relevant studies
Computers & Education
Contemporary Educational Psychology
Journal of Educational Psychology
Journal of Experimental Education
Journal of Literacy Research
Review of Educational Research
Reading Psychology
Reading Research Quarterly
TABLE 2
Vitas of specific authors searched for relevant publications
Azevedo, R.
Cromley, J. G.
Coiro, J.
Eshet, Y.
Kurniawan, S. H.
Larson, L. C.
Leu, D. J.
Mangen, A.
Noyes, J. M.
Reinking, D.
Rowsell, J.
Sutherland-Smith, W.
Zawilinski, L.
In effect, publications were retained for analysis only if they featured par-
ticipants engaged with both print and digital texts. For instance, although it
used print reading comprehension scores as a control variable, Coiro’s (2011)
study on predicting reading comprehension was excluded from this review
because the researcher collected data only on digital reading performance.
Furthermore, all articles kept for further analysis had to be empirical in nature.
This meant that scholarly treatises, anecdotal observations, and reviews
(unless meta-analytic) were excluded from consideration. An example of a review
that was excluded for this reason was Tanner’s (2014) article, Digital vs.
Print: Reading Comprehension and the Future of the Book. While interesting
as a survey of published works, this article did not entail the reporting of an
original empirical study.
The third criterion established for inclusion required that researchers rely on
more than self-report data to reach their conclusions. We deemed this step as
essential so that there were objective data pertaining to reading in print or digi-
tally. Many of the articles returned by the initial search met the first two criteria
but relied only on survey or opinion data. For example, a study by Spencer (2006)
examined learners’ preferences for reading from a printed text or on a computer
screen. However, in this study, she relied solely on participants’ self-reported
preferences. She did not document participants’ pattern of engagement in any
form of print or digital reading activity against which the self-reported prefer-
ences could be calibrated.
FIGURE 1. Diagram of article search and screening steps taken to mark studies for
exclusion and inclusion in the literature review.
Finally, to be retained in this review, articles identified through the search pro-
cess had to include some measure of reading comprehension as an outcome vari-
able. For example, a study by Roth, Tuch, Mekler, Bargas-Avila, and Opwis
(2013) examined the effects of object placement on different websites on partici-
pants’ processing using eye-tracking technology. Although relevant, this study did
not meet the final inclusion criterion because the authors did not have any dedi-
cated measure of reading comprehension. In effect, Roth et al. did not attempt to
link participants’ examination of website objects to their understanding or recall
of the textual content displayed on those websites.
Final Pool
Through our database searches, journal hand searches, vita searches of specific
authors, and use of the backward-snowballing method, the number of potentially rel-
evant documents amounted to 878 unique publications. Figure 1 illustrates this
search and screening procedure. Of our initial pool of 878 articles, 604 were
excluded when the screening of the abstract demonstrated that those documents
either were nonempirical or did not involve both print and digital reading. For
example, although Topping’s (1997) article on reading digitally in school and at
home examined a relevant topic, it was excluded from this review because the
abstract revealed that this was a document discussing developments in digital
reading devices with no original empirical data forthcoming. An example of a
study that was excluded after an abstract scan because it did not address both print
and digital reading was an article by Dalton, Proctor, Uccelli, Mo, and Snow
(2011). Although the research was relevant to our review, this article was excluded
because the authors only examined comprehension in fifth-grade students reading
digitally. Reading in print was not a part of this investigation.
This initial abstract screening phase of review left us with a pool of 254 studies
for a full-text screening. Of these 254 publications, 98 studies were excluded for
using only self-report measures. For example, a study by Franze, Marriott, and
Wybrow (2014) queried 162 academics about their preferences and reading habits
in print and digital mediums using an online questionnaire. However, in this study
there were no measures that extended beyond self-reported data. In addition,
through our full-text screening, we excluded 121 studies that did not expressly
measure comprehension as an outcome variable. For example, Eshet-Alkalai and
Geri (2007) assessed students’ critical thinking skills while reading in print and
digitally. However, Eshet-Alkalai and Geri’s study did not take into consideration
students’ comprehension of what they read in any form, thus meriting its exclu-
sion from this review.
These steps in review left us with a total of 36 studies to be charted and ana-
lyzed. In effect, these 36 studies served as the database to address our specific
research questions. In preparation for analysis, each of the 36 publications was
catalogued on the basis of the following: (a) author; (b) year of publication; (c)
number of participants; (d) grade level of participants (i.e., elementary, middle
school, high school, college, post-college); (e) participant details (i.e., gender,
SES, and special status); (f) text type (i.e., exposition, narrative, or both); (g) text
length; (h) text manipulations; (i) setting (i.e., instructional, research, and nonaca-
demic); (j) task; (k) comprehension and other measures administered; and (l) out-
comes reported. The full charting of these 36 studies is provided in the appendix
(see Supplementary Table S1 in the online version of this journal).
Definitional Coding Scheme
To initiate our analysis of the literature, we coded all included studies based on
whether or not definitions of reading or reading digitally were present, and
whether those definitions were explicit or implicit in nature. The procedure we
followed in this phase of analysis was informed by prior systematic reviews that
delved into conceptual patterns within the literature (Baggetta & Alexander, 2016;
Dinsmore et al., 2008; Murphy & Alexander, 2000). Specifically, definitions were
coded as explicit if the authors’ guiding conception of reading or reading digitally
was expressly stated. In contrast, an implicit definition was recorded when the
authors’ intended definition of reading or reading digitally had to be inferred from
language used throughout the document.
Furthermore, for those conceptions categorized as explicit, we coded whether
the definition was conceptual, componential, operational, or multifaceted.
According to the coding scheme established for this purpose, a conceptual defi-
nition was viewed as more ontological if it sought to capture the essence of
reading or reading digitally. In effect, a conceptual definition would attempt to
answer the basic question of what is reading or reading digitally by describing
its foundational nature. By comparison, a componential definition was one that
delineated the elements, skills, or abilities that are regarded as core to reading.
Definitions would fall within the category of componential if they sought to
answer the question what does reading or reading digitally entail. Next, an oper-
ational definition was more process-oriented in that it focused on how reading
or reading digitally was thought to occur. In addition, to be classified as opera-
tional, the definition had to do more than simply mention the word process. It
had to describe that process to some extent. Finally, a multifaceted definition
was one that incorporated more than one of the aforementioned categories (e.g.,
conceptual and operational).
To establish the interrater agreement for definitional categories, a four-step
coding procedure was employed. Specifically, following initial training using
the coding manual, the first author and a trained research assistant working
independently first indicated the presence or absence of any explicit or implicit
definition for reading or digital reading in all 36 documents. There was com-
plete agreement on this initial decision, so the remaining steps of the coding
procedure were initiated. In Step 2, again working independently, the coders
determined, for each document that included some form of a definition, whether the
authors’ delineation was explicitly stated or required inferencing on the coder’s part. For explicit
definitions, coders recorded the authors’ exact language, while for implicit defi-
nitions the words or phrases in text that were the basis for the inferred concep-
tion were recorded. Next, in Step 3, the coders made a determination as to
whether explicit definitions were conceptual, componential, operational, or
multifaceted in nature. To assist in this more fine-tuned analysis, the coding
manual included the what is, what does, and how descriptions for conceptual,
componential, and operational definitions, respectively. When more than one of
these descriptions was present, the definition was coded as multifaceted. The
level of interrater agreement for Steps 2, 3, and 4 collectively was 97.14%.
Text Length Categories
When charted articles included word counts of the texts read (36.11% of the
36), the texts were classified as either short or long in length. This decision to cat-
egorize articles according to text length was based on reported associations between
length and medium for texts longer than one page (Wästlund, 2007; Wästlund,
Reinikka, Norlander, & Archer, 2005). Specifically, Wästlund and colleagues con-
tended that the need to scroll with longer online texts increased the cognitive
demands on readers and, thus, appeared to negatively affect recall in the digital
medium. Using a printed page as the guideline, we calculated that the word count
for a page of published text roughly equated to 500 words. Thus, texts were coded
as short if they were under 500 words in total and long when the word count was
500 words or more. This decision rule was applied in an attempt to disambiguate
the potential role of text length in terms of comprehension outcomes reported.
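To make this decision rule concrete, the coding described above can be expressed as a simple classification; the function and constant names below are illustrative only (they do not come from the review), and the 500-word cutoff is the one stated in the preceding paragraph.
```python
# Illustrative sketch of the text-length decision rule described above.
# One published page is treated as roughly 500 words; names are ours, not the authors'.
WORDS_PER_PAGE = 500

def code_text_length(word_count: int) -> str:
    """Code a text as 'short' (<500 words) or 'long' (>=500 words)."""
    return "short" if word_count < WORDS_PER_PAGE else "long"

# Example: a 450-word passage would be coded short; an 800-word text long.
assert code_text_length(450) == "short"
assert code_text_length(800) == "long"
```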
Research Setting Categories
Included studies were coded as follows for the setting in which the research
took place: I = instructional, R = research, and NA = nonacademic. Instructional
settings referred to studies where data were collected within the participants’ aca-
demic setting (e.g., classroom, computer lab, after-school program). Settings were
coded as research when data collection occurred within a more controlled or con-
trived context, such as a laboratory. Finally, settings were coded as nonacademic
when the study took place outside of a school or educational context, such as a
home-based study. This decision rule was applied in an attempt to examine the
potential role setting might play in the reported outcomes.
Results and Discussion
Research Question 1: Defining Reading and Digital Reading
The first goal of this systematic review was to examine how those engaged in
research on reading in print and digital forms conceptualized reading and digital
reading. In terms of conceptualization, we sought to explore not only the explicit
definitions incorporated in the articles but also any definitions that we could
extrapolate from the words and phrases employed by the researchers in the dis-
course. Although we had no particular expectations regarding the frequency or
nature of the definitions that might populate this body of work, we were unpleas-
antly surprised by the paucity of either explicit or implicit definitions we could
document for either reading or digital reading.
Conceptualizing Reading
Specifically, of the 36 charted studies (see Supplementary Table S1 in the
online version of this journal), only 9 (25%) included any manner of definition of
reading—be it explicit or implicit. Given the relatively small number of studies
with explicit or implicit definitions of reading, we consolidated the definitions for
reading and for digital reading in Table 3. Of those nine articles, eight (22.22%)
contained explicit definitions, where the guiding conception of reading was
expressly indicated. In three of those instances, the provided definitions were con-
ceptual in nature. For example, Stern and Shalev’s (2013, p. 1) definition (i.e.,
“Reading is a complex process [that] involves multiple components and is affected
by various factors”) is a conceptual definition because it speaks broadly about the
nature of the process but does not explicate the components and processes
involved.
There was one article whose authors (Ali, Wahid, Samsudin, & Idris, 2013)
explicitly defined reading componentially by delineating the elements, skills,
or abilities regarded as core to reading. Specifically, these authors stated that
reading required “understanding words, sentences, and paragraphs” (p. 27).
Only one article contained an operational definition of reading. This article by
Ortlieb, Sargent, and Moreland (2014, p. 4) cited the process-oriented defini-
tion for reading forwarded by Anderson and Pearson (1984, p. 55) who
described reading as involving “the retrieval of previously acquired schema to
assist the processing and understanding of new unfamiliar information.” The
final two explicit definitions were multifaceted in nature. For example, Singer
and Alexander (2017, p. 4) defined reading as “an active, constructive, mean-
ing-making process . . . readers are expected to form connections between their
own prior knowledge and the ideas expressed in or inferred by the text per se.”
This definition was coded as including both conceptual (“what is”) and opera-
tional (“how”) elements.
Finally, only one definition could be assembled from words and phrases the
authors used within the document. This implicit definition came from the article
by Mayes, Sims, and Koonce (2001). Our assemblage of the implicit definition,
which follows, is indicated by (a) presenting the authors’ exact words in italics,
(b) employing brackets to show text segments appearing in different locations in
the text, and (c) using regular text to indicate the connecting words or phrases
added to make the definition more coherent.
Reading [relies on working memory] [to retain words to allow the reader to link words
together in a meaningful process]. Reading also includes [word recognition] and other
[processes that occur automatically].
Conceptualizing Digital Reading
Unfortunately, the definitions of digital reading were as scarce as those
for reading. Only five articles (13.89%) included a definition of digital reading in
any form. Within this small subset, two definitions were explicit and the remain-
ing three were implicit. For example, the explicit multifaceted definition for-
warded by Margolin, Driscoll, Toland, and Kegler (2013, p. 7) summarized the
five components for reading online offered by Leu et al. (2004): “identifying a
problem, locating information, evaluating the information, synthesizing informa-
tion, and communicating information.” The authors then expounded on those
components by stating,
The description of these components suggests that reading online involves more than
simply understanding what is encountered. It also suggests that the reader engage in
other higher level processing of the material beyond creating a mental representation of
the text. (p. 7)
This was considered an explicit multifaceted definition because it addressed
both conceptual and componential elements of reading digitally.
TABLE 3
Definitions provided in included studies
Ali, Wahid, Samsudin, and Idris (2013)
Reading: ECP, Understanding words, sentences, and paragraphs (p. 27)
Eden and Eshet-Alkalai (2013)
Digital reading: EM, Active reading (i.e., the reader’s ability to edit a given text and
demonstrate comprehension by identifying and correcting text errors) (p. 3)
Lenhard, Schroeders, and Lenhard (2017)
Reading: EM, In advanced text comprehension, however, vocabulary, background
knowledge, reading strategies, and reasoning abilities play a more prominent role
. . . and the importance of reading fluency gradually declines (p. 2)
Margolin, Driscoll, Toland, and Kegler (2013)
Reading: ECO, Reading is a process that, once learned, allows an individual to
mentally represent written text (p. 1)
Digital reading: EM, Leu et al. (2004) describe five components for online reading:
identifying a problem, locating information, evaluating the information, synthesizing
information, and communicating information. The description of these components
suggests that reading online involves more than simply understanding what is
encountered. It also suggests that the reader engage in other higher level processing
of the material beyond creating a mental representation of the text (p. 7)
Mayes, Sims, and Koonce (2001)
Reading: IM, Relies on working memory to retain words to allow the reader to link
words together in a meaningful process. Also includes word recognition and other
processes that occur automatically
Digital reading: IM, Two processes occurring; visual system must be able to perform
under new demands and cognitive effort is different
Ortlieb, Sargent, and Moreland (2014)
Reading: EO, Reading comprehension results from the retrieval of previously acquired
schema to assist the processing and understanding of new unfamiliar information
(Anderson & Pearson, 1984) (p. 4)
Digital reading: IM, Ability to engage in textual reading and interactions
Siegenthaler, Wurtz, Bergamin, and Groner (2011)
Digital reading: IM, That e-ink technology allows for a reading process that is similar
to that of reading print (p. 2)
Singer and Alexander (2017)
Reading: EM, An active, constructive, meaning-making process . . . readers are
expected to form connections between their own prior knowledge and the ideas
expressed in or inferred by the text per se (p. 4)
Stern and Shalev (2013)
Reading: EM, Consuming information based on cognition. Information is encoded,
organized, stored, remembered, and applied (p. 11)
Wästlund, Reinikka, Norlander, and Archer (2005)
Reading: ECO, Reading is a complex process which involves multiple components
and is affected by various factors (p. 1)
Young (2014)
Reading: ECO, comprehension requires many different sub-processes, which include
knowledge integration, coherence, and parsing (p. 6)
Note. ECO = conceptual explicit; ECP = componential explicit; EO = operational explicit;
EM = multifaceted explicit; IM = implicit.
It is conceivable that the lack of dedicated definitions of digital reading reflects
the researchers’ unstated perception that the distinction between reading and read-
ing digitally has more to do with the context of the process and is not some recon-
ceptualization of the basic construct. We see evidence of that perception in the
implicit definition of digital reading we were able to assemble from Siegenthaler,
Wurtz, Bergamin, and Groner’s (2011) article:
Reading digitally requires [e-ink technology] that [allows for a reading process that is
similar to that of reading print].
Research Question 2: The Assessment of Comprehension
The second question that guided this analysis focused on the assessment of
comprehension within the literature on reading in print and digitally. For this anal-
ysis, we initially set out to examine the form of those assessments (e.g., multiple-
choice, free-recall, or constructed-response). Then, we wanted to explore the level
of understanding these assessments tapped (i.e., locate and recall, integrate and
interpret, or critique and evaluate; NAGB, 2008). We determined that there was
noticeable variability in the measures and approaches that researchers employed
to test participants’ comprehension. For example, 15 studies included multiple-
choice reading comprehension questions (e.g., 10 multiple-choice reading com-
prehension questions; Noyes, Garland, & Robbins, 2004). Six studies included
short constructed-response questions (e.g., three short-response items; Singer & Alexander,
2017). The remaining studies incorporated comprehension questions that were
unspecified, free recalls, or text summaries.
We also noted that a majority of researchers employed multiple comprehension
measures (61.11%). For example, Kerr and Symons (2006) used free recall, eight
cued-recall multiple-choice questions, and seven cued reading comprehension
multiple-choice questions to assess comprehension. Although this effort to include
multiple indicators of reading comprehension is a positive characteristic of the
charted literature, other documented patterns raised critical questions. For
instance, the psychometric properties of study measures were not always well
documented. Moreover, the majority of the studies (63.89%) involved researcher-
developed measures of reading comprehension rather than established assess-
ments, and in many of those instances (91.42%), the reliability and validity of
data from those researcher-developed measures were not provided. For example,
Noyes et al. (2004) detailed their comprehension task as consisting of “10 multi-
ple-choice questions followed by the administration of the NASA-TLX” (p.
112). Although their measure of comprehension may be a sound choice, the lack
of details regarding reliability and validity is concerning.
Research Question 3: Analyzing the Trends
The third issue about which we queried the literature pertained to what trends,
if any, could be discerned from the past quarter century. Building on prior work
(Alexander et al., 1996; Alexander et al., 2012; Alexander & Knight, 1993; Dumas
et al., 2015), we sought to ascertain whether trends shaping reading comprehen-
sion in print and digitally could be identified. Given the relatively short time span
for this trend analysis, we recognized that the ability to discern patterns would
prove challenging. For that reason, we initially broke the span of this analysis
(i.e., 25 years) into three time periods of roughly 8 years each (i.e., 1992–2000, 2001–2008, and
2009–2017). However, because we identified no published studies meeting our
criteria for 1992 to 2000, we excluded that time period from further trend analy-
sis. We then explored whether the components we charted (e.g., participants’ aca-
demic level, number of text sources) manifested dramatic (incremental), little
(stationary), and variable (iterative) changes. Thus, Table 4 displays the summary
of the components over the two time periods where publications occurred (2001–
2008 and 2009–2017).
Incremental Trends
We first examined the summarized data to identify if any of the charted com-
ponents displayed dramatic upward shifts (Alexander et al., 1996; Alexander
et al., 2012; Alexander & Knight, 1993; Dumas et al., 2015). It is important to
note that the identification of a trend as incremental was not predicated on any
specific numerical criterion. Rather, this designation indicated a clearly discern-
ible increase over time with no evidence of decrement. Based on that description,
we identified four components from this exploration that fit this dramatic
upward shift: number of studies conducted, data sources included and amount of
data gathered, the variety of digital devices employed, and the number of texts
processed.
Number of studies conducted. When we initiated this systematic review, it was our
expectation that we would find a number of empirical studies at each of the three
time intervals designated. Perhaps the most surprising finding from this review
was the determination that there were no studies involving the reading of texts
under both print and digital conditions between 1992 and 2000 that met our selec-
tion criteria. In contrast, there were 14 documented studies between 2001 and
2008 and 22 investigations published between 2009 and 2017. We regard this
movement from 0 to 14 to 22 as indicative of an incremental trend.
In trying to understand this development, we can make several observations.
For one, research on digital reading remained active during the 1992 to 2000 time
period (Chu, 1995; Hartas & Moseley, 1993; Soe, Koki, & Chang, 2000). The
nature of that research, however, made it inappropriate for inclusion in this review.
In effect, it appeared that the need to juxtapose processing in print to digital pro-
cessing proved a less pressing concern. Rather, efforts during this period seemed
to be allocated to demonstrating that “online reading” or “digital reading” was a
unique cognitive activity that warranted focused empirical energy (Kellner, 2000;
Lankshear & Bigum, 1999; Leu, Leu, & Leu, 1999). Furthermore, the resurgence
of interest that occurred in the subsequent period (2001–2008) coincided with
peak sales of personal computers (Reimer, 2005) and emerging debates over the
nature of reading in print and digital forms (Kerr & Symons, 2006; Kurniawan &
Zaphiris, 2001; Macedo-Rouet, Rouet, Epstein, & Fayard, 2003), which may have
served as catalysts.
TABLE 4
Summary of study components by publication period
Study component/level    2001–2008, n (%)    2009–2017 (a), n (%)
Charted studies 14 (38.89) 22 (61.11)
Data sources
One 8 (57.14) 4 (18.18)
Two 2 (14.29) 13 (59.09)
Three or More 4 (28.57) 5 (22.73)
Digital devices used
One 13 (92.86) 20 (90.90)
Two 1 (7.14) 2 (9.10)
Participant academic level
Early elementary 2 (14.29) 3 (13.63)
Middle school 1 (7.14) 2 (9.11)
High school 0 (0.00) 3 (13.63)
College 10 (71.43) 11 (50.00)
Postcollege 1 (7.14) 3 (13.63)
Text sources
One 10 (71.43) 12 (54.55)
Multiple 4 (28.57) 10 (45.45)
Digital devices
Computer 11 (78.57) 13 (59.09)
Other 3 (21.43) 9 (40.91)
Text genre
Narrative 2 (14.29) 5 (22.73)
Expository 12 (85.71) 11 (50.00)
Both 0 (0.00) 6 (27.27)
Text length
Shorter 4 (28.57) 4 (18.18)
Longer 3 (21.43) 4 (18.18)
Not provided 7 (50.00) 14 (63.64)
Manipulated texts
Manipulated 4 (28.57) 8 (36.36)
Not manipulated 10 (71.43) 14 (63.64)
Setting
Research 7 (50.00) 12 (54.54)
Instructional 6 (42.86) 8 (36.36)
Nonacademic 1 (7.14) 2 (9.11)
(a) All studies that were available as of the date of submission, May 1, 2017, were charted.
In the period between 2009 and 2017, the increased availability of personal
computers was followed by a sharp rise in the availability and affordability of
handheld devices, including smartphones and tablets (Zickuhr & Rainie, 2014). A
good number of these new devices were developed primarily for use as eBooks or
eReaders. Perhaps the growing presence of hypermedia both in and out of schools
and the growing popularity of digital reading devices spurred questions about the
effect of such mediums on students’ reading and learning from text.
Increased data sources. During the past quarter century, researchers also gath-
ered increasingly more data about readers engaged in the processing of print and
digital texts. As shown in Table 4, the majority of studies published between 2001
and 2008 (57.14%) reported only one indicator, which by default was a measure
of reading comprehension. In contrast, the majority of studies appearing between
2009 and 2017 included two (59.09%) or more (22.73%) data sources. This
noticeable shift in the number of data sources may be partially attributable to the
ease with which information can be collected not only prior to or after reading but
also during the processing of print and especially digital texts.
Support for the aforementioned contention can be found in the forms of data
collected. Specifically, for the period of 2001 to 2008, only one investigation
(7.14%) included any physiological data (e.g., eye movement or heart rate). In
that investigation, Wästlund et al. (2005) recorded heart rates of participants while
they engaged with the reading task. This compares to five (22.73%) studies pub-
lished between 2009 and 2017 that incorporated some form of physiological data
source. In one such study, Siegenthaler et al. (2011) recorded eye movements
using an infrared video eye-tracking device as college students read texts.
Variety of digital devices. There was also a sharp increase in the availability and
affordability of digital devices across the two time periods, including those dedicated
to digital reading, such as eBooks or eReaders (Tyner, 2014). Not surprisingly,
therefore, the use of noncomputer devices exhibited a noticeable increase from
2001–2008 (n = 3, 21.43%) to 2009–2017 (n = 9, 40.91%). For example, De
Jong and Bus (2004) used an animated eBook to examine reading comprehen-
sion, while Zambarbieri and Carniglia (2012) used a laptop and eReader as sepa-
rate conditions within their study. The focus on handheld and portable devices in
recent investigations of print and digital reading is understandable given the broad
popularity of eReaders. In fact, Zickuhr et al. (2012) reported that those reading
digitally read on average twice as many books as those reading print books.
Moreover, the actual design of digital reading devices has proven intriguing
to researchers, who have examined the effects of print size, scrolling features, or
ergonomic features on cognitive functioning (Stoop, Kreutzer, & Kircz, 2013). Thus,
the field has witnessed not only increased use of noncomputer devices in the
charted studies but also more empirical attention to the effects of differing digital
devices on text processing and reader comprehension (Kim & Anderson, 2008;
Margolin et al., 2013). This may help explain the appearance of nine (40.91%)
studies during the period of 2009 to 2017 that incorporated a digital device other
than a desktop computer. For example, Zambarbieri and Carniglia (2012) ana-
lyzed comprehension and eye movements of participants when reading from a
printed book, computer display, and a handheld eReader. What these researchers
largely determined was that there were no significant differences in comprehen-
sion across devices.
Number of texts processed. Perhaps the clearest example of a dramatic shift can
be seen in the number of texts that participants were required to process in print
and digitally. Were we to pose the question, “Did charted studies involve one or
more than one text,” the answer would seem rather definitive. Specifically, the
majority of charted studies (69.44%) entailed the reading of a single text under print
and digital conditions. For two of those studies, researchers actually used mul-
tiple segments from the same text to assess print and digital reading (Siegenthaler
et al., 2011; Zambarbieri & Carniglia, 2012).
However, this overall percentage is misleading when identified studies are
examined by two time frames—before 2003 and after. To be more precise, prior
to 2003, we identified no instances of multiple text sources being employed in the
research on print and digital reading. After 2003, 36.11% (13) of the charted stud-
ies used multiple texts to investigate print and digital reading. This sharp rise in
the use of multiple texts coincides with the emergence of theoretical and empirical
models of multiple source use that have begun to populate the literature. Those
models include the Multiple Documents-Task-Based Relevance Assessment and
Content Extraction (Rouet & Britt, 2011), the content-source integration model
(Stadtler & Bromme, 2007), and the discrepancy-induced source comprehension
model (Braasch, Rouet, Vibert, & Britt, 2012). Given the burgeoning interest in
multiple-text comprehension, and the new models that are populating the litera-
ture (e.g., cognitive affective engagement model; List & Alexander, 2017), we
expect this upward trend to continue into the years to come.
Stationary Trends
Based on our analysis of the literature, we identified four components of the
empirical research on print and digital reading that have remained rather consis-
tent over the past 25 years: participants’ academic level, text genres, research
settings, and task specifications.
Participants’ academic level. While interrogating the literature, we concluded
that the majority of studies (88.89%) involved school-age readers (i.e., early
elementary through college), and this pattern held whether the time frame was
2001–2008 or 2009–2017. In fact, of the 36 charted studies, only four included
postcollege or adult readers (e.g., DeZee, Durning, & Denton, 2005). Further-
more, when we considered the breakdown of school-age populations, we found
that only four studies had early elementary readers as the focus (e.g., Kim &
Anderson, 2008), and even fewer (n = 3) centered on middle school readers (e.g.,
Puhan, Boughton, & Kim, 2005).
By far the population of greatest interest in analyzed studies was college stu-
dents (e.g., Annand, 2008; Foasberg, 2014). This was true whether the studies
were published between 2001 and 2008 (71.43%) or 2009 and 2017 (50%). Why
readers at this academic level were selected, however, was left to speculation,
since it was rare for researchers to offer any rationale for their choice of under-
graduate readers. In fact, Singer and Alexander (2017) were the only researchers
targeting undergraduates who provided a theoretical or empirical justification for
this design decision. This pattern raises the question of whether college readers
represent a sample of convenience.
Text genre. Another rather consistent pattern over the course of this systematic
review related to the text genres employed and, more specifically, the genres read by
participants within age/grade groups. For instance, when we looked across the two
periods in terms of genre, we observed that exposition remained the favored form of text used in
the empirical studies (85.71% and 50%, respectively). However, the nature of
the trend for genre became even clearer when we examined the relation between
genre and participants’ academic level. Specifically, when the participants were
early elementary students, researchers relied solely on narrative texts (e.g., De
Jong & Bus, 2004; Jones, 2011; Kim & Anderson, 2008; Ortlieb, Sargent, &
Moreland, 2014). In contrast, 79.41% of the studies involving college readers
employed only expository text or exposition and narrative combined. Granted, it
may be understandable that narrative texts were more evident within these earlier
grades. However, the absence of any exposition for these younger readers is ques-
tionable, especially when attempting to understand how reading comprehension
transpires in print or digital mediums. For instance, the Reading Framework for
the 2009 NAEP (NAGB, 2008) has suggested that exposition should account for
50% of texts read in Grade 4. Furthermore, prominent reading researchers have
decried the lack of exposition within the early elementary grades (Dreher, 2003;
Duke, 2000; Duke & Pearson, 2008).
Research setting. Another rather consistent pattern over the course of this sys-
tematic review concerned the setting in which the research was conducted. As
mentioned, studies were coded into one of the three following setting categories:
instructional, research, or nonacademic. When we looked at the publications for
the two time periods, we found that the majority of studies took place in a research
setting (50% and 54.54%, respectively). For example, Siegenthaler et al. (2011)
monitored participants’ eye movements while they read novels and answered
comprehension questions. Given the sophisticated equipment required to monitor
eye movement, it is no surprise that this study, like the majority of the others in
our review, was conducted in a laboratory context.
Also, the number of studies conducted in classrooms or other academic contexts,
such as the school computer lab, held rather steady across the two time periods
(42.86% and 36.36%, respectively). Many of these researchers spoke to their desire to
collect data in a setting where reading in both mediums was familiar and natural
to their participants. For instance, Ortlieb et al. (2014) chose to conduct research
in an after-school reading intervention program to more naturalistically investi-
gate the role of medium in comprehension. Finally, there were very few studies in
either time period that collected data in a nonacademic setting, such as the read-
ers’ home (n = 3). Two of those studies involved young children reading with
parents in the home (e.g., De Jong & Bus, 2004), a common literacy activity at
this age (Baker, Dreher, & Guthrie, 2000).
Task specifications. When it came to examining the study tasks, we found it dif-
ficult to discern the precise nature of the directives given to participants. This
is surprising given that research suggests that task demands influence both the
processes and the products of text engagement (McCrudden, 2011; McCrudden,
Magliano, & Schraw, 2010). Nonetheless, even if not explicitly stated within the
text, we were able to discern a certain pattern that showed little variance over
time. Specifically, for the majority of studies in both time periods, participants
were directed to read in order to answer questions about what they read (85.71%
and 81.82%, respectively). Given this lack of task-specific information for most
of the analyzed studies, we were hampered in the determination of whether the
characteristics of task had a significant role to play in reading in print or digitally.
Iterative Trends
Iterative trends, according to Alexander and Knight (1993), are those move-
ments or events that continually reappear on the educational landscape. Due to the
rather restrictive time frame of this review, iterative trends may be more clearly
described as charted components displaying fluctuations. With that qualification
in mind, we identified two components that we regarded as iterative in charac-
ter: degree of text manipulation and text length.
Text manipulation. If one were to look solely at the number of studies involving
text manipulations in some form, it would appear that this charted component
would fit the description of a stationary trend. However, a deeper examina-
tion revealed a substantial degree of fluctuation across the two time periods. To
be more specific, in investigations of print and digital reading, a majority of
researchers during the two time periods elected to present texts in an unaltered
manner (71.43% and 63.64%, respectively). For example, Zambarbieri and Carni-
glia (2012) compared text processing while reading across a desktop PC, iPad,
eReader, and a print book but retained the character of the texts. In another study,
Ackerman and Lauterman (2012) counterbalanced the order in which the texts and
mediums were presented, but the two Microsoft Word documents that some participants
had read in print were simply uploaded onto a computer for those asked to read
digitally.
However, what are not conveyed in these overall percentages are the degree
and form of manipulation that occurred in a substantial number of studies dis-
persed across the two time frames (n = 11, 30.56%). Those manipulations included
changing font size and typeface, varying the line spacing, and altering the
number of columns on each page. For example, Stern and Shalev (2013) pre-
sented high school students with passages of equal difficulty in print and digital
form that had different spacing features (single-spaced text or double-spaced
text). These researchers wanted to determine if manipulating the spacing of text
affected participants’ reading comprehension abilities. What Stern and Shalev
reported was that there was no main effect for text spacing on comprehension.
However, there was a significant interaction between medium, spacing, and atten-
tion level of the reader. Specifically, when the high schoolers had good attention
as measured by eye-tracking, they performed best when the text was digital and
single-spaced. Conversely, when attention was medium or poor, participants per-
formed worse on the digital, single-spaced text.
Scrolling was another text feature that researchers intentionally manipulated. In one study, Singer and Alexander (2017) reduced the length of selected passages so that the target text would fit on a single page. Their reason for this manipulation was to eliminate the need for scrolling and thereby control for the increased cognitive load brought about by navigational issues when scrolling (Wästlund, 2007). Other researchers have investigated the effects of scrolling on readers' performance in digital environments. For example, Kerr and Symons (2006) manipulated the digital text precisely to require scrolling for some readers. On the basis of this manipulation, these researchers concluded that the demands associated with scrolling contributed to more efficient processing in print than in digital form.
Alternatively, Kurniawan and Zaphiris (2001) tested the effects of one-, two-,
and three-column formats on middle-aged and senior readers’ preferences for and
understanding of texts. What they determined from this manipulation was that the
number of columns did not matter for these mature readers. Interestingly, these
older readers were found to read more quickly in print than digitally, a finding that
was an anomaly with regard to the relation between speed and medium within this
review.
Of all the studies that incorporated text manipulation, the one by De Jong and
Bus (2004) was clearly an outlier due to the degree of manipulation. These
researchers were investigating children’s emergent story understanding across
print and digital stories. Toward that end, they created digital versions of a fiction
story for younger readers that included animations placed throughout the text. De
Jong and Bus also added interactive features, such as the option to have an ani-
mated character start or stop reading the words to the participant. What these researchers found was that the inclusion of text animations and other augmentations did not increase these emerging readers' comprehension, as they had predicted. In fact, it was noted that these emergent readers did better on all indicators (i.e., words, phrases, and story structure) under the print versus the electronic condition. The researchers attributed this to the children's greater familiarity with printed rather than electronic texts. Given that this investigation was conducted
more than a decade ago and in light of the markedly increased presence of digital
materials in the lives of young children, it would be interesting to empirically
determine if the same outcomes would result today.
Text length. Our initial intention in charting text length was to examine whether this factor was associated with comprehension outcomes when participants read in print and digitally. Based on prior studies, there was the expectation that longer texts requiring scrolling would increase the demands of reading digitally. For
that analysis, we classified texts as shorter (i.e., ≤500 words or one page or less) or longer (i.e., >500 words or more than one page) in length. However, in terms of trends, we determined that half or more of the studies in both the 2001–2008 and 2009–2017 periods provided no information about the specific length of texts used (50% and 68.18%, respectively). Yet, among those studies where length was specified (41.67%), we
observed that there was a high level of variability in the length of texts used across
the time periods. For instance, as shown in Table 4, there were comparable num-
bers of studies for the two time periods that used shorter and longer texts. How-
ever, those texts ranged in length from a single sentence (Lenhard, Schroeders, &
Lenhard, 2017) to 2,500-word documents (Stakhnevich, 2002).
Even more intriguing was the outcome we identified when text length was juxtaposed with the outcomes reported for print or digital mediums. Specifically, when we took into account the length of texts processed in print or digitally, the otherwise fuzzy picture of the relation between medium and comprehension outcomes came into clearer focus. More precisely, there were conflicting findings reported across the charted investigations as to whether the medium of delivery related in any meaningful way to participants' comprehension. Some studies reported that
reading comprehension was better in print than in digital (e.g., Mangen, Walgermo,
& Bronnick, 2013; Noyes et al., 2004), while others documented better compre-
hension when readers processed texts digitally rather than in print (e.g., Kerr &
Symons, 2006; Verdi, Crooks, & White, 2014). Still other researchers found no
significant differences in reading comprehension for print or digital mediums
(e.g., Akbar, Al-Hashemi, Taqi, & Sadeq, 2013; H. K. Lee, 2004; Rockinson-
Szapkiw, Courduff, Carter, & Bennett, 2013; Young, 2014).
However, on closer examination, we were able to discern an association
between text length and medium. Specifically, when the texts being processed
were shorter in length, there was no significant effect for medium on comprehen-
sion (e.g., Ali et al., 2013; Dundar & Akcayir, 2012; Eden & Eshet-Alkalai, 2013;
Margolin et al., 2013) or comprehension was significantly better in the digital
versus print medium (e.g., Kerr & Symons, 2006). However, when the text
involved more than 500 words or took up more than a page of the book or screen,
comprehension scores were significantly better for print than for digital reading
(e.g., Davis & Neitzel, 2012; Mangen et al., 2013; Mayes et al., 2001). This inter-
action between text length and medium, which we regard as an important finding,
was evidenced for 91.67% of the charted studies in which researchers specified
the text lengths being processed.
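To make concrete the kind of charting that underlies this length-by-medium pattern, the brief sketch below shows one way such a cross-tabulation could be computed. It is purely illustrative: the study entries, field layout, and helper function are hypothetical stand-ins, not the charted data or any tooling used in this review.

    # Illustrative cross-tabulation of text length category by reported medium outcome.
    # All study entries below are hypothetical placeholders, not charted data from this review.
    from collections import Counter

    # Each tuple: (study label, reported text length in words or None, outcome).
    # Outcomes: "print" = print advantage, "digital" = digital advantage, "none" = no difference.
    charted_studies = [
        ("Study A", 300, "none"),
        ("Study B", 1200, "print"),
        ("Study C", None, "none"),
    ]

    def length_category(words):
        # Shorter texts are 500 words or fewer; longer texts exceed 500 words.
        if words is None:
            return "unspecified"
        return "shorter" if words <= 500 else "longer"

    tally = Counter((length_category(words), outcome)
                    for _, words, outcome in charted_studies)
    for (length, outcome), count in sorted(tally.items()):
        print(f"{length} text, {outcome} outcome: {count} study/studies")

Run over a full set of charted studies, a tabulation of this kind would surface the association described above: no medium effect (or a digital advantage) for shorter texts, and a print advantage for longer ones.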
The sole exception to this length-by-medium pattern was the study by Stakhnevich (2002) in which English as a Second Language (ESL) learners read about Mississippi culture and history and then completed a 20-item multiple-choice comprehension measure. What Stakhnevich found was that participants, who were students who had arrived in the United States within the past 2 weeks and whose primary language was not English, performed better on a comprehension measure after reading 2,500-word texts digitally. However, there are several features of this specific investigation that might have led to its deviation from the observed length-by-medium pattern. For one, the study involved a special population (i.e., ESL learners) and the processing of texts on an unfamiliar topic (i.e.,
Mississippi culture and history). The digital version of the texts included an online
glossary and dictionary access, which surely would be helpful for newly enrolled
ESL students. Furthermore, the small sample size (n = 31) and lack of data sources
beyond the comprehension measure are certainly limitations to weigh when con-
sidering this outlier.
Despite the aforementioned exception, the pattern that emerged for medium by
text length could perhaps be explained by processing differences when texts are
presented in segments versus continuously. In effect, shorter texts, whether on
paper or on screen, could be taken in via routine saccadic eye movements and
fixations (Siegenthaler et al., 2011). By contrast, longer texts conveyed digitally
required readers to scroll between portions of the text, and the empirical evidence
suggests that frequent scrolling within a digital environment increases the cogni-
tive demands on readers (Proaps & Bliss, 2014; Wästlund, 2007). Such increased
cognitive demands could, therefore, translate into diminished comprehension or poorer recall, even for older participants processing digital texts (Wästlund, 2007).
Research Question 4: Similarities and Differences in Trends Across Time
Our fourth research question called for a comparison of the incremental, sta-
tionary, and iterative trends just described with those previously reported. To
make this comparison, we turn back to the findings that Dillon reported in his
1992 review, which was the takeoff point for this current investigation. Although
the trends unearthed in this review were more numerous than those that Dillon forwarded, we identified two points of comparison that merit discussion:
the design and conduct of the reviews, and evidence-based conclusions.
Design and Conduct of Reviews
There were two shared attributes between Dillon’s (1992) review and the cur-
rent analysis of the literature. Fundamentally, both reviews were undertaken for the
purpose of understanding the relative benefits and consequences of reading in print
or digitally by analyzing the work of others. Toward that end, Dillon and we elected
to consider only those published works in which participants processed text under
both print and digital conditions—a commonality that was an artifact of search parameters and selection criteria. Another artifact, in this case a difference between the two reviews, pertained to temporal scope. Although Dillon never explicitly stated the time frame for his review, we deduced from the works he referenced that his analysis spanned 34 years, with the earliest piece appearing in 1958 and the most recent in 1992. Our current review, in contrast, encompassed 25 years (1992–2017). Beyond this common ground, however, there were several features of the design and conduct of the two reviews that set them apart.
For one, as mentioned, Dillon’s (1992) review was not systematic in its design.
In effect, it was not evident how the relevant literature was sampled in that review
or what other criteria beyond the identification of studies involving both print and
digital reading were established to determine the inclusion or exclusion of works.
Furthermore, no systematic charting of the identified studies was referenced or provided. This omission complicated the ability to judge the validity or strength of
Dillon’s conclusions. Our review differs in that we systematically searched and
charted the literature (see Supplementary Table S1 in the online version of this
journal) and based our findings on documented outcomes.
In addition, while the emphasis of the 1992 review was on empirical studies,
Dillon referenced other forms of publications in substantiating his findings, such
as Tinker’s (1958) literature review. In contrast, our inclusion criteria required
articles to be empirical publications. Furthermore, while not explicitly stated, the
articles included in Dillon’s review did not necessarily include any explicit mea-
sure of reading comprehension or performance. Instead, studies that collected
only self-reported preference data were also analyzed (e.g., Cakir, Hart, & Stewart,
1980). By comparison, our inclusion criteria required that researchers measure participants' comprehension in some manner and that the study rely on more than
self-report data. Our overarching goal was to construct a foundation for under-
standing the potential influence of medium (i.e., print or digital) on what readers
understood or recalled. We also wanted to base our interpretations on something
other than readers’ self-perceptions, which are often inaccurate (Ackerman &
Goldsmith, 2011; Hacker, Bol, & Bahbahani, 2008; Singer & Alexander, 2017).
Evidence-Based Conclusions
As stated, questions can also be raised about the strength of Dillon’s (1992)
conclusions and the substantiation of resulting claims. Dillon, in effect, made cer-
tain claims about reading in print or digitally but tended to support such claims
with descriptions of particular investigations rather than with data drawn from the
collection of studies. For example, Dillon’s review concluded, “Without evidence
to the contrary though, it would seem as if reading from VDUs [visual display
units] does not negatively affect comprehension rates” (p. 9). As support for this
contention, Dillon stated,
The most recently published study covering this issue is by Muter and Maurutto (1991)
who asked readers to answer questions about a short story read either on paper or screen
immediately after finishing the reading task. They reported no significant comprehension
difference between readers using either medium. (p. 9)
Whether the Muter and Maurutto (1991) study was typical or an anomaly was
left to the imagination. As a counter to Dillon’s (1992) approach, we sought to
discuss outcomes that represented trends across a body of empirical work and that
could, thus, be quantitatively supported.
Furthermore, Dillon’s (1992) approach was to present findings factor by factor
(e.g., reading rate or eye movement) in his review. For example, there was an
entire section of the review summarizing four articles that had incorporated eye
movement data. As a consequence of this approach, Dillon did not attempt to
consolidate such individual findings in any significant way. Nor did Dillon look
expressly at the potential interaction among variables, such as the role that text
length might play in reading performance in print or digital condition. Moreover,
Dillon’s approach to examining isolated factors such as reading rate, while over-
looking the interplay with other factors like text length, may have masked impor-
tant conclusions to be reached. We see this approach as contributing, in part, to
Dillon’s tendency to report more inconclusive findings, which he attributed to
“the variety of methodologies, procedures and stimulus materials employed in
these studies” (p. 11).
Conclusions and Implications
It is important to state that we did not undertake this review to judge whether reading digitally belongs in our society. The ubiquity of reading digitally has already answered that question. In fact, as time and technology progress, the convenience of reading digitally only strengthens its hold. Indeed, the ubiquity of technology is one of
the reasons a systematic review of reading in print and digitally seems warranted.
To our knowledge, this is the only systematic review on the topic of reading in
different mediums since 1992 that juxtaposes the contemporary field of reading
digitally against the long-established and deep-rooted research on reading in print.
Our goal of understanding print vis-à-vis digital reading increases in urgency
as high-stakes assessments move to digital formats. For example, undergraduate
and graduate entrance assessments such as the Scholastic Aptitude Test (College
Board, 2009) and the Graduate Record Examination (Educational Testing Service,
2013) are primarily administered digitally. Furthermore, national and interna-
tional assessments, such as the Programme for International Student Assessment
(PISA, 2015) and NAEP (NAGB, 2008), are instituting digital administration, as
well as scenario-based tasks that incorporate digital literacy. Consequently, we
have a responsibility to try to understand what consequences may arise for readers
when high-stakes reading assessments are not only delivered digitally but also
include features such as animations or video.
Furthermore, researchers have a responsibility to define what they mean by
reading and to indicate whether that general definition suffices regardless of
medium (i.e., print or digitally) or of the digital features that are introduced into
text. Our systematic review determined that the majority of studies failed to define
either reading or digital reading. Moreover, those relatively few researchers who
did explicitly or implicitly define reading did not seem compelled to similarly
define digital reading explicitly or implicitly (e.g., Kerr & Symons, 2006;
Kurniawan & Zaphiris, 2001; Lenhard et al., 2017). In those instances, we might
assume that these researchers perceived no difference for the processing of text
across print and digital mediums—a perception that has been questioned by oth-
ers (e.g., Coiro, 2011; Leu, Kinzer, Coiro, Castek, & Henry, 2013).
Conversely, for those even rarer researchers who expressly defined reading
and reading digitally (Margolin et al., 2013; Mayes et al., 2001; Ortlieb, Sargent,
& Moreland, 2014), the focus was often on the unique processing demands that
come with processing in an online environment. In effect, for these researchers,
there appeared to be an intention to distinguish between reading digitally, where
traditional texts are simply delivered via hypermedia with few enhancements
(Bodmann & Robinson, 2004), and digital reading (Singer et al., 2017), where the
ability to function within the Internet world instigates new cognitive processes or
processing skills for navigating the many elements and features on websites,
including text. The high-stakes assessment of reading is not immune to the conun-
drum of distinguishing between reading digitally and digital reading. In fact, the
Programme for International Student Assessment 2015 Reading Literacy
Framework specifies that its digital assessment of reading relies "on a set of
fundamental skills for using computers” (Organisation for Economic Co-operation
and Development, 2015, p. 29).
Beyond this more conceptual quandary, there was other news, more or less
positive, to report. On one hand, the reviewed studies provided more information
about what was read than how reading was defined. Of the 36 included studies, 33
provided some details about aspects of the text, such as text type and length. On
the other hand, the details provided were often insufficient. Specifically, of the 33
studies that provided any information about the text, only 8 provided details
regarding both text type and length. This lack of specificity concerning aspects of
the text is particularly problematic because research has established that aspects
of the text, such as text type and length, play an important role in reading compre-
hension (Graesser, McNamara, & Louwerse, 2003; Kendeou, Muis, & Fulton,
2011; Kintsch, 1980). Certainly, in this review, we were able to understand the
potential comprehension decrement that comes from reading longer texts digitally
rather than in print. There may be even more to uncover about textual aspects and
their effect on reading comprehension for print and digital mediums, but those
understandings will remain buried without sufficient probing.
In addition to the aforementioned need for details on textual aspects, there is a
need for more clarification regarding individual difference factors and text processing in print or digitally. Simply stated, individual difference factors are the
variations or deviations among individuals with regard to the characteristics
shown to play a significant role in human learning and development (e.g., work-
ing memory, academic ability, gender; Gagné & Glaser, 1987). In the case of
reading in print and digitally, individual difference factors such as reading rate,
vocabulary knowledge, and topic knowledge have been shown to be particularly
pertinent (Afflerbach, 2015; Luke, Henderson, & Ferreira, 2015). Surprisingly,
very few studies in this review considered such relevant individual difference fac-
tors as fluency or topic knowledge as potential explanations for performance out-
comes between print and digital reading (Kendeou et al., 2011). Thus, assessing
the role of individual difference factors could help clarify patterns in comprehen-
sion performance across mediums.
Another area of concern throughout our review was what was measured within
the studies. Although our criteria excluded studies that relied strictly on self-
report data, this conservative filter still revealed shortcomings in measurement.
For one, the majority (63.89%) of reading comprehension measures used were
researcher developed. The psychometric properties of researcher-developed mea-
sures were often underreported and, even when reported, did not convey compel-
ling evidence of strong validity and reliability. Furthermore, researcher-developed
measures are generally configured specifically to the goals of the study, which can
result in more favorable outcomes than would be realized from standardized or
well-established indicators (Kimberlin & Winterstein, 2008). At a minimum, this pattern of results calls for researchers investigating print and digital reading to be far more forthcoming about the psychometrics of all measures used and to consider a variety of measurement tools, including well-calibrated and well-established indicators of performance.
Moreover, interrogating the findings across studies in this review was difficult
without detailed descriptions of comprehension measures including question for-
mat (i.e., multiple-choice or short constructed-response), scoring criteria, and item
difficulty levels. For one, within the broader assessment literature, multiple-choice
questions often target the location and recall of specific information, whereas con-
structed-response questions can require participants to forge more complex infer-
ences or to critique and evaluate the text (NAGB, 2008). However, because the
majority of the studies employed multiple-choice measures, there were inadequate
data to allow us to consider the differences by question format as we had intended.
In those instances when information on comprehension questions was delin-
eated—whether researcher-developed or standardized and whether multiple-choice
or constructed-response—the studies rarely examined different levels of compre-
hension (i.e., main idea or supporting details). In fact, only 8.33% of studies man-
aged to probe comprehension on more than one level. In one study that incorporated
multiple levels of comprehension (Singer & Alexander, 2017), the authors con-
cluded that no significant differences in comprehension outcomes by medium
emerged for larger grain size questions (e.g., identify the main idea). In contrast,
when questions were more detailed or specific in nature (e.g., identify the support-
ing points), readers performed significantly better when reading in print. This find-
ing suggests that looking only globally at comprehension outcomes when concerned
about the role of reading medium may well mask important differences that can be
seen only when the grain size of understanding is systematically assessed.
Beyond the concerns for format and level of comprehension assessment, there
is another element to reading in print and reading digitally that merits consider-
ation. Specifically, conducting this review led us to question another dimension
that was often overlooked within the included studies—the nature of the task
undertaken by participants. Because this review required all studies to have a
measure of comprehension, participants were always engaging with an awareness
that, minimally, their recall or interpretation of text content would be assessed.
Within this review of the literature, it was rarely explicitly stated by the research-
ers how or if the task was communicated to the participants prior to reading. It has
been documented that task demands influence both the processes and the products
of text engagement (McCrudden, 2011; McCrudden, Magliano, & Schraw, 2010).
As such, researchers need to be aware of the affordances and consequences that
the tasks they devise have for participants, be detailed in their descriptions of
those experimental tasks, and consider task features when interpreting their
outcomes.
As this consideration of the nature of reading, the psychometric properties of
study measures, learner characteristics, and the features of the task suggest, there
are multiple factors influencing reading in print and digital forms that must be
weighed in any emergent models of comprehension. Moreover, it is our conten-
tion that these factors operate in conjunction, and therefore they cannot be exam-
ined in isolation. For example, when we considered text length and question type
as potential explanatory factors, we were able to unearth differences in compre-
hension by medium. In effect, when longer texts are involved or when individuals
are reading for depth of understanding and not solely for gist, print appears to be
the more effective processing medium (e.g., Lenhard et al., 2017; Mangen et al.,
2013; Singer & Alexander, 2017).
Another example of this complexity was found when examining beginning
readers between the ages of 5 and 6. These data revealed that when children of this
age read simple texts, medium appears to have little influence on comprehension
outcomes (e.g., De Jong & Bus, 2004; Dundar & Akcayir, 2012). However, for
readers of other ages, such as high school students, engaged in the processing of
more complex texts, the findings suggest that medium type matters in comprehen-
sion (e.g., Eshet-Alkalai & Geri, 2009; Lenhard et al., 2017). For example,
Lenhard et al. (2017) concluded that although participants read more quickly in the digital medium, this speed came with shallower processing of the text. In effect, under these
circumstances, medium type plays a more significant role in comprehension out-
comes. In future efforts to understand how text processing unfolds in print vis-à-
vis digitally, researchers can either choose to ignore the complexity that confronts
them or embrace it as part of a more comprehensive and integrated research
design. It is our recommendation that the interplay of relevant factors should be
routinely considered if researchers are attempting to fine-tune their understand-
ings of the effects of reading in print and digitally for learning and performance.
Yet another unexplored area for future inquiry pertains to the form of digital
device being employed. On a positive note, we unearthed a trend of an increasing
number of studies choosing to examine the differences across multiple digital
devices (e.g., De Jong & Bus, 2004; Tyner, 2014). In light of the continuing
advancements in technology for reading digitally, this is a welcome change and
one we would expect to see continue in the years to come. One reason we encour-
age this pursuit is that perhaps the differences that occur when reading in print and
digitally are partly due to the neurocognitive processes instigated by particular
features of digital devices. For example, visual legibility of digital texts, basic to
word processing and comprehension, is influenced by several factors, including
backlighting and luminance contrast (Stoop et al., 2013). By examining more than
one digital device, researchers can better pinpoint the optimal conditions for read-
ing digitally. Our field must endeavor to understand where the visual ergonomic
differences across devices fit into the comprehension calculus.
In addition, future studies need to focus on capturing processing data. Such
real-time measures of what is occurring while reading in print and digitally will
offer critical information regarding the processing of text. For example, research
has demonstrated that scrolling affects comprehension. As a case in point,
Catalado and Oakhill (2000) found that comprehension worsens when readers have to search through text that requires extended scrolling. Nonetheless, we have
very few means of capturing the process of reading as it unfolds in real time.
It is important to highlight that this review was undertaken in the hope of serv-
ing as a starting point for theoretical models of reading comprehension that address
critical dimensions such as learner differences, text characteristics, and task
demands. As this systematic review has reinforced, there are basic learner differ-
ences that need to be incorporated in an emerging model of reading in print or digi-
tally, including learners’ age, their reading ability, and their relevant background
knowledge. However, it is advisable that such models deal with these dimensions in an interactive way. In addition to the aforementioned areas, medium must be fac-
tored into theoretical models. This review, in conjunction with other research, will
help set the parameters for what may constitute a viable model in our field.
No matter how complex the question of reading across mediums may be,
teachers and students must understand how and when to employ a digital reading
device. It is fair to say that reading digitally is part and parcel of living and
learning in the 21st century. Nonetheless, there is unquestionably a place for print
in schools and in the lives of students outside of school. For those invested in
understanding and promoting student learning, therefore, there is little gained
from setting up a false dichotomy between reading and digital reading.
Consequently, we must arm ourselves with empirical evidence of when, where,
and for whom greater benefits are accrued from reading in print, digitally, or in
combination. Researchers should consider conducting a meta-analysis of the rel-
evant literature in the future to further advance our knowledge on this topic.
Reading in print or digital form should not be a horse-race question. One medium
will not and should not be regarded as routinely better for comprehension.
Although the question regarding differences in comprehension across mediums is
a complex one, we cannot turn a blind eye to its exploration, because digital texts
are pervasive in students’ and teachers’ lives. Although Proulx (1994) may be
unsettled by the ubiquity of digital reading devices present today, perhaps she was
correct in her assessment of the timeless pleasure gained from reading a printed
book. Both mediums appear to have a place in literacy and in learning that must
be more fully appreciated.
References
References marked with an asterisk indicate studies included in the literature review.
Ackerman, R., & Goldsmith, M. (2011). Metacognitive regulation of text learning: On
screen versus on paper. Journal of Experimental Psychology: Applied, 17, 19–32.
doi:10.1037/a0022086
*Ackerman, R., & Lauterman, T. (2012). Taking reading comprehension exams on
screen or on paper? A metacognitive analysis of learning texts under time pressure.
Computers in Human Behavior, 28(5), 1816–1828.
Afflerbach, P. (Ed.). (2015). Handbook of individual differences in reading: Reader,
text, and context. New York, NY: Routledge.
*Akbar, R., Al-Hashemi, A., Taqi, H., & Sadeq, T. (2013). Efficacy of learning: Digital
sources versus print. Journal of Education and Practice, 4(8), 98–114.
Alexander, P. A., & Dochy, F. J. (1995). Conceptions of knowledge and beliefs: A
comparison across varying cultural and educational communities. American
Educational Research Journal, 32, 413–442.
Alexander, P. A., & Knight, S. L. (1993). Dimensions of the interplay between learning
and teaching. Educational Forum, 57, 232–245.
Alexander, P. A., Murphy, P. K., & Greene, J. A. (2012). Projecting educational psy-
chology’s future from its past and present: A trend analysis. In K. A. Harris, S.
Graham, & T. Urdan (Eds.), Educational psychology handbook: Vol. 1. Theories,
constructs, and critical issues (pp. 3–32). Washington, DC: American Psychological
Association.
Alexander, P. A., Murphy, P. K., & Woods, B. S. (1996). Research news and comment:
Of squalls and fathoms: Navigating the seas of educational innovation. Educational
Researcher, 25(3), 31–39.
Alexander, P. A., Schallert, D. L., & Hare, V. C. (1991). Coming to terms: How
researchers in learning and literacy talk about knowledge. Review of Educational
Research, 61, 315–343.
Alexander, P. A., Schallert, D. L., & Reynolds, R. E. (2009). What is learning anyway?
A topographical perspective considered. Educational Psychologist, 44(3), 176–192.
Ali, A. Z. M., Wahid, R., Samsudin, K., & Idris, M. Z. (2013). Reading on the computer
screen: Does font type has effects on web text readability? International Education
Studies, 6(3), 26–35.
Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes
in reading comprehension. In P. D. Pearson, R. Barr, M. L. Kamil, & P. Mosenthal
(Eds.), The handbook of reading research (pp. 255–292). New York, NY: Longman.
*Annand, D. (2008). Learning efficacy and cost-effectiveness of print versus e-book
instructional material in an introductory financial accounting course. Journal of
Interactive Online Learning, 7, 152–164.
Baggetta, P., & Alexander, P. A. (2016). Conceptualization and operationalization of
executive function. Mind, Brain and Education, 10, 10–33.
Baker, L., Dreher, M., & Guthrie, J. T. (Eds.). (2000). Engaging young readers:
Promoting achievement and motivation. New York, NY: Guildford Press.
*Bodmann, S. M., & Robinson, D. H. (2004). Speed and performance differences
among computer-based and paper-pencil tests. Journal of Educational Computing
Research, 31(1), 51–60.
Braasch, J. L., Rouet, J-F., Vibert, N., & Britt, M. A. (2012). Readers’ use of source
information in text comprehension. Memory & Cognition, 40, 450–465.
Bråten, I., & Strømsø, H. I. (2011). Measuring strategic processing when students read
multiple texts. Metacognition and Learning, 6, 111–130.
Cakir, A., Hart, D. J., & Stewart, T. F. M. (1980). Visual display terminals: A manual
covering ergonomics, workplace design, health and safety, task organization. Ann
Arbor, MI: University Microfilms International.
Castells, M. (2011). The information age: Economy, society, and culture: Vol. 1. The
rise of the network society (2nd ed.). Chichester, England: John Wiley.
Catalado, M. G., & Oakhill, J. (2000). The effect of text organization (original vs.
scrambled) on readers’ ability to search for information. Journal of Educational
Psychology, 92, 791–799.
Chu, M. L. L. (1995). Reader response to interactive computer books: Examining liter-
ary responses in a non-traditional reading setting. Literacy Research and Instruction,
34, 352–366.
Coiro, J. (2011). Predicting reading comprehension on the internet contributions of
offline reading skills, online reading skills, and prior knowledge. Journal of Literacy
Research, 43, 352–392.
College Board. (2009). The Scholastic Aptitude Test. New York, NY: Author.
Dalton, B., Proctor, C. P., Uccelli, P., Mo, E., & Snow, C. E. (2011). Designing for
diversity: The role of reading strategies and interactive vocabulary in a digital read-
ing environment for fifth-grade monolingual English and bilingual students. Journal
of Literacy Research, 43, 68–100.
*Davis, D. S., & Neitzel, C. (2012). Collaborative sense-making in print and digital
text environments. Reading and Writing, 25, 831–856.
*De Jong, M. T., & Bus, A. G. (2004). The efficacy of electronic books in fostering
kindergarten children’s emergent story understanding. Reading Research Quarterly,
39, 378–393.
*DeZee, K. J., Durning, S., & Denton, G. D. (2005). Effects of electronic versus print
format and different reading resources on knowledge acquisition in the third-year
medicine clerkship. Teaching and Learning in Medicine, 17, 349–354.
Dillon, A. (1992). Reading from paper versus screens: A critical review of the empiri-
cal literature. Ergonomics, 35, 1297–1326. Retrieved from https://www.ischool
.utexas.edu/~adillon/Journals/Reading.htm
DiMaggio, P., & Hargittai, E. (2001). From the “digital divide” to “digital inequality”:
Studying Internet use as penetration increases (University Working Paper No. 15).
Princeton, NJ: Center for Arts and Cultural Policy Studies.
Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual
lens on metacognition, self-regulation, and self-regulated learning. Educational
Psychology Review, 20, 391–409.
Dreher, M. J. (2003). Motivating struggling readers by tapping the potential of infor-
mation books. Reading & Writing Quarterly, 19, 25–38.
Duke, N. K. (2000). 3.6 minutes per day: The scarcity of informational texts in first
grade. Reading Research Quarterly, 35, 202–224.
Duke, N. K., & Pearson, P. D. (2008). Effective practices for developing reading com-
prehension. Journal of Education, 189, 107–122.
Dumas, D., Alexander, P. A., & Singer, L. M. (2015). Analyzing historical patterns,
examining current trends, and forecasting change in the field of educational psychol-
ogy: A cross-cultural perspective. Knowledge Cultures, 3(2), 7–18.
*Dundar, H., & Akcayir, M. (2012). Tablet vs. paper: The effect on learners' reading
performance. International Electronic Journal of Elementary Education, 4, 441–
450.
*Eden, S., & Eshet-Alkalai, Y. (2013). The effect of format on performance: Editing
text in print versus digital formats. British Journal of Educational Technology, 44,
846–856.
Educational Testing Service. (2013). The Graduate Record Examination. Princeton,
NJ: Author.
Eshet-Alkalai, Y., & Geri, E. (2007). Does the medium affect the message? The influ-
ence of text representation format on critical thinking. Human Systems Management,
26, 269–279.
*Eshet-Alkalai, Y., & Geri, E. (2009). Changes over time in digital literacy.
CyberPsychology & Behavior, 12, 713–715.
*Foasberg, N. M. (2014). Student reading practices in print and electronic media.
College & Research Libraries, 75, 705–723.
Franze, J., Marriott, J., & Wybrow, M. (2014, September). What academics want when
reading digitally. In Proceedings of the 2014 Symposium on Document Engineering
(pp. 199–202). New York, NY: ACM.
Gagné, R. M., & Glaser, R. (1987). Foundations in learning research. In R. M. Gagné
(Ed.), Instructional technology: Foundations (pp. 49–83). Hillsdale, NJ: Lawrence
Erlbaum.
Giebelhausen, R. (2015). The paperless music classroom. General Music Today, 29(2),
45–49.
*Gill, K., Mao, A., Powell, A. M., & Sheidow, T. (2013). Digital reader vs. print media:
The role of digital technology in reading accuracy in age-related macular degenera-
tion. Eye, 27, 639–643.
Graesser, A. C., McNamara, D. S., & Louwerse, M. M. (2003). What do readers need
to learn in order to process coherence relations in narrative and expository text? In
A. P. Sweet & C. E. Snow (Eds.), Rethinking reading comprehension (pp. 82–98).
New York, NY: Guilford.
Hacker, D. J., Bol, L., & Bahbahani, K. (2008). Explaining calibration accuracy in
classroom contexts: the effects of incentives, reflection, and explanatory style.
Metacognition and Learning, 3, 101–121.
Hartas, C., & Moseley, D. (1993). “Say that again, please”: A scheme to boost reading
skills using a computer with digitized speech. Support for Learning, 8, 16–21.
Jenkins, J. J. (1974). Remember that old theory of memory? Well, forget it. American
Psychologist, 29, 785–795.
Jones, A. (2011). Seeing the messiness of academic practice: Exploring the work of
academics through narrative. International Journal for Academic Development, 16,
109–118.
Kellner, D. (2000). New technologies/new literacies: Reconstructing education for the
new millennium. Teaching Education, 11, 245–265.
Kendeou, P., Muis, K. R., & Fulton, S. (2011). Reader and text factors in reading com-
prehension processes. Journal of Research in Reading, 34, 365–383.
*Kerr, M. A., & Symons, S. E. (2006). Computerized presentation of text: Effects on
children’s reading of informational material. Reading and Writing, 19, 1–19.
*Kim, J. E., & Anderson, J. (2008). Mother-child shared reading with print and digital
texts. Journal of Early Childhood Literacy, 8, 213–245.
Kimberlin, C. L., & Winterstein, A. G. (2008). Validity and reliability of measurement
instruments used in research. American Journal of Health-System Pharmacy, 65,
2276–2284.
Kintsch, W. (1980). Learning from text, levels of comprehension, or: Why anyone
would read a story anyway. Poetics, 9, 87–98.
Kintsch, W. (1988). The use of knowledge in discourse processing: A construction-
integration model. Psychological Review, 95, 163–182.
*Kurniawan, S. H., & Zaphiris, P. (2001). Reading online or on paper: Which is faster?
Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.2890
&rep=rep1&type=pdf
Labrecque, L. I., vor dem Esche, J., Mathwick, C., Novak, T. P., & Hofacker, C. F.
(2013). Consumer power: Evolution in the digital age. Journal of Interactive
Marketing, 27, 257–269.
Lankshear, C., & Bigum, C. (1999). Literacies and new technologies in school settings.
Curriculum Studies, 7, 445–465.
*Lee, H. K. (2004). A comparative study of ESL writers’ performance in a paper-based
and a computer-delivered writing test. Assessing Writing, 9, 4–26.
*Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus
print reading comprehension depends on task complexity and proficiency. Discourse Processes. Advance online publication.
Leu, D. J., Jr., Kinzer, C. K., Coiro, J., & Cammack, D. (2004). Toward a theory of new
literacies emerging from the Internet and other ICT. In R. B. Ruddell & N. Unrau
(Eds.), Theoretical models and processes of reading (5th ed., pp. 1568–1611).
Newark, DE: International Reading Association.
Leu, D. J., Jr., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2013). New literacies:
A dual level theory of the changing nature of literacy, instruction, and assessment. In
D. Alvermann, N. J. Urau, & R. B. Ruddell (Eds.), Theoretical models and processes
of reading (6th ed., pp. 1150–1182). Newark, DE: International Reading Association.
Leu, D. J., Jr., Leu, D. D., & Leu, K. R. (1999). Teaching with the Internet: Lessons
from the classroom (3rd ed.). Norwood, MA: Christopher-Gordon.
List, A., & Alexander, P. A. (2017). The cognitive affective engagement model of mul-
tiple source use. Educational Psychologist, 52, 182–199.
Luke, S. G., Henderson, J. M., & Ferreira, F. (2015). Children’s eye-movements during
reading reflect the quality of lexical representations: An individual differences
approach. Journal of Experimental Psychology, 41, 1675–1683.
*Macedo-Rouet, M., Rouet, J. F., Epstein, I., & Fayard, P. (2003). Effects of online
reading on popular science comprehension. Science Communication, 25, 99–128.
*Mangen, A., Walgermo, B. R., & Bronnick, J. (2013). Reading linear texts on paper
versus computer screen: Effects on reading comprehension. International Journal
of Educational Research, 58, 61–68.
*Margolin, S. J., Driscoll, C., Toland, M. J., & Kegler, J. L. (2013). E-readers, com-
puter screens, or paper: Does reading comprehension change across media plat-
forms? Applied Cognitive Psychology, 27, 512–519.
Mayer, R. E. (1997). Multimedia learning: Are we asking the right questions?
Educational Psychologist, 32, 1–19.
Mayer, R. E. (2011). Does styles research have useful implications for education prac-
tice? Learning and Individual Differences, 21, 319–320.
*Mayes, D. K., Sims, V. K., & Koonce, J. M. (2001). Comprehension and workload
differences for VDT and paper-based reading. International Journal of Industrial
Ergonomics, 28, 367–378.
McCrudden, M. T. (2011). Do specific relevance instructions promote transfer appro-
priate processing? Instructional Science, 39, 865–879.
McCrudden, M. T., Magliano, J. P., & Schraw, G. (2010). Exploring how relevance
instructions affect personal reading intentions, reading goals and text processing: A
mixed methods study. Contemporary Educational Psychology, 35, 229–241.
Murphy, P. K., & Alexander, P. A. (2000). A motivated exploration of motivation ter-
minology. Contemporary Educational Psychology, 25, 3–53.
Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and
books: The paperless office revisited? Behaviour & Information Technology, 10,
257–266.
National Assessment Governing Board. (2008). Reading Framework for the 2009
National Assessment of Educational Progress. Washington, DC: Author.
National Center for Education Statistics. (2013). The Nation’s Report Card: A First
Look: 2013 Mathematics and Reading (NCES 2014-451). Washington, DC: National
Center for Education Statistics, Institute of Education Sciences, U.S. Department of
Education.
*Noyes, J., Garland, K., & Robbins, L. (2004). Paper-based versus computer-based
assessment: Is workload another test mode effect? British Journal of Educational
Technology, 35, 111–113.
Organisation for Economic Co-operation and Development. (2015). Programme for
International Student Assessment PISA 2015 Reading Framework. Paris, France:
Author.
*Ortlieb, E., Sargent, S., & Moreland, M. (2014). Evaluating the efficacy of using a
digital reading environment to improve reading comprehension within a reading
clinic. Reading Psychology, 35, 397–421.
Pearson, P. D., & Hamm, D. N. (2005). The assessment of reading comprehension: A
review of practices—past, present and future. In Children’s reading comprehension
and assessment (pp. 13–79). Mahwah, NJ: Lawrence Erlbaum.
Proaps, A. B., & Bliss, J. P. (2014). The effects of text presentation format on reading
comprehension and video game performance. Computers in Human Behavior, 36,
41–47.
Proulx, E. A. (1994, May 26). Books on top. New York Times. Retrieved from http://
www.nytimes.com/books/99/05/23/specials/proulx-top.html
*Puhan, G., Boughton, K. A., & Kim, S. (2005). Evaluating the comparability of paper-
and-pencil computerized versions of a large-scale certification test. ETS Research
Report Series, 2(15). Retrieved from http://onlinelibrary.wiley.com/
doi/10.1002/j.2333-8504.2005.tb01998.x/pdf
Reimer, J. (2005, December 15). Total share: 30 Years of personal computer market
share figures. Retrieved from http://arstechnica.com/features/2005/12/total-share/8/
*Rockinson-Szapkiw, A. J., Courduff, J., Carter, K., & Bennett, D. (2013). Electronic
versus traditional print textbooks: A comparison study on the influence of university
students’ learning. Computers & Education, 63, 259–266.
Roth, S. P., Tuch, A. N., Mekler, E. D., Bargas-Avila, J. A., & Opwis, K. (2013).
Location matters, especially for non-salient features: An eye-tracking study on the
effects of web object placement on different types of websites. International Journal
of Human-Computer Studies, 71, 228–235.
Rouet, J.-F. (2006). The skills of document use: From text comprehension to web-based
learning. Mahwah, NJ: Lawrence Erlbaum.
Rouet, J.-F., & Britt, M. A. (2011). Relevance processing in multiple document com-
prehension. In: M. T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text rele-
vance and learning from text (pp. 19–52). Greenwich, CT: Information Age.
Sarroub, L., & Pearson, P. D. (1998). Two steps forward, three steps back: The stormy history of reading comprehension assessment. The Clearing House, 72, 97–105.
Shishkovskaya, J., Sokolova, E., & Chernaya, A. (2015). “Paperless” foreign lan-
guages teaching. Procedia: Social and Behavioral Sciences, 206, 232–235.
*Siegenthaler, E., Wurtz, P., Bergamin, P., & Groner, R. (2011). Comparing reading
processes on e-ink displays and print. Displays, 32, 268–273.
*Singer, L. M., & Alexander, P. A. (2017). Reading across mediums: Effects of reading
digital and print texts on comprehension and calibration. Journal of Experimental
Education, 85, 155–172.
Singer, L. M., Alexander, P. A., & Berkowitz, L. E. (2017). Effects of processing time
on comprehension and calibration in print and digital mediums. Manuscript submit-
ted for publication.
Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and tra-
ditional reviews. Educational Researcher, 15(9), 5–11.
Soe, K., Koki, S., & Chang, J. M. (2000). Effect of computer-assisted instruction (CAI)
on reading achievement: A meta-analysis. Honolulu, HI: Pacific Resources for
Education and Learning.
Spencer, C. (2006). Research on learners’ preferences for reading from a printed text
or from a computer screen. International Journal of E-Learning & Distance
Education, 21, 33–50.
Stadtler, M., & Bromme, R. (2007). Dealing with multiple documents on the WWW:
The role of metacognition in the formation of documents models. International
Journal of Computer-Supported Collaborative Learning, 2, 191–210.
Stakhnevich, J. (2002). Reading on the Web: Implications for ESL professionals. The
Reading Matrix, 2(2), 7–19.
Stephens, M. (2014). Beyond news: The future of journalism. New York, NY: Columbia
University Press.
*Stern, P., & Shalev, L. (2013). The role of sustained attention and display medium in
reading comprehension among adolescents with ADHD and without it. Research in
Developmental Disabilities, 34, 431–439.
*Stoop, J., Kreutzer, P., & Kircz, J. (2013). Reading and learning from screens versus
print: A study in changing habits: Part 1-reading long information rich texts. New
Library World, 114, 284–300.
Tanner, M. J. (2014). Digital vs. print: Reading comprehension and the future of the
book. iSchool Student Research Journal, 4(2), 6–13.
Tinker, M. A. (1958). Recent studies of eye movements in reading. Psychological
Bulletin, 55, 215.
Topping, K. (1997). Electronic literacy in school and home: A look into the future.
Reading Online, 1, 1–27.
Tyner, K. (2014). Literacy in a digital world: Teaching and learning in the age of
information. New York, NY: Routledge.
Underwood, G., Underwood, J. D., & Farrington-Flint, L. (2015). Learning and the
eGeneration. Chichester, England: John Wiley.
Usluel, Y. K. (2016). Social network usage. In Social Networking and Education
(pp. 213–222). Springer International Publishing.
*Verdi, M. P., Crooks, S. M., & White, D. R. (2014). Learning effects of print and
digital geographic maps. Journal of Research on Computing in Education, 35,
290–302.
Wästlund, E. (2007). Experimental studies of human-computer interaction: Working
memory and mental workload in complex cognition. Goteborg, Sweden: Gothenburg
University, Department of Psychology.
*Wästlund, E., Reinikka, H., Norlander, T., & Archer, T. (2005). Effects of VDT and
paper presentation on consumption and production of information: Psychological
and physiological factors. Computers in Human Behavior, 21, 377–394.
*Young, J. (2014). A study of print and computer-based reading to measure and com-
pare rates of comprehension and retention. New Library World, 115, 376–393.
*Zambarbieri, D., & Carniglia, E. (2012). Eye movement analysis of reading from
computer displays, eReaders, and printed books. Ophthalmic and Physiological
Optics, 32, 390–396.
Zickuhr, K., & Rainie, L. (2014, January 16). E-reading rises as device ownership
jumps. Retrieved from http://www.pewinternet.org/2014/01/16/e-reading-
rises-as-device-ownership-jumps/
Zickuhr, K., Rainie, L., Purcell, K., Madden, M., & Brenner, J. (2012). Younger
Americans’ reading and library habits. Washington, DC: Pew Research Centers
Internet & American Life Project.
Authors
LAUREN M. SINGER is a doctoral candidate in the Department of Human Development
and Quantitative Methodology at the University of Maryland in College Park, Maryland;
email: lsinger@umd.edu. Her research interests include the nature, context, and under-
lying processes of text-based learning.
PATRICIA A. ALEXANDER is the Jean Mullan Professor of Literacy and Distinguished
Scholar/Teacher in the Department of Human Development and Quantitative
Methodology at the University of Maryland in College Park, Maryland and a visiting
professor at the University of Auckland, New Zealand; email: palexand@umd.edu.
She has conducted notable research on the role of individual difference, strategic pro-
cessing, and interest in students’ learning.