INVITED COMMENTARY
Toward a Multifaceted Heuristic of Digital Reading to Inform Assessment, Research,
Practice, and Policy
Julie Coiro
University of Rhode Island, Kingston, USA
ABSTRACT
In this commentary, the author explores the tension between almost 30 years of work that has
embraced increasingly complex conceptions of digital reading and recent studies that risk
oversimplifying digital reading as a singular entity analogous with reading text on a screen. The
author begins by tracing a line of theoretical and empirical work that both informs and
complicates our understanding of digital literacy and, more specifically, digital reading. Then, a
heuristic is proposed to systematically organize, label, and define a multifaceted set of
increasingly complex terms, concepts, and practices that characterize the spectrum of digital
reading experiences. Research that informs this heuristic is used to illustrate how more precision
in defining digital reading can promote greater clarity across research methods and advance a
more systematic study of promising digital reading practices. Finally, the author discusses
implications for assessment, research, practice, and policy.
PLEASE CITE AS FOLLOWS:
Coiro, J. (2020, Feb. 20). Toward a multifaceted heuristic of digital reading to inform
assessment, research, practice, and policy. Reading Research Quarterly. Online first version
available at https://doi.org/10.1002/rrq.302
In 2003, as a budding researcher studying the nature of online reading comprehension, I
was convinced that the literacy community needed to expand its understanding of reading
comprehension to reflect the skills, strategies, and dispositions required to engage with and make
sense of information on the internet. Drawing on the well-articulated model of reading
comprehension outlined in the RAND Reading Study Group’s (RRSG; 2002) report, I made the
case for broadening our understanding of four elements—the text, the reader, the reading
activity, and the social context—to encompass both traditional comprehension practices (e.g.,
determining important ideas, making inferences, evaluating, synthesizing) and fundamentally
new reading practices prompted by the internet (see Coiro, 2003). More specifically, I argued,
the Internet provides opportunities for interacting with new text formats (e.g., hypertext and
interactive multiple media that require new thought processes); new reader elements (e.g., new
purposes or motivations, new types of background knowledge, high-level metacognitive skills);
and new activities (e.g., publishing multimedia projects, verifying credibility of images,
participating in online synchronous exchanges). Likewise, the Internet expands and influences the
sociocultural context in which a reader learns to read by providing collaborative opportunities for
sharing and responding to information across continents, cultures, and languages. (p. 459)
At the time, there were few empirical findings to support these claims. Consequently, I
pointed readers to illustrative examples of digital texts and online practices to demonstrate how
conventional understandings of the reader, the text, the activity (or task), and the context were
not always applicable in electronic and networked environments. Seventeen years later, in 2020,
a large body of theoretical and empirical work supports these claims, as I detail throughout this
commentary. Collectively, this work has the potential to increase the precision and rigor of
research around digital reading and clarify dimensions of online reading comprehension for
educators seeking to foster productive digital literacy practices in their classrooms.
Importantly, although there is growing evidence to support that variations in readers,
texts, tasks, and contexts indeed influence digital reading comprehension, there is far less
agreement about the terms used to define, describe, and compare digital reading practices within
and across studies. In fact, Singer and Alexander (2017b) pointed to the “lack of conceptual
clarity and specificity within the educational literature” (pp. 1009–1010) as justification for their
literature review examining how print and digital reading has been defined over the past 25
years. Indeed, their findings showed that only five of 36 empirical studies included a definition
of digital reading in any form, and only two of those 36 studies included an explicit definition of
digital reading.
Similarly, there has been little consistency in how different scholars operationalize digital
reading. For example, to conduct their systematic review, Singer and Alexander (2017b) defined
digital reading as “reading involving hypermedia technology” (p. 1011). Elsewhere, digital
reading has been conceptualized as reading on a digital screen (Baron, 2017; see also Tanner,
2014), which in some cases has been operationalized as a unitary construct that involves reading
unpaginated PDF texts on a computer screen (Mangen, Walgermo, & Brønnick, 2013). Other
scholars (Salmerón, Strømsø, Kammerer, Stadtler, & van den Broek, 2018) have conceptualized
digital reading as a multidimensional construct involving both the application of navigation,
integration, and critical evaluation processes and how these processes interact with individual
differences, task differences, and variations of digital reading interfaces. Still others have
situated definitions of digital reading along a spectrum between the endpoints of reading single
digital texts and reading in highly interactive environments; “between these endpoints are
conceptualizations of reading in digital environments such as the reading of multiple texts in
traditional formats, non-interactive activities as reading for entertainment or for information-
gathering on the internet, and so on” (Barzillai, Thomson, Schroeder, & van den Broek, 2018, p.
vii).
In this commentary, I posit that although there are multiple ways to view the rapid
changes in literacy emerging from new technologies (Coiro, Knobel, Lankshear, & Leu, 2008;
Labbo & Reinking, 1999), it is imperative that we work to establish both clear and ecologically
valid descriptions about what the term digital reading encompasses in order to better leverage
research findings in ways that directly impact policy, practice, and future research. More
specifically, I aim to bring to light a number of tensions and research trends emerging from a
rapidly shifting and continually growing landscape of digital literacy practices. First, I highlight
the work of numerous scholars who have recommended that we not only recognize but also
embrace the complexity of digital literacy in all of its diverse forms. I also clarify my own use of
the terms digital literacy and digital reading for this commentary. Next, I provide a historical
glimpse into the work of scholars who, for almost 30 years, have grappled with how to
characterize the changing nature of reading in the larger context of digital literacy practices.
Then, I introduce several empirical studies to provide evidence that contemporary researchers
have continued to overemphasize the medium of text delivery while ignoring variations in other
factors also likely to influence comprehension performance in digital spaces. Finally, to point the
way forward, I propose a multifaceted heuristic of four increasingly complex factors to
characterize the diverse nature of reading comprehension in digital spaces (see Figure 1), and
synthesize findings from a considerably large body of scholarship in line with these ideas.
INSERT FIGURE 1 ABOUT HERE
Importantly, I envision the terms and related practices within this heuristic not as a
definitive set of reading-related concepts but as a starting place from which to promote rich
public conversation about what we currently know about digital literacy practices and how that
knowledge can be used to characterize, measure, teach, and support comprehension across a
range of digital reading contexts. Overall, this commentary is a call to more explicitly define the
range of experiences that may be conceived of as digital reading or risk losing insights gained
from almost three decades of research.
What Is Digital Literacy?
In 2018, Education Week issued Special Report: The Changing Face of Literacy. The purpose of
the report was to spotlight scholar and practitioner perspectives of what digital literacy means for
schools and what literacy skills learners need for success in the workplace. In the first featured
article, titled “What Is Digital Literacy?,” Heitin (2016) pointed out that “while the word
‘literacy’ alone generally refers to reading and writing skills, when you tack on the word ‘digital’
before it, the term encompasses much, much more” (p. 2). Then, to support her claim, she
clustered views of several digital literacy scholars into one of three dimensions: finding and
consuming digital content, creating digital content, and communicating or sharing digital
content.
With respect to the first dimension, finding and consuming digital content, Heitin (2016)
highlighted work by Donald Leu, who has distinguished the practices that readers use to engage
with static text (on paper or on the screen) from more interactive and potentially challenging
digital reading practices, such as querying search engines, navigating hyperlinks, and negotiating
dynamic images (see also Leu, Kinzer, Coiro, Castek, & Henry, 2013, 2018). Similarly, building
on variations in how different readers interpret the same text (see Rosenblatt, 1978), Troy Hicks
has elaborated on the challenges posed by digital texts with multiple pathways that are “designed
so that no two readers experience [those texts] in the exact same way” (Heitin, 2016, p. 2).
Heitin (2016) highlighted a second group of scholars, including Renee Hobbs, who have
viewed digital literacy as moving beyond how learners consume information to also encompass
the literacy practices that readers use to turn their knowledge into action through critical media
literacy and collaborative content creation (see also Hobbs, 2017). According to this view, digital
reading and digital authorship may be conceived as reciprocal digital literacy practices (Coiro &
Hobbs, 2017) in much the same way that print-based reading and writing experiences involve
interconnected acts of composing (Tierney & Pearson, 1983).
A third dimension of digital literacy focuses on how individuals use technologies to
communicate or share content, which may or may not be connected to digital reading. According
to Heitin (2016), Spires and Bartlett, for example, suggested that “Web 2.0 tools are social,
participatory, collaborative, easy to use, and are facilitative in creating online communities” (p.
2; see also Spires & Bartlett, 2012). More recently, Spires, Himes, Paul, and Kerkhoff (2019)
extended their understanding of digital literacy practices as ways of sharing to encompass the
cosmopolitan literacies useful for digital, cross-cultural exchanges around global themes such as
poverty and climate change. Still other scholars in the Education Week (“Special Report,” 2018)
spotlight have described digital communication practices more aptly with terms such as digital
coding, digital citizenship, and digital storytelling.
At least two important ideas emerged from my reflection about the content and structure
of Heitin’s (2016) curated attempts to define digital literacy. First, it is clear that there are
multiple, diverse, and overlapping conceptions of digital literacy among literacy scholars and
educational practitioners; each conception is worthy of focused study and further discussion.
Second, it is important to note how Heitin clustered views of digital literacy into three
dimensions; this helps readers not only characterize what is similar about each scholar’s unique
perspective but also concretely differentiate one dimension of digital literacy from another. As
Heitin explained, “the term [digital literacy] is so broad that some experts even stay away from
it, preferring to speak more specifically about particular skills at the intersection of technology
and literacy” (p. 2). It is the particularity of this broad set of competencies, especially those that
reflect digital reading, that I believe needs more attention.
In this commentary, I use the terms digital literacy and digital reading. Digital literacy is
used to conceptualize digital reading in the broader framework of reading as literacy that
involves a process of integration and construction situated in social and cultural practices (see
Frankel, Becker, Rowe, & Pearson, 2016). A framework of digital literacy helps emphasize that
digital reading is enacted in particular contexts, while also reflecting how many literacy scholars
have situated their own research involving digital technologies. Later, I use the term digital
reading to draw attention to studies particularly focused on reading comprehension and the
varied dimensions likely to influence comprehension performance in digital spaces. A
framework of digital reading sets the context in which to propose a multifaceted heuristic that
directly reflects these ideas. A focus on digital reading also helps explain why other digital
literacy practices (e.g., digital writing and composition) do not receive equal amounts of
attention in this commentary. Finally, in line with others who have acknowledged a spectrum of
reading practices in digital environments, I define digital reading as a range of multifaceted
meaning-making experiences whereby readers engage with multiple texts for particular purposes
that are situated in diverse contexts. Each experience can then be operationalized more
specifically to characterize how one digital reading experience is similar to and different from
another.
Thus, in this commentary, I put forth two claims. First, I make the case that researchers in
the literacy community need to engage in efforts to more specifically define and operationalize
particular terms that most align with varied views of digital reading, especially those that involve
sets of multifaceted practices linked to consuming, making sense of, creating, and using digital
content. Second, I argue that more attention needs to be paid to clearly articulating the complex variations in readers, texts, activities, and contexts that research has suggested are likely to influence digital reading comprehension performance. Only then can we expect to more
validly capture and document changes in elements that influence or serve as outcomes of reading
in digital spaces.
A Shifting Landscape of Digital Literacy Practices and Perspectives
From my perspective, much of the literacy community’s focus on digital literacy began with
ideas put forth in the first volume of the Handbook of Literacy and Technology: Transformations
in a Post-Typographic World (Reinking, McKenna, Labbo, & Kieffer, 1998). This book’s
subtitle captures the essence of its premise, which was to advance thinking about how to
characterize reading and writing in a world in which printed texts are no longer dominant. In the
Introduction, Reinking (1998) proposed that “the transformations of literacy that are beginning to
become evident are major threads running through the fabric of daily life” (p. x), including facets
of law, media, government and international relations, economics, and communication. Fueled
by national efforts to improve literacy in elementary and secondary schools (see Alvermann &
Guthrie, 1993), authors in this handbook were driven by a concern for the educational
implications of these literacy transformations, as well as their vision “that digital forms of
reading and writing represent a powerful stimulus for transforming educational structures and
practices” (Reinking, 1998, p. xi).
Of critical importance in this handbook were Reinking’s (1998) careful efforts to
introduce and deconstruct, at length, key terms such as literacy, technological, transformations,
and post-typographic before launching into chapters that provide evidence of at least six ways
that literacy might be transformed by new digital technologies: technological transformations
affecting texts, readers and writers, schools and classrooms, instruction, society, and literacy
research. Reinking’s careful deconstruction of these terms enabled the literacy community to
study changes in these concepts moving forward. He also put forth four conclusions across the
20 chapters that sound surprisingly relevant even in today’s context:
1. “Electronic and printed texts are qualitatively different” (p. xxiv).
2. “There is an important sociocultural and historical dimension to considering the
relation between technology and literacy” (p. xxv).
3. “The new technologies of electronic reading and writing are slowly but steadily
transforming classrooms, schools, and instruction” (p. xxv).
4. “There is a dearth of research and scholarship available to understand and guide
technological transformations of literacy” (p. xxvii).
Although these ideas were compelling in 1998, I find it frightfully telling that the very same
ideas merit attention more than 20 years later, as rapid technological transformations continue to
challenge our ability to define and understand the nature of literacy and the implications of these
changes for policy, practice, and research. Of course, despite these challenges, the literacy
community has tried to keep pace with efforts that inform work in these arenas.
For example, in their report, Reading for Understanding: Towards an R&D Program in
Reading Comprehension, members of the RRSG (2002) recognized that “we now live in a
society that is experiencing an explosion of alternative texts” (p. xv). Later in their report, they
explained, “electronic texts that incorporate hyperlinks and hypermedia introduce some
complications in defining comprehension because they require skills and abilities beyond those
required for the comprehension of conventional, linear print” (p. 14). Unfortunately, the report
included little beyond these statements to more clearly articulate the nature of these
complications and related skills and abilities. Nevertheless, I have argued (see Coiro, 2003) that
the report’s developmental heuristic of reading comprehension provides a solid framework from
which to organize, expand, and characterize variations in literacy introduced by continually
transforming technologies.
This heuristic includes three elements:
• The reader who is doing the comprehending
• The text that is to be comprehended
• The activity in which comprehension is a part. (RRSG, 2002, p. 11)
Moreover, these three elements occur within the sociocultural context of the reader’s classroom,
home, and neighborhood, and they help the reader interpret information and create personal
meaning. I still believe, as I wrote in 2003, that we need to continue to broaden our conceptions
of each element in the RRSG heuristic because
some tasks on the Internet ask readers to extend their use of traditional comprehension skills to
new contexts for learning, while others, like electronic searching and tele-collaborative inquiry
projects, demand fundamentally different sets of new literacies not currently covered in most
language arts curriculums. (Coiro, 2003, p. 463)
Empirical evidence provided later in this commentary supports these claims.
A few years after the RAND report, the second volume of the handbook was published,
aptly titled International Handbook of Literacy and Technology (McKenna, Labbo, Kieffer, &
Reinking, 2006). This volume further expanded the conversation around technology and literacy
to a broader, and more international, circle of authors and issues. Of note, although not explicitly
intended as such, the organization of the book’s table of contents appears to reflect particularized
applications of literacy and technology in line with the variations in readers, texts, activities, and
contexts proposed in the RRSG’s (2002) report. That is, sections of the handbook were devoted
to (a) unique digital applications with specific populations of readers; (b) digital dimensions of
literacy activities and texts designed to foster emergent literacy, comprehension, fluency,
spelling, vocabulary, writing, and family literacy; (c) digital practices designed for diverse
purposes (e.g., teacher education, professional development, student engagement); and (d)
unique applications of literacy practices within diverse contexts such as digital software and the
internet.
In his discussion of these chapters in the Introduction of the handbook, McKenna (2006)
reiterated the continued tensions between print and digital environments and described “an
uneasy coexistence” (p. xvi) of print and digital reading contexts. Moreover, McKenna
concluded with an important reminder that we should continue to take heed as we venture
forward:
The most prudent view is not to view these transformations as transitions from one static state to
another, but to perceive them as an unending evolution. We must learn that where literacy and
technology converge, our principal concern should be the journey, not the destination. (p. xvii)
To that end, the heuristic of digital literacy experiences that I propose later in this commentary is
intended to open the door to a common language that may help guide our journey while
remaining flexible to changes that will continue to redefine reading comprehension in a digital
world.
Several other large and occasionally overlapping bodies of work have informed, and are
likely to continue to inform, our understanding of digital literacy. Three areas of scholarly work
are the tradition of New Literacy Studies, a new literacies perspective of online reading
comprehension, and models of multiple-document comprehension. Efforts put forth by
professional organizations and assessment practices also inform our thinking. Below, I briefly
synthesize changing conceptions of digital literacy in line with each area.
New Literacy Studies
Rooted in sociocultural traditions (Barton, Hamilton, & Ivanič, 2000; Street, 2003) of everyday,
print-based literacy practices and New Literacy Studies (Gee, 1990), the New London Group
(1996) shared their vision of a new approach to literacy pedagogy designed to create equal life
chances for all students to benefit from learning. This vision is encompassed by a pedagogy of
multiliteracies that seeks to leverage “the multiplicity of communications channels and media,
and the increasing saliency of cultural and linguistic diversity” (p. 63) in society. From this
perspective, literacy can and should serve to empower learners as social designers who actively
question power structures and design more equitable and fulfilling aspects of work, community,
personal life, and learning. Following the publication of this manifesto, the New London Group
joined with other authors to issue a call for broadened understandings of language-based texts to
encompass multiple modes of meaning making that “differ according to culture and context, and
have specific cognitive, cultural, and social effects” (Cope & Kalantzis, 2000a, p. 5). In the
collection of essays titled Multiliteracies: Literacy Learning and the Design of Social Futures
(Cope & Kalantzis, 2000b), various scholars discussed, among other topics, the effects of
technological change on conceptions of literacy, teaching, and the role of schools.
More recently, Mills (2010) documented the significant digital shift in New Literacy
Studies, in what she labeled “the ‘digital turn’—that is, the increased attention to new literacy
practices in digital environments across a variety of social contexts, such as workplaces and
educational, economic, and recreational sites” (pp. 246–247). Mills’s review uncovered many
promising practices as she synthesized a decade of largely ethnographic studies investigating a
wide range of digital literacy practices across in-school, after-school, and out-of-school contexts,
as well as efforts to connect literacy practices across home and school settings. Mills also
highlighted difficulties in defining and “limiting what constitutes ‘literacies’ in a changing
communications environment” (p. 250) while pointing to the increasing role of digital
technologies as necessitating a broader conception of texts across multiple modes, “opening up a
wider range of meaning potential” (p. 251). Among her recommendations was a call to reform
conventional, print-based performance indicators by defining and disseminating innovative
models of digital and multimodal reading for these new times. In addition, she argued for an
increase in design-based research methodologies that promote formative, flexible, and
contextually situated conceptions of what counts as digital literacy.
Over the years, Lankshear and Knobel (2003, 2006, 2007, 2011) have also kept
researchers and practitioners abreast of the rapidly evolving new literacies and social practices
that accompany changes in technology from the lens of New Literacy Studies. Mills (2016) has
continued to push the field to consider new lenses (e.g., social, critical, multimodal, spatial,
material, and sensory theories) that further broaden our understanding of digital literacy
practices. Finally, Serafini and Gee’s (2017) publication brought together literacy scholars to
celebrate the 20th anniversary of the New London Group’s (1996) manifesto. Serafini and Gee’s
edited collection skillfully synthesizes the varied, complex, and still evolving ways that literacy
scholars imagine the future of multiliteracies pedagogy and its implications for literacy
education. Surely, these variations and complexities should be represented in our conceptions of
digital literacy moving forward.
A New Literacies Perspective of Online Reading Comprehension
Scholarship in New Literacy Studies is not to be confused with the work of another community of researchers (myself included) who argue that “new technologies such as the Internet and other
ICTs [information and communications technologies] require additional social practices, skills,
strategies, and dispositions to take full advantage of the affordances each contains” (Leu et al.,
2013, p. 1159). Although Leu and colleagues (2013) supported work focused on how technology
impacts everyday and out-of-school literacies, they also claimed that not enough attention has
been paid to understanding how individuals develop and demonstrate the literacies needed to
read and use online informational texts in formal school and work settings. In that context,
research grounded in a new literacies perspective of online reading comprehension defines online
reading comprehension as “a self-directed process of constructing texts and knowledge while
engaged in several online reading practices: identifying important problems, locating
information, critically evaluating information, synthesizing information, and communicating
information” (Leu et al., 2013, p. 1163). Similar to print-based reading experiences,
comprehension in internet-based reading contexts can take place individually but often appears
to be enhanced when it takes place collaboratively.
Early quantitative research grounded in this more social constructivist definition of new
literacies suggests that both print-based and digital reading comprehension skills make
significant and independent contributions to online reading performance across different contexts
(Coiro, 2011). Elsewhere, Coiro and Dobler (2007) found that different digital reading purposes
and contexts (e.g., reading a list of search engine results vs. reading within a multilevel website)
appear to elicit different cognitive reading processes and varied sources of prior knowledge that
will be important to operationalize in future studies. Similarly, variations in digital reading
contexts, such as working individually versus working with a partner in face-to-face or remote
conditions, introduce additional features that appear to influence the quality of comprehension
and/or decision making during online inquiry (Coiro, Castek, & Guzniczak, 2011; Kiili, Coiro, &
Räikkönen, 2019). For example,
the lack of familiarity with real-time collaborative environments…and working and talking
together remotely in a digital platform with someone you have not met previously may introduce
additional challenges beyond working and talking together with someone new in more familiar
face-to-face situations. (Coiro et al., 2019, p. 287)
Other studies of digital reading comprehension have suggested that the processes used by
skilled readers to comprehend online text are both similar to and more complex than what
previous research has suggested is required to comprehend offline informational text (see, e.g.,
Afflerbach & Cho, 2009; Kingsley & Tancock, 2014). “The accumulation of many small and
large differences of frequency, degree, and speed has indeed produced a qualitative change and a
new kind of cognitive challenge for comprehending online” (Hartman, Morsink & Zheng, 2010,
p. 132). Still other theoretical collections of new literacies (e.g., Baker, 2008) and more practice-
based classroom applications of new literacies (e.g., Dobler & Eagleton, 2015; Moss & Lapp,
2009, 2010) have provided us with a solid grounding from which to consider direct implications
of new literacies for teaching, learning, assessment, and professional development. Across this
body of work, as is the case with print-based reading comprehension (RRSG, 2002), variations in
readers, texts, activities, and contexts continue to reveal themselves as playing an important role
in how meaning is constructed in digital spaces.
Models of Multiple-Document Comprehension
A third area of scholarship that informs our understanding of digital literacy, and digital reading
in particular, is that of multiple-document comprehension (Goldman, Lawless, & Manning,
2013; Rouet & Britt, 2011). Scholars in this arena have recognized the diverse reading practices
required to interpret a variety of task purposes, select relevant and reliable digital sources,
analyze and integrate information within and across multiple print and digital documents, and
then apply this information to achieve specific task goals.
Yet, even within this relatively narrow scope of consuming and using information as a set
of digital reading practices, researchers have grappled with defining the complexities of
multiple-source use, or the ability to select, process, and use information from multiple
information sources (see Braasch, Bråten, & McCrudden, 2018). As such, Braasch and
colleagues (2018) made the case for presenting research involving multiple-source use in a
manner that accurately reflects conceptions of sourcing that vary from general to more specific.
Further, they were hopeful that efforts toward conceptual clarity will help promote a common
language from which to systematically study and draw conclusions across different lines of
research about “when, how, and why readers use multiple sources” (p. 5). Informed and inspired
by this agenda, I propose that parallel efforts should be made among those seeking to understand
the nuanced complexities of digital reading, in all of its varied forms and platforms, in order to
more systematically study and draw conclusions about the impact of technology use on reading
comprehension.
Digital Reading and Professional Organizations
In line with developments in literacy theory and research, professional organizations also have
recognized the increasingly varied and digital nature of reading. Their perspectives are important
(and hence belong in a review of theoretical and empirical scholarship about digital reading)
because professional organizations will be the catalysts in transforming our research findings
into practices, outreach, and professional development that will bring scholarship into formal and
informal educational settings.
Both the International Literacy Association (International Reading Association, 2002)
and the National Council of Teachers of English (2019) have recognized the multiple, dynamic,
and malleable literacies required to locate, manage, analyze, critique, evaluate, synthesize,
curate, collaborate, design, create, share, and publish texts in digital spaces for the purposes of
solving problems and strengthening independent thought. Both organizations also have
emphasized the importance of working collaboratively and advocating equitable access as part of
digital reading and learning. Similarly, the American Library Association (2013) has continued
to expand its definition of both cognitive and technical digital literacy skills as part of efforts
to promote teaching and learning in a digital age. Most recently, the American Library
Association’s sister organization, the American Association of School Librarians (AASL, 2018),
issued its “AASL Standards Framework for Learners,” which characterizes digital literacy skills
across a matrix of six integrated frameworks (inquire, include, collaborate, curate, explore, and
engage) and four domains (think, create, share, and grow). Across these elements, the AASL
authors proposed that “reading is the core of personal and academic competency” (p. 3). With
developments like these being issued by several professional organizations, today’s students and
teachers deserve more clarity in how to articulate the varied dimensions of digital reading and
how each may promote or complicate learning and problem solving in a digital age.
Digital Reading and Large-Scale Assessments
Finally, frameworks and items on national and international assessments have begun to reflect
more complex and particularized conceptions of reading in digital spaces. In 2017, the National
Assessment of Educational Progress (NAEP) reading framework expanded its definition of reading to include digital elements as part of “using meaning as appropriate to type
of text, purpose, and situation” (National Assessment Governing Board [NAGB], 2017, p. 2).
This change made it possible to operationalize the framework’s definition of reading more fully
through the introduction of digital elements that were not possible in the paper assessment.
Between 2016 and 2018, NAEP piloted a series of digitally based reading assessments with a small sample of students and formally administered digital assessments in
civics, geography, U.S. history, and technology and engineering literacy (National Center for
Education Statistics, 2019). (Sample questions and item maps may be viewed at
https://nces.ed.gov/nationsreportcard/about/booklets.aspx.) Researchers have begun to notice
how digital literacy is likely to play a role in performance on these assessments of diverse
disciplinary knowledge (see, e.g., Morsink, 2019). Large-scale implementation of digital reading
assessments that incorporate dynamic texts, videos, animation, and innovative item types and
formats occurred in 2019 (National Center for Education Statistics, 2019). A clear articulation of
features that characterize varied digital texts, activities, and purposes and how these features
interact in different assessment situations and with different readers will be critical to the success
of future large-scale efforts like these.
Notably, OECD’s (2015) Programme for International Student Assessment of digital
reading provides a tangible starting place for how to operationalize digital texts and digital
reading activities. OECD’s framework “treats digital and print reading as a single domain, while
acknowledging the differences between reading on paper and reading on digital platforms” (p.
83). Thus, assessment items are designed to reflect differences in texts and tasks in print and
digital reading mediums. In addition, OECD’s digital framework puts less emphasis on narrative
texts while recognizing the prevalence of informational, personal communication, and
“transaction texts” (p. 83) designed to achieve a specific purpose in digital reading spaces.
Further, the OECD (2015) framework authors explained how digital texts introduce
additional complexities to a number of digital reading activities. These complexities make it
harder for learners to perform at least three kinds of reading tasks: access and retrieval tasks, which require searching skills in more abstract spaces; integrate and interpret tasks, which require more reliance on short-term memory to read simultaneously across multiple documents; and reflection and evaluation tasks, which, because digital content passes through fewer filters, demand critical reading skills to establish the credibility of content needed to solve even simple reading tasks. Here, we begin to
see complex overlaps among texts, activities, and reading purposes that have implications for
comprehension.
Finally, whereas current NAEP assessments intentionally minimize reading tasks that
require navigation skills to maintain alignment with NAEP’s current definition of reading, OECD’s
(2015) report includes a chapter devoted to findings that highlight the importance of navigation
as part of online reading. The OECD authors argued that
knowledge of some techniques of navigation and some navigation tools (e.g. hyperlinks, tabs,
menus, the ‘back’ button) are part of being literate in the digital medium. Such skills and
knowledge should be regarded as ICT skills that are measured, together with the mastery of
reading processes, in the assessment of digital reading. (p. 84)
Other assessments are designed to measure naturally reciprocal digital literacy processes
in the context of application tasks that integrate the consumption (reading), production
(writing/creation), and communication (sharing, presenting, and publishing) of multimedia texts
in digital platforms. This distinction is important for developing assessments that measure not
only comprehension but also what one can do with the fruits of one’s comprehension. Items on
Learning.com’s (n.d.) Digital Literacy Assessment, for example, are designed to assess a
complex range of learning targets aligned with the International Society for Technology in
Education’s Standards for Students, which include students’ ability as knowledge constructors
able to plan, explore, locate, and curate meaningful connections between ideas; innovative
designers able to test and refine their research process while creating innovative artifacts and
solving authentic problems; creative communicators able to repurpose, remix, create, publish, or
present ideas; and global collaborators able to connect and engage with others constructively and
collaboratively while working toward a common goal.
A final set of assessments indicating that the definition of digital reading is expanding in diverse and particularized ways comprises those that highlight collaboration and social deliberation as
part of meaning construction in digital spaces (for a review, see Coiro, Sparks, & Kulikowich,
2018). These assessments emphasize cognitive and social skills required for successful
comprehension (e.g., Sabatini, O’Reilly, & Doorey, 2018) and collaborative problem solving
(e.g., Griffin & Care, 2015; OECD, 2017). Some assessments are designed to tap integrated
performances spanning multiple digital literacy competencies, such as locating, evaluating, and
synthesizing information across multiple documents and writing a source-based argumentative
essay (see Coiro et al., 2019). Others involve the sequencing or decomposition of digital reading
tasks (e.g., identifying multiple perspectives, judging source reliability), which permits
estimations of proficiency with or relations among key reading components such as literal
comprehension, inference generation, summarization, or reasoning about text information (e.g.,
Goldman et al., 2019; Sabatini, O’Reilly, Halderman, & Bruce, 2014).
Both types of assessments provide psychometrically strong exemplars of how to
operationalize multiple digital reading competencies and related learning targets as part of
collaborative problem solving. In addition, both types of assessments are ecologically valid
reminders that digital reading can involve multiple and varied numbers of readers and diverse
types of texts, activities, purposes, and contexts. Clearly, across the current landscape of
scholarly work, a vague and narrow definition of digital reading as a singular practice that one
individual engages in on one type of digital device will not suffice.
Tensions in the Study of Digital Reading
In responding to the call for more research that embraces the complexity of digital reading, one might assume that there has been little scholarly work to inform our understanding of digital reading
and key indicators likely to influence comprehension. Yet, this is not the case, as I illustrate in
the remainder of this commentary. Indeed, almost 30 years ago, in an extensive and critical review of empirical studies published between 1977 and 1991, Dillon (1992) highlighted the challenges of determining how to validly measure comprehension. Even then, however, Dillon identified more than 18 areas for classifying issues related to the precise nature and extent of differences between reading from paper and reading from screens; many related to
differences in text, activity, reader, or context. These included possible differences in outcomes
such as speed, accuracy, fatigue, comprehension, and preference, as well as process differences
related to eye movements, manipulation (i.e., turning pages, browsing contents of a document),
and navigation. Even as hypertext features in electronic texts were just beginning to emerge,
Dillon pointed out the shortcomings of research that limited and distorted reading by controlling
so many variables that the resulting “task bears little resemblance to the activities most of us
routinely perform as ‘reading’” (p. 1322).
Despite the prevalence of scholarship highlighting the complexity of digital reading, and
recommendations for studies involving more authentic digital reading practices, researchers have
continued to limit large empirical studies to those that emphasized the medium of text delivery
(print or digital) rather than moving toward the hard work of conceptualizing and measuring the
wide range of outcome and process-based indicators likely to impact comprehension
performance in a digital world. For example, in one year alone, amid the explosion of hypertext
and internet use in school (Rainie, 2005), three large meta-analyses were limited to studies with
controlled experiments comparing performance on narrowly defined reading activities (e.g.,
standardized reading tasks, surveys, adaptive testing situations) administered on paper or on-
screen (see Kingston, 2008; Noyes & Garland, 2008; Wang, Jiao, Young, Brooks, & Olson,
2008). Of note, findings across all three reviews showed that differences in test format and
content (texts and activities) rather than differences in the medium itself predicted reading
outcomes. Moreover, the authors concluded with recommendations to explore additional
potential moderators in order to compare comprehension across different reading contexts.
Unfortunately, almost a decade after these meta-analyses were published, more
contemporary researchers still continued to narrow the scope of their reviews to studies in which
readers engage with comparable texts both on paper and on a digital screen rather than to
systematically review studies exploring the complexities of readers engaging with diverse digital
texts and tasks. For example, in their review of 25 years of work exploring digital reading,
Singer and Alexander (2017b) excluded any study of digital reading that did not measure
comprehension performance in both print and digital texts. In another meta-analysis exploring
effects of reading media on reading comprehension in studies published between 2000 and 2017,
Delgado, Vargas, Ackerman, and Salmerón (2018) confined their search procedures to studies
comparing the reading of comparable texts on paper and digital devices, or “texts displayed on
digital screens, including computers, tablets, mobile phones, and e-readers” (p. 26). Delgado
and colleagues chose to exclude specific features of digital environments such as hyperlinks and
web navigation so reading materials “were comparable across media in terms of text content,
structure, and presence of images” (p. 26).
A third meta-analysis of 17 studies dating from 2000–2016 was also limited to studies
that compared differences between reading on screen and reading on paper in the same study
(Kong, Seo, & Zhai, 2018). Other widely cited studies were confined to comparing outcomes
after reading two texts in print to reading the same two texts on a computer screen as PDFs
(Mangen et al., 2013) or to comparing differences in comprehension when students read both
digital and print versions of newspaper articles and book excerpts on a researcher-defined topic
(Singer & Alexander, 2017a). Like other researchers, to control for differences across the texts,
Singer and Alexander (2017a) employed a very narrow definition of digital reading, limiting
their selection of digital texts to those that were “fully available to readers” (p. 157; i.e., did not
include hyperlinks or require scrolling).
Findings from these studies point to some advantages in comprehension when reading
paragraphs of static text on paper as compared with reading comparable text on a computer or
other type of digital device. Collectively, these studies provided important information about
print- and screen-based differences when reading static paragraphs of text. Yet, large-scale
efforts that simplify the challenges inherent in digital reading and response impede our
understanding of other authentic digital reading practices moving forward (Seaboyer & Barnett,
2019; Wolf & Barzillai, 2009). Future research efforts must now focus on naming, defining,
categorizing, and researching other indicators beyond the medium of text delivery that are likely
to influence comprehension processes and outcome measures in digital reading contexts.
Patterns Emerging Across Studies of Digital Reading
As literacy scholars continue to study the nature of digital reading, at least three important ideas
can inform the work that lies ahead. First, numerous studies have confirmed that digital reading
involves complex and overlapping comprehension processes, such as navigation, evaluation, and
integration, that are influenced by individual differences in competence and motivation, the
design of digital reading interfaces, and differences in task and purpose (e.g., Cho, Woodward,
Li, & Barlow, 2017; Coiro & Dobler, 2007; Goldman, Braasch, Wiley, Graesser, &
Brodowinska, 2012; Kiili et al., 2019; Salmerón et al., 2018). In some ways, these processes are
similar to those of print-based reading, but we can no longer ignore new digital reading
competencies that need to be clearly defined. Wolf (2010), who highlighted the extraordinary
range of processes involved in traditional paper-based reading, advised that to leverage the
potential of digital text, we must now “bring our best thought and research to preserving what is
most precious about the present reading brain, as we add the critically important new capacities
of its next iteration” (p. 40). In fact, “the medium itself may provide us with new ways of
teaching and encouraging young readers to be purposeful, critical, and analytical about the
information they encounter” (Wolf & Barzillai, 2009, p. 36).
Second, we know that reading competently across digital spaces requires the ability to
effectively move back and forth across multiple media, modes, purposes, and contexts. That is,
digital reading demands skills in using and producing media, or what is known as multimediating
(Doneman, 1997; see also Lankshear & Knobel, 2003), in addition to what Wolf (2018) termed biliteracy, or the ability to shift between reading for information that involves browsing,
linking, and scanning with efficiency and the deep, reflective reading that happens when readers
slow down and think more critically. Seaboyer and Barnett (2019) highlighted the importance of
fostering the coexistence of expertise for both reading purposes. Ultimately, our conception of
digital reading should encompass both cognitive and affective purposes for reading that merge
social and academic settings (Moje, Dillon, & O’Brien, 2000; O’Brien & Bauer, 2005).
Third, Alexander and the Disciplined Reading and Learning Research Laboratory (2012)
reminded us that reading competence in the 21st century is multidimensional, developmental,
and goal directed. Moving forward, it is imperative that we blend diverse perspectives of reading
to rightfully acknowledge “the expanded number of contextual nuances, knowledge sources, and
interactive elements observed during each unique online reading experience” (Coiro, 2015, p.
58). Thus, it makes sense to embrace digital reading in all of its complexities, finding ways to
emphasize attributes that remain constant while also allowing for flexibility in how these
attributes are combined and reimagined as one digital reading practice transforms into others.
These efforts will enable the literacy community to build on the transformative nature of reading
recognized more than 20 years ago (Reinking, 1998) while attempting to integrate the dramatic
changes likely to emerge in the next 20 years.
Amid the complexity, an important challenge becomes what to focus on next. Reflecting
on her own review of research involving print and digital reading (see Singer & Alexander,
2017b), Alexander (cited in Sawchuk, 2017) suggested that “it’s not so much a question of a
‘horse race’ between reading in print or reading digitally that needs exploration....Rather,
knowing ‘when it matters, for whom, and under what conditions is the question that constantly
needs to be examined, again and again’” (para. 5). I agree with Alexander’s proposition.
However, I believe that a more pressing question needs to be answered first: How should
we define digital reading in order to validly and reliably determine if and when it matters and for
whom? Although there is substantial evidence to support the notion that variations in readers,
texts, activities, and contexts may influence reading comprehension, there is far less agreement
about the terms used to define and describe digital reading across studies, agreement that would allow us to capitalize on research findings more systematically and achieve clarity on the relative constraints and
affordances of various media that readers may encounter in different studies.
Cuban (2018) warned of the dangers of skipping ahead to look at teaching or learning
outcomes while using digital technologies before coming to agreement about what is actually
being studied; this common agreement is crucial in order to implement treatments and test with
fidelity. I propose that the inconsistent results emerging from contemporary studies of how
digital reading influences comprehension are partly due to the unspecified variations in reader,
text, activity, and context. Consequently, we must decide how to flexibly operationalize digital
reading in ways that carry over to multiple studies before we can reliably examine how digital
reading practices impact comprehension and learning. Common constructs in any field enable
researchers, educators, and policymakers to capitalize on research findings in more systematic
ways.
A Multifaceted Heuristic to Characterize the Complexity of Digital
Reading
Lauer (2009) made the case that “defining terms is a situated activity that involves determining
the collective interests and values of the community for which the definition matters” (p. 225).
For this reason, I propose to delineate important terms associated with digital reading as part of a
multifaceted heuristic grounded in the reading community’s collective understanding that
comprehension in any form or medium involves one or more varied texts, activities, and readers
in the context of a particular location and for a particular purpose (RRSG, 2002). Despite the
changes to reading since the RRSG’s four-part heuristic was first conceived, it still provides a solid
framework from which to organize and flexibly characterize the complex nature of reading
comprehension in digital spaces.
In the sections that follow, I aim to summarize contemporary scholarship that has
illuminated the diversity of examples within and across elements in a heuristic of digital reading
(see Figure 1), while occasionally interjecting evidence-based reminders of the iterative
interrelations among the four elements. In the center of the figure, the variety of texts and
reading activities is intentionally situated along a circular sequence of examples that increase in
complexity as readers move within and across digital mediums. Four categories of reader
attributes are used to conceptualize how readers may vary from one another, and four sets of
contextual elements serve to characterize the broad variation of situations in which readers may
engage with texts and activities as part of any reading experience. Although it is impossible to
capture every digital reading experience in one figure, I invite you to appreciate how this four-
part heuristic may indeed provide a starting place for flexibly, but systematically,
conceptualizing commonalities and distinctions in how digital reading may vary from one
experience to the next.
Text
With respect to textual variations that readers may encounter in today’s world, most texts may be
classified as literary, informational (NAGB, 2017), or hybrid, which combines literary and informational text structures (Bintz & Ciecierski, 2017). In the last decade, increasing focus has been placed on comprehending persuasive texts, which put forward a point of view and seek to persuade the reader to adopt that view.
Other types of texts include multimedia texts, which convey meaning through text and
graphics (Mayer, 2001), and multimodal texts, which “exceed the alphabetic and may include
still and moving images, animations, color, words, music and sound” (Selfe, 2007, p. 1).
Although the terms multimedia and multimodal are sometimes used interchangeably (Moreno &
Mayer, 2007), using them separately may help differentiate between multimedia texts that are
primarily static (printed words and graphics) and multimodal texts that are more dynamic, in
ways that are more typical of digital texts. Dynamic features further complicate the
comprehension of multimedia and multimodal texts (Dalton & Proctor, 2008; Magliano, Higgs,
& Clinton, 2019). Arrows used to connect ideas in the text region of Figure 1 reflect the
increasing complexity that each diverse text type likely introduces to the meaning-making
process.
Digital texts, in particular, may be differentiated not only by format or genre but also by
where the text is found and how readers engage with the text. Each text type introduces
important similarities and distinctions worthy of comparison along a spectrum of text complexity
(see also Barzillai et al., 2018). For example, when readers engage with text comparable to that
in a printed format but the static reading experience takes place on a digital screen, such texts are called on-screen texts (Baron, 2015; Dillon, 1992). Alternatively, hypertexts are designed to
digitally link textual materials and ideas (Burbules & Callister, 2000) in a range of possible
interconnections through which readers are able to construct their own personal pathways
(Landow, 1994).
Typically, hypertext contains content that is located beneath multiple layers of
hyperlinks, navigational buttons, or dynamic image maps, turning hypertexts into hypermedia,
which is a digital version of multimedia (Moos, 2014). Hypermedia requires readers to integrate
processes for decoding and interpreting images of pictures or video with a repertoire of more
foundational comprehension strategies (Kinzer & Leander, 2003). Hypertext and hypermedia
comprehension are also influenced by the level of coherence between ideas that readers
encounter within and across texts (Salmerón, Cañas, Kintsch, & Fajardo, 2005) and the extent to
which organizational devices indicate the underlying structure embedded into hypertext content
(Al-Seghayer, 2007).
In this commentary, both hypertext and hypermedia refer to digitally networked texts
found within a closed or bounded digital environment with one organizational structure (e.g.,
CD-ROM encyclopedias, library databases, digital storybooks). An extension of hypertext has
been called internet text (Coiro & Dobler, 2007), which refers to hypertexts and other texts found within the open-ended networked system of the internet (Hill & Hannafin, 1997) that
changes daily in structure, form, and content (Zakon, 2018). Multimodal internet texts, internet
search tools, and animated digital advertisements alongside on-screen static texts create a
dizzying array of possibilities for intertextual and multimodal connections and intercultural
negotiations across hidden social, economic, and political agendas (Cope & Kalantzis, 2000b).
The combination of multiple text types further complicates how to characterize text as part of
digital reading in online spaces (Hartman et al., 2010). Nuanced affordances such as
segmentability, juxtaposibility, malleability, multiauthorability, and responsiveness continue to
compound the challenges of characterizing digital texts (Hartman & Morsink, 2018).
Finally, augmented reality texts, or those that combine real and computer-generated
images in real time, enable readers to engage with digital information (text, audio, video, and 3-
D objects) without isolating readers from the physical environment (Tzima, Styliaras, &
Bassounas, 2019). In fact, some augmented reality texts integrate video chat and screen-sharing
capabilities that enable readers to engage with both the on-screen text and other readers in a
different place at the same time (see, e.g., Hardman, 2018). Augmented reality texts are
particularly interesting because they bring a continuum of complex texts full circle to exemplify
how printed and digital texts can exist in the same space (Hering, 2019). This type of text raises
additional questions about how best to characterize digital reading. Thus, in Figure 1, an arrow
links augmented reality text back to literary text to symbolize an overlapping ring of possible text
features available in digital spaces.
Surely, in 2020, with the possibility that readers may encounter at least five unique kinds
of texts overlapping in a given digital space (on-screen text, hypertext, hypermedia, internet text,
and augmented reality text) that may integrate one or more of five traditional text types (literary,
informational, hybrid, multimedia, and multimodal), it is no longer valid to suggest that digital
reading involves only one kind of text.
Activity
A second element in the RRSG (2002) comprehension heuristic is the reading activity, which
entails the purpose, process, and consequences of an activity. Reading activities can be
differentiated as word- and sentence-level activities and those involving single and multiple texts
(Britt, Goldman, & Rouet, 2012). Word- and sentence-level reading activities typically involve
decoding, reading aloud with fluency and expression, and vocabulary work, such as defining and
using words in isolation or focusing on selected words in the context of one or more reading
passages. As Rosenblatt (1978) asserted, comprehension activities involving narrative or
informational text may be classified for the purpose of efferent reading and response (deriving
information from text) or aesthetic reading and response (enjoying a text’s characteristics
through personal experience and interpretation). NAEP’s reading framework (NAGB, 2017)
situates comprehension for both efferent and aesthetic purposes in increasingly challenging
activities that require readers to locate and recall, integrate and interpret, or critique and evaluate
ideas from literary, informational, or persuasive texts.
To complete comprehension activities, readers may engage with ideas found within a
single text or ideas located across multiple (two or more) texts, which are sometimes referred to
as documents or sources (see Britt et al., 2012). Activities across texts, documents, or sources
may be printed, digital, or both, suggesting a direct overlap between reading activity and text
type, which further complicates our ability to define comprehension-related activities involving
multiple texts (read more in Braasch et al., 2018). At least four models of multiple-document
comprehension (see List & Alexander, 2017) have been proposed to describe additional
structures and complex mechanisms that account for comprehension in reading situations
involving multiple texts. Again, arrows in the activity region of Figure 1 symbolize the
increasing complexity likely to characterize each new type of activity.
Comprehension activities in open digital spaces require readers to navigate, evaluate, and
integrate ideas that they encounter from narrative or informational hypertext, hypermedia, or
internet text. “In such scenarios, the reader has to cope with (a) the constantly growing number
of available information sources, (b) the different formats in which digital information is
presented, and (c) the varying quality of the information available” (Salmerón et al., 2018, p. 91).
Informational digital reading comprehension activities, in particular, may ask readers to engage
with multiple texts to learn more about a topic or to take a position on a set of given conflicting
claims (Coiro, Coscarelli, Maykel, & Forzani, 2015). Further, making sense of some online
resources involves confirming the credibility of information, whereas other, more one-sided
resources may require readers to apply different evaluation practices to question the overall
credibility of claims being made (Kiili et al., 2018).
Other reading activities expect readers to not only comprehend but also competently act
on new knowledge gained from reading experiences in ways that promote social justice and
critical media literacy. These critical media literacy activities (Funk, Kellner, & Share, 2016)
typically involve critically analyzing relationships between media and audiences, or information
and power, across multiple sources. One example is Mind Over Media (Media Education Lab,
2015), a web-based digital platform with crowdsourcing features that invite readers to exchange
ideas about how to recognize, interpret, respond to, and assess the impact of 21st-century
propaganda examples in their community.
Online research and inquiry activities (Castek, Coiro, Henry, Leu, & Hartman, 2015; Leu
et al., 2013) push readers further into ever-changing open networked environments to identify
and then solve problems by searching, locating, evaluating, synthesizing, and communicating in
digital and/or nondigital contexts about what they learned. Responses to online research and
inquiry activities or even printed reading research activities may lead to digital creation
activities, during which readers “gain knowledge and demonstrate competencies by working with
a variety of symbolic systems and a variety of genres to inform, persuade, and entertain target
audiences” (Hobbs, 2017, p. 1). At the time of publication, Hobbs (2017) identified at least nine
media forms of digital creation: blogs and websites, digital audio and podcasting, images,
infographics and data visualization, vlogs and screencasts, video production, animation, remix
production, and social media.
As researchers and educators continue to combine and adapt diverse types of texts and
activities, unique and specialized reading activities will continue to evolve. Two such examples
are digital reading activities that mobilize sound and critical literacy as integral components of
digital inquiry (Wargo, 2019) and reading activities that integrate creative T-shirt design with
digital inquiry in a makerspace context to promote and practice data literacy skills (Stornaiuolo,
2018). Once again, confining conceptions of comprehension to a simplistic and binary definition
of print or digital reading continues to ignore the immense variation of meaning-making
activities in which readers are likely to engage across the spectrum of print and digital texts.
Reader
A third element in the RRSG’s (2002) conception of comprehension is that of the reader and
differences in variables linked to cognitive capabilities, reading competencies, reading
dispositions, and sociocultural identities. These variables may interact with one another and with
the texts a reader engages with to influence comprehension performance on a particular reading
activity. My earlier argument still applies today as new digital texts and reading activities
continue to emerge: “If we expand our definition of...texts [and activities] as previously
described, then we must also consider how these texts, and prior experiences with them,
compound the variability in readers” (Coiro, 2003, p. 462). Although space does not allow for a
comprehensive review in this area here, it is important to recognize how empirical work
continues to broaden our understanding of reader differences while also reaffirming their
influence on comprehension as individuals engage with different texts and activities.
Cognitive Capabilities
Much like studies of print-based comprehension, studies of how readers process digital texts (in
the context of a particular reading activity) have indicated that variation in at least five cognitive processes influences comprehension in digital spaces. These processes include the ability to do the following:
• Attend to and remember information (Andresen, Anmarkrud, & Bråten, 2019; Baron,
2017)
• Monitor and self-regulate one’s understanding of information (Afflerbach & Cho, 2009;
Coiro & Dobler, 2007; Goldman et al., 2012)
• Critically evaluate information for a number of purposes (Barzilai & Zohar, 2012;
Bråten, Strømsø, & Britt, 2009)
• Integrate and synthesize information (Kiili & Leu, 2019; Salmerón et al., 2018)
• Process information at deep levels (Singer & Alexander, 2017a, 2017b; Wolf, 2018)
Often, researchers have pointed out important relations among the cognitive capabilities that influence comprehension. One recent study of 426 sixth graders, for instance, found that skills in nonverbal reasoning (as well as foundational reading skills in word identification, fluency, and written spelling) predicted online research and comprehension performance (Kanniainen, Kiili, Tolvanen, Aro, &
Leppänen, 2019). In another study, differences in working memory among 44 high school
students with and without dyslexia influenced readers’ ability to integrate information across
multiple webpages and representations (Andresen et al., 2019).
Other researchers have turned their attention to special populations of readers to further
broaden our understanding of how differences in cognitive capabilities influence comprehension
in digital spaces. Findings from one review, for example, highlighted the nature of higher level
comprehension strategy use among second-language learners as they engaged with digital texts
and new technologies (Liaw & English, 2017), and another study identified important
considerations among young adult readers with intellectual disabilities asked to read critically on
the internet (Delgado, Ávila, Fajardo, & Salmerón, 2019). Elsewhere, Schirmer, Bailey, and
Lockman (2004) outlined the processes of young deaf students as they read and comprehended
different kinds of texts; Crow (2008) reported how four types of disabilities (visual, hearing,
motor, or cognitive impairments) impacted students’ comprehension of online reading materials;
and Mann, O’Neill, and Thompson (2018) have begun to characterize differences in
comprehension between deaf and hearing children as they search for and read online hypertexts.
Other studies have indicated that certain kinds of texts and tasks may enhance cognitive
capabilities for readers who struggle in traditional print-reading environments. This research has
included, for example, work exploring the affordances of computer-assisted instruction for
students with attention deficit disorders (Raggi & Chronis, 2006), learning disabilities (Hall,
Hughes, & Filbert, 2000), or intellectual disabilities (Snyder & Huber, 2019), as well as studies
applying principles of Universal Design for Learning to support diverse readers with flexible
learning environments and accessible content (Coyne, Pisha, Dalton, Zeph, & Smith, 2012;
Dalton, Proctor, Uccelli, Mo, & Snow, 2011). Still others have focused on the collaborative use
of graphic representational tools (Kiili, Coiro, & Hämäläinen, 2016) or the metacognitive
scaffolding of online information search practices (Zhou & Lam, 2019) to support the cognitive
processing abilities of diverse learners in digital spaces. Findings from these studies again
highlight the interplay among text, activity, and reader variables in digital spaces.
Reading and Language Competencies
In addition to variation in general cognitive capabilities, readers often vary in their knowledge of
and experience with varied types of texts, which in turn influences how they move through and
construct meaning from text in digital spaces. For example, individuals with varying levels of
prior domain knowledge who read hypertext in more or less coherent orders obtained different
learning outcomes related to their construction of the textbase or of a situation model (Salmerón
et al., 2005). These findings replicate what we know about the effects of reader knowledge and
text coherence in linear printed texts (e.g., Goldman & Saul, 1990; McNamara & Kintsch, 1996).
Elsewhere, similar to studies of print-based comprehension performance, word-level
reading skills (Bråten, Ferguson, Anmarkrud, & Strømsø, 2013; Sabatini et al., 2014) and
language skills (Al-Seghayer, 2007; Coscarelli, 2018) have continued to contribute to differences
in comprehension and learning as readers of varying levels of ability and linguistic competency
engaged with multiple texts in varied digital reading activities. Skills in core academic language,
perspective taking, and complex reasoning strongly predicted deep reading comprehension on
computer-based assessments involving a variety of digital texts (e.g., blog, website, email, news
article, textbook excerpt; LaRusso et al., 2016). Further, multilingual readers able to leverage the
dynamic affordances of digital spaces enjoyed a broader range of opportunities to represent their
comprehension of and engagement with multimodal text (Lotherington & Janson, 2011).
Other studies have highlighted how particular types of reading strategy use influence
hypertext and internet text comprehension (Afflerbach & Cho, 2009; Cho & Afflerbach, 2017).
Proficient readers of internet texts strategically leveraged their knowledge of topics and digital
text types (e.g., search engines, websites), constructed critical questions to focus their inquiry,
and monitored both their reading pathways and their understanding of content in open online
spaces (Cho & Afflerbach, 2017; Coiro & Dobler, 2007). Differences in navigational decisions
and metacognitive reading strategy use also significantly predicted comprehension in digital
reading spaces (Lawless & Kulikowich, 1996; Lim & Jung, 2019; OECD, 2017; Salmerón et al.,
2018). Further, skilled readers of internet texts demonstrated proficiency in using a variety of
source characteristics (e.g., author credentials, content, publisher, document type) to judge the
quality of information (Coiro et al., 2015; Goldman et al., 2012) while applying critical
evaluation processes that involved both questioning and confirming the credibility of information
encountered on the internet (Kiili et al., 2019).
Reading Dispositions and Motivations
Several studies have highlighted how a reader's varied motivations, interests, and attitudes toward reading print and digital texts impacted comprehension (e.g., Guthrie, Wigfield, & Perencevich,
2004; Jang & Henretti, 2019; Lim & Jung, 2019; Lupo, Jang, & McKenna, 2017). Empirical
work continues to point to reader dispositions (or their attitudes, mind-sets, and beliefs) about
reading, about themselves as readers, and about the nature of knowledge and knowing that
influence comprehension in digital spaces. Coiro (2009), for example, found that readers who
believed that the internet is useful, valuable, and engaging were willing to endure the challenges
of navigating and reading across internet texts, whereas those who viewed online inquiry as a
source of frustration tended to avoid using independent reading strategies and instead sought
help from others (see also Coiro & Moore, 2012). Other researchers (O’Byrne & McVerry, 2009;
Putman, 2014) have reported the significant role of dispositional variables, such as reflection,
persistence, collaboration, anxiety, interest, and self-efficacy, and their interaction with other
contextual and reader factors related to experience with technology and the internet. Similarly,
differences in reader preferences (Baron, 2017) and mind-sets (Wolf, 2018) and in purposes for
reading (e.g., user-generated purposes vs. purposes generated by others; see List & Alexander,
2019; Schwan & Cress, 2017) can greatly impact one’s ability to process and construct meaning
from print and digital texts.
Another important area of research has shown that reader variation in epistemic beliefs significantly predicted students' evaluation of information sources, after controlling for
prior knowledge and text comprehensibility (Strømsø, Bråten, & Britt, 2011). Further, Barzilai
and Zohar (2012) differentiated between the complex roles of cognitive and metacognitive levels
of epistemic knowledge as learners engaged in online inquiry activities while reading internet
texts. Epistemic beliefs significantly impacted readers’ ability to calibrate their learning to the
complexity of content encountered in hierarchical hypertext (Pieschl, Stahl, & Bromme, 2008),
as well as their ability to successfully engage in online research and inquiry activities, such as
judging information sources, monitoring their knowing processes, and regulating their
knowledge-seeking actions (Cho et al., 2017).
Sociocultural Identities
A fourth dimension of reader differences relates to the varied sociocultural identities that readers
adopt in the context of specific print or digital reading environments and how these interact with
other reader variables. Lee, Park, Jang, and Cho (2019) proposed a triangulation framework that
integrates sociocultural, affective, and cognitive perspectives on digital literacies to better
describe the multifaceted nature of youth digital literacies. This framework extends thinking to
include varied social and cultural nuances in how readers see themselves and digital
environments as spaces for engagement and participation. Given the important role of reader
attitudes and beliefs described earlier, new identities that readers take on in different affinity
spaces in both in-school and out-of-school contexts are also likely to influence comprehension as
part of print and digital reading (see McCarthey & Moje, 2002).
Indeed, in Learning and Practice: Agency and Identities, Murphy and Hall (2008) bring
together several chapters that suggest the nature of texts and activities can situate certain readers
in ways that might be conducive or detrimental to their interests, depending on their self-
constructed identities and perceived roles in different spaces (see also Jenkins, 2014). For over a
decade, collections of studies (see Buckingham, 2007; Warburton & Hatzipanagos, 2012) have
described the impact that participating in networked digital spaces had on reader identity and
how readers’ identity influenced their reasons for engaging with (or avoiding) particular digital
texts and activities. Nash (2019) encouraged educators and students alike to consider how their
own cultural beliefs, views, and practices influence the choices that they make about what to
click on and how to react to diverse internet texts and perspectives as part of online reading. I propose that reading researchers also consider nuances of sociocultural identity when conceptualizing reader variation in digital reading contexts.
Context
Finally, at least four sets of contextual elements can be useful for characterizing the broader
situations in which particular readers engage with particular text types as part of particular
reading activities. As depicted along the left of the outside ring of Figure 1, the first contextual
element involves the medium or platform in which print or digital texts are found. This set of
contextual considerations is positioned near the text element in the figure as a reminder of the
context(s) in which certain kinds of texts may be found. Printed texts, for instance, may be found
on a single page of paper or printed on an object such as a sticky note, index card, scroll, sign,
blackboard, chart paper, or wall mural. Multiple printed pages may be shared as a loose
collection of pages that can be flexibly reorganized by the reader or bound together in book
format in a particular sequence determined by the creator (e.g., magazine, brochure, paperback or
hardcover book).
As readers transition into digital spaces, one or more text forms (e.g., narrative, informational, multimodal, hypertext) may be located on any of a myriad of digital platforms.
Depending on the situation, texts can be viewed on a range of digital devices (computer, e-
reader, tablet, or mobile phone), and readers may encounter multiple digital texts contained
within a specific software program, mobile app, digital textbook, or online virtual world. Readers
may also access texts in the context of augmented reality situations (Billinghurst & Dünser,
2012; Davis, 2015) or as part of an immersive headset-based virtual or mixed-reality experience
(Amin, Arantha, & D’souza, 2018; Yang, 2019).
Each type of digital medium/platform is likely to have unique features, with the potential
to hinder or support comprehension. This is especially true when these features interact with
reader characteristics, such as preference or prior experience, and type of comprehension activity
(e.g., accessing and evaluating digital texts vs. integrating and responding to digital texts).
Indeed, researchers have begun to characterize features of different digital platforms and their
relations to reading performance and reading affect. Findings illustrate both affordances and
constraints of digital reading on platforms such as e-books (Larson, 2010), Universal Design for
Learning e-texts (Dalton et al., 2011), e-textbooks (Dobler, 2015), collaborative digital textbooks
(Kempe & Grönlund, 2019), mobile phones (Çakmak, 2019), iPads (Hutchison, Beschorner, &
Schmidt-Crawford, 2012), and augmented reality applications (Bursali & Yilmaz, 2019). Efforts
are now needed to summarize similarities and differences across findings from studies situated in
a range of different contexts to guide the selection of digital platforms in future research and
practice.
Second, when readers are asked to demonstrate their comprehension of texts while
engaged in various activities, responses may take a range of formats, as depicted across the top of
Figure 1. According to the NAGB (2017), using multiple types of response formats for
demonstrating comprehension “affirms the complex nature of the reading process because it
recognizes that different kinds of information can be gained from each item type. It also
acknowledges the real-world skill of being able to write about what one has read” (p. 41). Thus,
different item formats possess different affordances that allow evaluators access to different
facets of students’ understanding.
In classroom settings, learners might be asked to write, draw, and/or orally explain what
they have learned from their reading. On reading assessments, readers may encounter selected
response items (e.g., multiple choice, true/false), short constructed-response items (writing on
paper or typing one or two phrases or sentences in the digital interface), or extended constructed-
response items (writing or typing longer, more elaborated answers of a paragraph or two)
(NAGB, 2017). As large-scale reading assessments transition into digitally based assessments, a
variety of new questions and task types have been introduced to capture outcome and process
data, including scenario-based tasks, interactive computer tasks, and hybrid hands-on tasks
(National Center for Education Statistics, 2019).
Educators have also been exploring new digital formats for students to demonstrate their
comprehension, using technology to screencast think-alouds (White, 2016), record podcasts
(Morgan, 2015), and digitally annotate texts that they read (Léger et al., 2019). Of course, there
are ways of combining digital response formats, such as integrating digital portfolios and
Flipgrid video reflection to demonstrate comprehension of complex concepts (Johnson &
Skarphol, 2018), or choosing one or more of the creative response options outlined in Hobbs’s
(2017) book Create to Learn: Introduction to Digital Literacy. In each case, educators reported
increases in student engagement and comprehension, validating the role of response type as an
important indicator to consider in any study of digital reading.
A third collection of design considerations may help characterize other contextual
features related to the reading activity (as depicted along the right of the outside ring in Figure
1). For example, studies involving digital reading have explored quantitative or qualitative
differences related to whether the reading task was timed or untimed (Colwell, 2013); whether
readers worked individually, with a partner, or in small or large groups (Hampel, 2006); whether
readers read to accomplish personal or task goals (List & Alexander, 2019); and/or whether
readers engaged with texts, other people, or dynamic avatars in face-to-face situations (Kiili et
al., 2019), collaborative online documents (Abrams, 2019), or virtual worlds (see Coiro et al.,
2019). In each of these studies, contextual design features had an impact on interaction,
communication, and/or learning. Without a doubt, this set of contextual design features will
continue to grow and change with new technologies that introduce new contextual affordances
and constraints; in turn, many of these new features are likely to directly or indirectly influence
digital reading performance.
A fourth set of contextual considerations involves features of the community in which the
reading activity takes place. In her review of recent work involving students’ literacies learning
with digital applications or through interactions with digital devices, Hagerman (2019) argued
that context and “the local situatedness of technology use” (p. 116) are central to meaning
making. Further, Hagerman cited work suggesting that digital interfaces and other
placed resources (Prinsloo, 2005; Rowsell, Saudelli, Scott, & Bishop, 2013)…take on and enable
the creation of meaning in relation to where they are used, when they are used, how they are used,
by whom and for what situated purposes (Prinsloo & Rowsell, 2012).
Thus, this set of contextual considerations (pictured at the lower right of Figure 1) is positioned
in close proximity to the reader element as a reminder that readers may take on different identities
and bring diverse competencies and motivations to different communities of practice.
Although a summary of studies conceiving digital reading in these ways is beyond the
scope of this commentary, at the very least, differences related to the location of a particular print
or digital reading experience (e.g., at school, at an after-school location, at home, or elsewhere in the
community) are likely to interact with differences in reader factors to influence reading
comprehension performance in important ways (see, e.g., Moje, Young, Readence, & Moore,
2000; Tucker-Raymond & Gravel, 2019). Hagerman's (2019) review brought to light how reading activities situated in contexts that foster inclusivity, creativity, content knowledge learning, analytical and problem-solving skills, and interpersonal skills reflect the authentic and equitable conceptualizations of digital literacy that we ought to be striving for in today's diverse world.
Where Do We Go From Here?
Overall, contemporary theories and research from national and international literacy scholars
point to a wide variety of indicators that can be used to characterize comprehension in digital
spaces. Unfortunately, too many studies rely on definitions of digital reading that are either nonexistent or specified with widely varying degrees of precision. A failure to clearly
define and then control for the effects of differences in texts, activities, readers, and/or contexts
may explain the inconsistent results found in many studies of digital reading. Moreover, large
empirical studies focused solely on positioning print and digital reading as isolated comparable
activities have done little to advance work identifying and tracking important commonalities and
differences in comprehension processes and outcomes across the spectrum of print and digital
spaces.
As Dillon (1992) concluded almost 30 years ago, if we are to leverage the affordances of
digital texts and tools rather than just match digital reading to the features and outcomes of print
reading, a broader and more realistic conceptualization of human reading is required. The
multifaceted comprehension heuristic proposed in this commentary is designed to invite
members of the literacy community to embrace and unpack the complexity of digital reading
while also working to promote greater clarity around important dimensions of reading worthy of
consideration in the future. Organizing dimensions of digital comprehension in RRSG’s (2002)
framework can help further expand our thinking by validating familiar understandings of text,
activity, reader, and context as part of comprehension while also highlighting important
differences within and across these elements as technology continues to transform our
conceptions of reading in the years ahead. Researchers might, for example, refer to the heuristic to help explicate the constraints and affordances of different arrangements, or constellations, of reading in this multivariate space.
Moving forward, use of the proposed heuristic of digital reading can promote a broad and
complex understanding of reading comprehension that has implications for assessment, research,
practice, and policy. In terms of assessment, Mislevy (2016) called for more attention to a
myriad of considerations that arise when assessing performance in complex tasks. Consequently,
psychometricians might select different combinations of features represented in the proposed
heuristic to accurately measure the complex integration of comprehension skills and practices
required in particularized digital reading environments. In turn, recent advances in technology
can be leveraged to capture and track detailed evidence of both content understanding and
targeted comprehension processes in digital environments (see, e.g., Coiro et al., 2019; Kerr,
Andrews, & Mislevy, 2017).
As these and other assessment instruments begin to provide psychometrically valid and
reliable data, literacy researchers may also refer to this expanded heuristic to systematically
explore relations between complex products and interactive performances among different kinds
of readers within and across certain kinds of digital texts, activities, and contexts. Researchers
might also draw from the terms in this heuristic to provide rich and explicit descriptions of the
particular readers, texts, activities, and contexts most relevant in their work. Efforts to systematically name and operationalize important similarities and differences in these elements will enable readers to draw more reliable conclusions across studies.
A common set of terms and definitions would also address the lack of conceptual clarity
and specificity reported in most studies of digital reading (see Singer & Alexander, 2017b). In
turn, conceptual clarity in studies of digital reading can pave the way for systematic comparisons
and comprehensive literature reviews of how different kinds of readers perform in a range of
authentic digital reading situations. In addition, complexity represented in this heuristic may
inform efforts to move beyond studying digital reading in carefully controlled situations toward
more design-based research methodologies that conceptualize digital reading in more flexible
and contextually situated ways.
With respect to educational practice, classroom teachers, school librarians, and
makerspace leaders are invited to reflect on the range of digital reading experiences apparent in their contexts and the extent to which what they observe is present in or absent from the proposed heuristic. Surely, important insights can be gained from aligning conceptions of digital reading
among educators in formal and informal learning environments with those emerging among
assessment designers and researchers. Formative studies that involve collaborations between
educators and researchers are also likely to enhance our understanding of how to authentically
assess indicators of comprehension and learning while diagnosing and supporting readers who
struggle in complex digital spaces.
Finally, the proposed broadened heuristic of reading can inform literacy organizations
and policymakers seeking to articulate their own values and positions about what counts as
reading in a digital world. By formulating their thinking in line with important insights about
digital texts, digital activities, digital readers, and digital contexts, organizations can ground their
recommendations for integrating reading and technology use in research-based indicators linked
to growth in comprehension, engagement, and creative response.
Perhaps most importantly, policymakers should be cautious about finalizing funding
priorities and curricular decisions in the absence of conclusive findings about which kinds of
students benefit most from which kinds of digital reading activities. As Baron (2017) reminded
us, “digital technology is still in its relative infancy. We know it can be an incredibly useful
educational tool, but we need much more research before we can draw firm conclusions about its
positive and negative features” (Implications for Educators section, para. 6).
Agencies supporting efforts to improve digital reading are advised to consider elements
of the proposed comprehension heuristic in their decisions about what kinds of research are
needed in the years ahead. Only after we systematically embrace and unpack a more complex
definition of digital reading will the literacy community be able to further advance the vigorous
and cumulative research and development program envisioned by the RRSG (2002) almost 20
years ago.
References
Abrams, Z.I. (2019). Collaborative writing and text quality in Google Docs. Language Learning
& Technology, 23(2), 22–42. https://doi.org/10125/44681
Afflerbach, P., & Cho, B.-Y. (2009). Identifying and describing constructively responsive
comprehension strategies in new and traditional forms of reading. In S.E. Israel & G.G.
Duffy (Eds.), Handbook of research on reading comprehension (pp. 69–90).
Mahwah, NJ: Erlbaum.
Alexander, P.A., & The Disciplined Reading and Learning Research Laboratory. (2012).
Reading into the future: Competence for the 21st century. Educational Psychologist,
47(4), 259–280. doi:10.1080/00461520.2012.722511
Al-Seghayer, K. (2007). The role of organizational devices in ESL readers’ construction of
mental representations of hypertext content. CALICO Journal, 24(3), 531–559.
doi:10.1558/cj.v24i3.531-559
Alvermann, D.E., & Guthrie, J.T. (1993). Themes and directions of the National Reading
Research Center. Athens, GA & College Park, MD: National Reading Research Center.
American Association of School Librarians. (2018). AASL standards framework for learners.
Chicago, IL: Author. Retrieved from https://standards.aasl.org/wp-
content/uploads/2017/11/AASL-Standards-Framework-for-Learners-pamphlet.pdf
American Library Association. (2013). Digital literacy, libraries, and public policy: Report of
the Office for Information Technology Policy’s Digital Literacy Task Force. Washington,
DC: Author.
Amin, S., Arantha, S., & D’souza, A. (2018). Project report on V-learning: Virtual reality based
games for dyslexia improvement. Retrieved from
http://dspace.dbit.in/jspui/bitstream/123456789/5653/1/blackbook_V-
Learning_FINAL.pdf
Andresen, A., Anmarkrud, Ø., & Bråten, I. (2019). Investigating multiple source use among
students with and without dyslexia. Reading and Writing, 32(5), 1149–1174.
doi:10.1007/s11145-018-9904-z
Baker, E.A. (2008). The new literacies: Multiple perspectives on research and practice. New
York, NY: Guilford.
Baron, N.S. (2015). Words onscreen: The fate of reading in a digital world. New York, NY:
Oxford University Press.
Baron, N.S. (2017). Reading in a digital age. Phi Delta Kappan, 99(2), 15–20.
http://kappanonline.org/reading-digital-age/
Barton, D., Hamilton, M., & Ivanič, R. (Eds.). (2000). Situated literacies: Reading and writing in
context. London, UK: Routledge.
Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online
sources. Cognition and Instruction, 30(1), 39–85. doi:10.1080/07370008.2011.636495
Barzillai, M., Thomson, J., Schroeder, S., & van den Broek, P. (2018). Introduction. In M.
Barzillai, J. Thomson, S. Schroeder, & P. van den Broek (Eds.), Learning to read in a
digital world (pp. vii–x). Amsterdam, The Netherlands: John Benjamins.
Billinghurst, M., & Dünser, A. (2012). Augmented reality in the classroom [Video].
Christchurch: Human Interface Technology Laboratory New Zealand, University of
Canterbury. Retrieved from
https://www.youtube.com/watch?v=ndUjLwcBIOw&feature=youtu.be
Bintz, W.P., & Ciecierski, L.M. (2017). Hybrid text: An engaging genre to teach content area
material across the curriculum. The Reading Teacher, 71(1), 61–69. doi:10.1002/trtr.1560
Braasch, J.L.G., Bråten, I., & McCrudden, M.T. (2018). Introduction. In J.L.G. Braasch, I.
Bråten, & M.T. McCrudden (Eds.), Handbook of multiple source use (pp. 1–13). New
York, NY: Routledge.
Bråten, I., Ferguson, L.E., Anmarkrud, Ø., & Strømsø, H.I. (2013). Prediction of learning and
comprehension when adolescents read multiple texts: The roles of word-level processing,
strategic approach, and reading motivation. Reading and Writing, 26, 321–348.
doi:10.1007/s11145-012-9371-x
Bråten, I., Strømsø, H.I., & Britt, M.A. (2009). Trust matters: Examining the role of source
evaluation in students’ construction of meaning within and across multiple texts. Reading
Research Quarterly, 44(1), 6–28. doi:10.1598/RRQ.44.1.1
Britt, M.A., Goldman, S.R., & Rouet, J.-F. (Eds.). (2012). Reading—from words to multiple
texts. New York, NY: Routledge.
Buckingham, D. (Ed.). (2007). Youth, identity, and digital media. Cambridge, MA: MIT Press.
Burbules, N.C., & Callister, T.A. (2000). Watch IT: The risks and promises of information
technologies for education. Boulder, CO: Westview.
Bursali, H., & Yilmaz, R.M. (2019). Effect of augmented reality applications on secondary
school students’ reading comprehension and learning permanency. Computers in Human
Behavior, 95, 126–135. doi:10.1016/j.chb.2019.01.035
Çakmak, F. (2019). Mobile learning and mobile assisted language learning in focus. Language
and Technology, 1(1), 30–47.
Castek, J., Coiro, J., Henry, L.A., Leu, D.J., & Hartman, D.K. (2015). Research on instruction
and assessment in the new literacies of online research and comprehension. In S.R. Parris
& K. Headley (Eds.), Comprehension instruction: Research-based best practices (3rd
ed., pp. 324–344). New York, NY: Guilford.
Cho, B.-Y., & Afflerbach, P. (2017). An evolving perspective of constructively responsive
reading comprehension strategies in multilayered digital text environments. In S.E. Israel
(Ed.), Handbook of research on reading comprehension (2nd ed., pp. 109–134). New
York, NY: Guilford.
Cho, B.-Y., Woodward, L., Li, D., & Barlow, W. (2017). Examining adolescents’ strategic
processing during online reading with a question-generating task. American Educational
Research Journal, 54(4), 691–724. doi:10.3102/0002831217701694
Coiro, J. (2003). Reading comprehension on the internet: Expanding our understanding of
reading comprehension to encompass new literacies. The Reading Teacher, 56(5), 458–
464.
Coiro, J. (2009, May). Promoting online reading success: Understanding students’ attitudes
toward reading on the internet. Paper presented at the 54th annual meeting of the
International Reading Association, Minneapolis, MN.
Coiro, J. (2011). Predicting reading comprehension on the internet: Contributions of offline
reading skills, online reading skills, and prior knowledge. Journal of Literacy Research,
43(4), 352–392. doi:10.1177/1086296X11421979
Coiro, J. (2015). Purposeful, critical, and flexible: Key dimensions of online reading and
learning. In R.J. Spiro, M. DeSchrvyer, M.S. Hagerman, P.M. Morsink, & P. Thompson
(Eds.), Reading at a crossroads? Disjunctures and continuities in current conceptions
and practices (pp. 53–64). New York, NY: Routledge.
Coiro, J., Castek, J., & Guzniczak, L. (2011). Uncovering online reading comprehension
processes: Two adolescents reading independently and collaboratively on the internet. In
P.J. Dunston, L.B. Gambrell, K. Headley, S.K. Fullerton, P.M. Stecker, V.R. Gillis, &
C.C. Bates (Eds.), Sixtieth yearbook of the Literacy Research Association (pp. 354–369).
Oak Creek, WI: Literacy Research Association.
Coiro, J., Coscarelli, C., Maykel, C., & Forzani, E. (2015). Investigating criteria that seventh
graders use to evaluate the quality of online information. Journal of Adolescent & Adult
Literacy, 58(7), 546–550. doi:10.1002/jaal.448
Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by
sixth-grade skilled readers to search for and locate information on the internet. Reading
Research Quarterly, 42(2), 214–257. doi:10.1598/RRQ.42.2.2
Coiro, J., & Hobbs, R. (2017, April). Digital literacy as collaborative, transdisciplinary, and
applied. Paper presented at the annual meeting of the American Educational Research
Association, San Antonio, TX.
Coiro, J., Knobel, M., Lankshear, C., & Leu, D.J. (Eds.). (2008). Handbook of research on new
literacies. Mahwah, NJ: Erlbaum.
Coiro, J., & Moore, D.W. (2012). Research connections: An interview with Julie Coiro. Journal
of Adolescent & Adult Literacy, 55(6), 551–553. doi:10.1002/JAAL.00065
Coiro, J., Sparks, J.R., Kiili, C., Castek, J., Lee, C., & Holland, B.R. (2019). Students engaging
in multiple-source inquiry tasks: Capturing dimensions of collaborative online inquiry
and social deliberation. Literacy Research: Theory, Method, and Practice, 68(1), 271–
292. doi:10.1177/2381336919870285
Coiro, J., Sparks, J.R., & Kulikowich, J. (2018). Assessing online collaborative inquiry and
social deliberation as learners navigate multiple sources and perspectives. In I. Bråten, J.
Braasch, & M. McCrudden (Eds.), Handbook of multiple source use (pp. 485–501).
London, UK: Routledge.
Colwell, N.M. (2013). Test anxiety, computer-adaptive testing, and the Common Core. Journal
of Education and Training Studies, 1(2), 50–60. doi:10.11114/jets.v1i2.101
Cope, B., & Kalantzis, M. (2000a). Introduction: Multiliteracies: The beginnings of an idea. In
B. Cope & M. Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of
social futures (pp. 3–8). New York, NY: Routledge.
Cope, B., & Kalantzis, M. (Eds.). (2000b). Multiliteracies: Literacy learning and the design of
social futures. New York, NY: Routledge.
Coscarelli, C.V. (2018). Cultural perspectives for the use of digital technologies and education.
Revista Brasileira de Alfabetização, 1(8), 33–56.
Coyne, P., Pisha, B., Dalton, B., Zeph, L.A., & Smith, N.C. (2012). Literacy by design: A
Universal Design for Learning approach for students with significant intellectual
disabilities. Remedial and Special Education, 33(3), 162–172.
doi:10.1177/0741932510381651
Crow, K.L. (2008). Four types of disabilities: Their impact on online learning. TechTrends,
52(1), 51–55. doi:10.1007/s11528-008-0112-6
Cuban, L. (2018). The flight of a butterfly or the path of a bullet? Using technology to transform
teaching and learning. Cambridge, MA: Harvard Education Press.
Dalton, B., & Proctor, C.P. (2008). The changing landscape of text and comprehension in the age
of new literacies. In J. Coiro, M. Knobel, C. Lankshear, & D.J. Leu (Eds.), Handbook of
research on new literacies (pp. 297–324). Mahwah, NJ: Erlbaum.
Dalton, B., Proctor, C.P., Uccelli, P., Mo, E., & Snow, C.E. (2011). Designing for diversity: The
role of reading strategies and interactive vocabulary in a digital reading environment for
fifth-grade monolingual English and bilingual students. Journal of Literacy Research,
43(1), 68–100. doi:10.1177/1086296X10397872
Davis, M. (2015, February 13). Learning spaces in augmented reality [Web log post]. Retrieved
from https://www.literacyworldwide.org/blog/literacy-daily/2015/02/13/learning-spaces-
in-augmented-reality
Delgado, P., Ávila, V., Fajardo, I., & Salmerón, L. (2019). Training young adults with
intellectual disability to read critically on the internet. Journal of Applied Research in
Intellectual Disabilities, 32(3), 666–677. doi:10.1111/jar.12562
Delgado, P., Vargas, C., Ackerman, R., & Salmerón, L. (2018). Don’t throw away your printed
books: A meta-analysis on the effects of reading media on reading comprehension.
Educational Research Review, 25, 23–38. doi:10.1016/j.edurev.2018.09.003
Dillon, A. (1992). Reading from paper versus screens: A critical review of the empirical
literature. Ergonomics, 35(10), 1297–1326. doi:10.1080/00140139208967394
Dobler, E. (2015). E-textbooks: A personalized learning experience or a digital distraction?
Journal of Adolescent & Adult Literacy, 58(6), 482–491. doi:10.1002/jaal.391
Dobler, E., & Eagleton, M. (2015). Reading the web: Strategies for internet inquiry (2nd ed.).
New York, NY: Guilford.
Doneman, M. (1997). Multimediating. In Digital rhetorics: Literacies and technologies in
education—current practices and future directions (Vol. 3, pp. 131–148). Canberra,
ACT, Australia: Department of Employment, Education, Training and Youth Affairs.
Frankel, K.K., Becker, B.L.C., Rowe, M.W., & Pearson, P.D. (2016). From “what is reading” to
what is literacy? Journal of Education, 196(3), 7–17. doi:10.1177/002205741619600303
Funk, S., Kellner, D., & Share, J. (2016). Critical media literacy as transformative pedagogy. In
M.N. Yildiz & J. Keengwe (Eds.), Handbook of research on media literacy in the digital
age (pp. 1–30). Hershey, PA: Information Science Research.
Gee, J.P. (1990). Social linguistics and literacies: Ideology in discourses. London, UK: Taylor &
Francis.
Goldman, S.R., Braasch, J.L.G., Wiley, J., Graesser, A.C., & Brodowinska, K. (2012).
Comprehending and learning from internet sources: Processing patterns of better and
poorer learners. Reading Research Quarterly, 47(4), 356–381.
doi:10.1002/RRQ.027
Goldman, S.R., Greenleaf, C., Yukhymenko-Lescroart, M., Brown, W., Ko, M.-L.M., Emig,
J.M., ... Britt, M.A. (2019). Explanatory modeling in science through text-based
investigation: Testing the efficacy of the Project READI intervention approach. American
Educational Research Journal, 56(4), 1148–1216.
doi:10.3102/0002831219831041
Goldman, S.R., Lawless, K., & Manning, F. (2013). Research and development of multiple
source comprehension assessment. In M.A. Britt, S.R. Goldman, & J.-F. Rouet (Eds.),
Reading—from words to multiple texts (pp. 180–199). New York, NY: Routledge.
Goldman, S.R., & Saul, E. (1990). Flexibility in text processing: A strategy competition model.
Learning and Individual Differences, 2(2), 181–219. doi:10.1016/1041-6080(90)90022-9
Griffin, P., & Care, E. (Eds.). (2015). Assessment and teaching of 21st century skills: Methods
and approach. New York, NY: Springer.
Guthrie, J.T., Wigfield, A., & Perencevich, K.C. (Eds.). (2004). Motivating reading
comprehension: Concept-Oriented Reading Instruction. Mahwah, NJ: Erlbaum.
Hagerman, M.S. (2019). Digital literacies learning in contexts of development: A critical review
of six IDRC-funded interventions 2016–2018. Media and Communication, 7(2), 115–
127. doi:10.17645/mac.v7i2.1959
Hall, T.E., Hughes, C.A., & Filbert, M. (2000). Computer-assisted instruction in reading for
students with learning disabilities: A research synthesis. Education & Treatment of
Children, 23(2), 173–193.
Hampel, R. (2006). Rethinking task design for the digital age: A framework for language
teaching and learning in a synchronous online environment. ReCALL, 18(1), 105–121.
doi:10.1017/S0958344006000711
Hardman, S. (2018, March 23). Read bedtime stories together, even from a distance [Web log
post]. Retrieved from https://edlab.tc.columbia.edu/blog/18828-Read-Bedtime-Stories-
Together-Even-From-a-Distance
Hartman, D.K., & Morsink, P.M. (2018, December). Toward a conceptual framework for a new
science of textual infrastructure: Cognitive affordances of technologies used for reading
and writing. Paper presented at the annual meeting of the Literacy Research Association,
Marco Island, FL.
Hartman, D.K., Morsink, P.M., & Zheng, J. (2010). From print to pixels: The evolution of
cognitive conceptions of reading comprehension. In E.A. Baker (Ed.), The new literacies:
Multiple perspectives on research and practice (pp. 131–164). New York, NY: Guilford.
Heitin, L. (2016, November 9). What is digital literacy? Education Week. Retrieved from
https://www.edweek.org/ew/articles/2016/11/09/what-is-digital-literacy.html
Hering, M. (2019, June 7). The iExplore series opens a whole new world for children’s non-
fiction books [Web log post]. Retrieved from https://edlab.tc.columbia.edu/blog/19905-
The-iExplore-Series-Opens-a-Whole-New-World-for-Childrens-Non-Fiction-Books
Hill, J., & Hannafin, M. (1997). Cognitive strategies and learning from the World Wide Web.
Educational Technology Research and Development, 45(4), 37–64.
doi:10.1007/BF02299682
Hobbs, R. (2017). Create to learn: Introduction to digital literacy. Hoboken, NJ: John Wiley &
Sons.
Hutchison, A., Beschorner, B., & Schmidt-Crawford, D. (2012). Exploring the use of the iPad
for literacy learning. The Reading Teacher, 66(1), 15–23. doi:10.1002/TRTR.01090
International Reading Association. (2002). Integrating literacy and technology in the curriculum
[Position statement]. Newark, DE: Author. Retrieved from
https://www.literacyworldwide.org/docs/default-source/where-we-stand/technology-
position-statement.pdf?sfvrsn=e04ea18e_6
Jang, B.G., & Henretti, D. (2019). Understanding multiple profiles of reading attitudes among
adolescents. Middle School Journal, 50(3), 26–35. doi:10.1080/00940771.2019.1603803
Jenkins, R. (2014). Social identity (4th ed.). London, UK: Routledge.
Johnson, M., & Skarphol, M. (2018). The effects of digital portfolios and Flipgrid on student
engagement and communication in a connected learning secondary visual arts
classroom. Retrieved from https://sophia.stkate.edu/maed/270/
Kanniainen, L., Kiili, C., Tolvanen, A., Aro, M., & Leppänen, P.H.T. (2019). Literacy skills and
online research and comprehension: Struggling readers face difficulties online. Reading
and Writing, 32(9), 2201–2222. doi:10.1007/s11145-019-09944-9
Kempe, A.-L., & Grönlund, Å. (2019). Collaborative digital textbooks—a comparison of five
different designs shaping teaching and learning. Education and Information
Technologies, 24, 2909–2941. doi:10.1007/s10639-019-09897-0
Kerr, D., Andrews, J.J., & Mislevy, R.J. (2017). The in-task assessment framework for
behavioral data. In A.A. Rupp & J.P. Leighton (Eds.), The handbook of cognition and
assessment: Frameworks, methodologies, and applications (pp. 472–507). Chichester,
UK: John Wiley & Sons.
Kiili, C., Coiro, J., & Hämäläinen, J. (2016). An online inquiry tool to support the exploration of
controversial issues on the internet. Journal of Literacy and Technology, 17(1/2), 31–52.
Kiili, C., Coiro, J., & Räikkönen, E. (2019). Students’ evaluation of information during online
inquiry: Working individually or working in pairs. Australian Journal of Language and
Literacy, 42(3), 167–183.
Kiili, C., & Leu, D.J. (2019). Exploring the collaborative synthesis of information during online
reading. Computers in Human Behavior, 95, 146–157. doi:10.1016/j.chb.2019.01.033
Kiili, C., Leu, D.J., Utriainen, J., Coiro, J., Kanniainen, L., Tolvanen, A., ... Leppänen, P.H.T.
(2018). Reading to learn from online information: Modeling the factor structure. Journal
of Literacy Research, 50(3), 304–334. doi:10.1177/1086296X18784640
Kingsley, T., & Tancock, S. (2014). Internet inquiry: Fundamental competencies for online
comprehension. The Reading Teacher, 67(5), 389–399. doi:10.1002/trtr.1223
Kingston, N.M. (2008). Comparability of computer- and paper-administered multiple-choice
tests for K–12 populations: A synthesis. Applied Measurement in Education, 22(1), 22–
37. doi:10.1080/08957340802558326
Kinzer, C.K., & Leander, K.M. (2003). Reconsidering the technology/language arts divide:
Electronic and print-based environments. In D. Flood, D. Lapp, J.R. Squire, & J.M.
Jensen (Eds.), Handbook of research on teaching the English language arts (pp. 546–
565). Mahwah, NJ: Erlbaum.
Kong, Y., Seo, Y.S., & Zhai, L. (2018). Comparison of reading performance on screen and on
paper: A meta-analysis. Computers & Education, 123, 138–149.
doi:10.1016/j.compedu.2018.05.005
Labbo, L.D., & Reinking, D. (1999). Negotiating the multiple realities of technology in literacy
research and instruction. Reading Research Quarterly, 34(4), 478–492.
doi:10.1598/RRQ.34.4.5
Landow, G.P. (1994). What’s a critic to do? Critical theory in the age of hypertext. In G.P.
Landow (Ed.), Hyper/text/theory (pp. 1–48). Baltimore, MD: Johns Hopkins University
Press.
Lankshear, C., & Knobel, M. (2003). New literacies: Changing knowledge and classroom
learning. New York, NY: Open University Press.
Lankshear, C., & Knobel, M. (2006). New literacies: Everyday practices and classroom learning
(2nd ed.). New York, NY: Open University Press.
Lankshear, C., & Knobel, M. (2007). Sampling “the new” in new literacies. In M. Knobel & C.
Lankshear (Eds.), A new literacies sampler (pp. 1–24). New York, NY: Peter Lang.
Lankshear, C., & Knobel, M. (2011). New literacies: Everyday practices and social learning
(3rd ed.). New York, NY: Open University Press.
Larson, L. (2010). Digital readers: The next chapter in e-book reading and response. The
Reading Teacher, 64(1), 15–22. doi:10.1598/RT.64.1.2
LaRusso, M., Kim, H.Y., Selman, R., Uccelli, P., Dawson, T., Jones, S., ... Snow, C. (2016).
Contributions of academic language, perspective taking, and complex reasoning to deep
reading comprehension. Journal of Research on Educational Effectiveness, 9(2), 201–
222. doi:10.1080/19345747.2015.1116035
Lauer, C. (2009). Contending with terms: “Multimodal” and “multimedia” in the academic and
public spheres. Computers and Composition, 26(4), 225–239.
doi:10.1016/j.compcom.2009.09.001
Lawless, K.A., & Kulikowich, J.M. (1996). Understanding hypertext navigation through cluster
analysis. Journal of Educational Computing Research, 14(4), 385–399.
doi:10.2190/DVAP-DE23-3XMV-9MXH
Learning.com. (n.d.). Digital literacy assessment. Retrieved from https://www.learning.com/dla
Lee, K.M., Park, S., Jang, B.G., & Cho, B.-Y. (2019). Multidimensional approaches to
examining digital literacies in the contemporary global society. Media and
Communication, 7(2), 36–46. doi:10.17645/mac.v7i2.1987
Léger, P.-M., An Nguyen, T., Charland, P., Sénécal, S., Lapierre, H.G., & Fredette, M. (2019).
How learner experience and types of mobile applications influence performance: The
case of digital annotation. Computers in the Schools, 36(2), 83–104.
doi:10.1080/07380569.2019.1601957
Leu, D.J., Kinzer, C.K., Coiro, J., Castek, J., & Henry, L.A. (2013). New literacies: A dual-level
theory of the changing nature of literacy, instruction, and assessment. In D.E. Alvermann,
N.J. Unrau, & R.B. Ruddell (Eds.), Theoretical models and processes of reading (6th ed.,
pp. 1150–1181). Newark, DE: International Reading Association.
Liaw, M.L., & English, K. (2017). Technologies for teaching and learning L2 reading. In C.A.
Chapelle & S. Sauro (Eds.), The handbook of technology and second language teaching
and learning (pp. 62–76). Hoboken, NJ: John Wiley & Sons.
Lim, H.J., & Jung, H. (2019). Factors related to digital reading achievement: A multi-level
analysis of international large-scale data. Computers & Education, 133, 82–93.
doi:10.1016/j.compedu.2019.01.007
List, A., & Alexander, P.A. (2017). Analyzing and integrating models of multiple text
comprehension. Educational Psychologist, 52(3), 143–147.
doi:10.1080/00461520.2017.1328309
List, A., & Alexander, P.A. (2019). Toward an integrated framework of multiple text use.
Educational Psychologist, 54(1), 20–39. doi:10.1080/00461520.2018.1505514
Lotherington, H., & Janson, J. (2011). Teaching multimodal and digital literacy in L2 settings:
New literacies, new basics, new pedagogies. Annual Review of Applied Linguistics, 31,
226–246. doi:10.1017/S0267190511000110
Lupo, S., Jang, B.G., & McKenna, M. (2017). The relationship between reading achievement
and attitudes toward print and digital texts in adolescent readers. Literacy Research:
Theory, Method, and Practice, 66(1), 264–278.
doi:10.1177/2381336917719254
Magliano, J.P., Higgs, K., & Clinton, J. (2019). Sources of complexity in narrative
comprehension across media. In M. Grishakova & M. Poulaki (Eds.), Narrative
complexity: Cognition, embodiment, evolution (pp. 149–173). Lincoln: University of
Nebraska Press.
Mangen, A., Walgermo, B.R., & Brønnick, K. (2013). Reading linear texts on paper versus
computer screen: Effects on reading comprehension. International Journal of
Educational Research, 58, 61–68. doi:10.1016/j.ijer.2012.12.002
Mann, W., O’Neill, R., & Thompson, R. (2018). Online reading research project. Retrieved
from http://www.ssc.education.ed.ac.uk/research/onlinereading/
Mayer, R.E. (2001). Multimedia learning. New York, NY: Cambridge University Press.
McCarthey, S.J., & Moje, E.B. (2002). Identity matters. Reading Research Quarterly, 37(2),
228–238. doi:10.1598/RRQ.37.2.6
McKenna, M.C. (2006). Introduction: Trends and trajectories of literacy and technology in the
new millennium. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.),
International handbook of literacy and technology (Vol. 2, pp. xi–xviii). Mahwah, NJ:
Erlbaum.
McKenna, M.C., Labbo, L.D., Kieffer, R.D., & Reinking, D. (Eds.). (2006). International
handbook of literacy and technology (Vol. 2). Mahwah, NJ: Erlbaum.
McNamara, D.S., & Kintsch, W. (1996). Learning from texts: Effects of prior knowledge and
text coherence. Discourse Processes, 22(3), 247–288. doi:10.1080/01638539609544975
Media Education Lab. (2015). Mind Over Media: Analyzing contemporary propaganda.
Retrieved from https://propaganda.mediaeducationlab.com/node/1
Mills, K.A. (2010). A review of the “digital turn” in the New Literacy Studies. Review of
Educational Research, 80(2), 246–271. doi:10.3102/0034654310364401
Mills, K.A. (2016). Literacy theories for the digital age: Social, critical, multimodal, spatial,
materials, and sensory lenses. Bristol, UK: Multilingual Matters.
Mislevy, R.J. (2016). How developments in psychology and technology challenge validity
argumentation. Journal of Educational Measurement, 53(3), 265–292.
doi:10.1111/jedm.12117
Moje, E.B., Dillon, D.R., & O’Brien, D. (2000). Reexamining roles of learner, text, and context
in secondary literacy. The Journal of Educational Research, 93(3), 165–180.
doi:10.1080/00220670009598705
Moje, E.B., Young, J.P., Readence, J.E., & Moore, D.W. (2000). Reinventing adolescent literacy
for new times: Perennial and millennial issues. Journal of Adolescent & Adult Literacy,
43(5), 400–410.
Moos, D.C. (2014). Setting the stage for the metacognition during hypermedia learning: What
motivation constructs matter? Computers & Education, 70, 128–137.
doi:10.1016/j.compedu.2013.08.014
Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Educational
Psychology Review, 19(3), 309–326. doi:10.1007/s10648-007-9047-2
Morgan, H. (2015). Focus on technology: Creating and using podcasts promotes student
engagement and learning. Childhood Education, 91(1), 71–73.
doi:10.1080/00094056.2015.1001680
Morsink, P. (2019, May 24). The new NAEP assessment you probably haven’t heard of (but may
find interesting) [Web log post]. Retrieved from
https://www.literacyworldwide.org/blog/digital-literacies/teaching-with-tech/literacy-
daily/2019/05/24/the-new-naep-assessment-you-probably-haven-t-heard-of-(but-may-
find-interesting)
Moss, B., & Lapp, D. (Eds.). (2009). Teaching new literacies in grades 4–6: Resources for 21st-
century classrooms. New York, NY: Guilford.
Moss, B., & Lapp, D. (Eds.). (2010). Teaching new literacies in grades K–3: Resources for 21st-
century classrooms. New York, NY: Guilford.
Murphy, P., & Hall, K. (Eds.). (2008). Learning and practice: Agency and identities. Thousand
Oaks, CA: Sage.
Nash, B. (2019, March 21). Transactional theories of online reading comprehension [Web log
post]. Retrieved from http://howcuriousny.blogspot.com/2019/03/making-meaning-
online-some-thinking-for.html
National Assessment Governing Board. (2017). Reading framework for the 2017 National
Assessment of Educational Progress. Washington, DC: U.S. Government Printing Office.
National Center for Education Statistics. (2019). Digitally based assessments. Retrieved from
https://nces.ed.gov/nationsreportcard/dba/
National Council of Teachers of English. (2019). Definition of literacy in a digital age [Position
statement]. Urbana, IL: Author. Retrieved from https://ncte.org/statement/nctes-
definition-literacy-digital-age/
New London Group. (1996). The pedagogy of multiliteracies: Designing social futures. Harvard
Educational Review, 66(1), 60–93. doi:10.17763/haer.66.1.17370n67v22j160u
Noyes, J.M., & Garland, K.J. (2008). Computer- vs. paper-based tasks: Are they equivalent?
Ergonomics, 51(9), 1352–1375. doi:10.1080/00140130802170387
O’Brien, D.G., & Bauer, E.B. (2005). New literacies and the institution of old learning. Reading
Research Quarterly, 40(1), 120–131. doi:10.1598/RRQ.40.1.7
O’Byrne, W.I., & McVerry, J.G. (2009). Measuring the dispositions of online reading
comprehension: A preliminary validation study. In K.M. Leander, D.W. Rowe, D.K.
Dickinson, M.K. Headley, R.T. Jiménez, & V.J. Risko (Eds.), 58th yearbook of the
National Reading Conference (pp. 362–375). Oak Creek, WI: National
Reading Conference.
OECD. (2015). Students, computers and learning: Making the connection. Paris, France: Author.
OECD. (2017). PISA 2015 collaborative problem solving framework. Paris, France: Author.
Pieschl, S., Stahl, E., & Bromme, R. (2008). Epistemological beliefs and self-regulated learning
with hypertext. Metacognition and Learning, 3(1), 17–37. doi:10.1007/s11409-007-9008-
7
Putman, S.M. (2014). Exploring dispositions toward online reading: Analyzing the survey of
online reading attitudes and behaviors. Reading Psychology, 35(1), 1–31.
doi:10.1080/02702711.2012.664250
Raggi, V.L., & Chronis, A.M. (2006). Interventions to address the academic impairment of
children and adolescents with ADHD. Clinical Child and Family Psychology Review,
9(2), 85–111. doi:10.1007/s10567-006-0006-0
Rainie, L. (2005). The internet at school. Washington, DC: Pew Research Center.
RAND Reading Study Group. (2002). Reading for understanding: Towards an R&D program in
reading comprehension. Santa Monica, CA: RAND.
Reinking, D. (1998). Introduction: Synthesizing technological transformations of literacy in a
post-typographic world. In D. Reinking, M.C. McKenna, L.D. Labbo, & R.D. Kieffer
(Eds.), Handbook of literacy and technology: Transformations in a post-typographic
world (pp. x–xxxii). Mahwah, NJ: Erlbaum.
Reinking, D., McKenna, M.C., Labbo, L.D., & Kieffer, R.D. (Eds.). (1998). Handbook of
literacy and technology: Transformations in a post-typographic world. Mahwah, NJ:
Erlbaum.
Rosenblatt, L.M. (1978). The reader, the text, the poem: The transactional theory of the literary
work. Carbondale: Southern Illinois University Press.
Rouet, J.-F., & Britt, M.A. (2011). Relevance processes in multiple document comprehension. In
M.T. McCrudden, J.P. Magliano, & G. Schraw (Eds.), Text relevance and learning from
text (pp. 19–52). Charlotte, NC: Information Age.
Sabatini, J., O’Reilly, T., & Doorey, N.A. (2018). Retooling literacy education for the 21st
century: Key findings of the Reading for Understanding initiative and their implications.
Princeton, NJ: Educational Testing Service.
Sabatini, J.P., O’Reilly, T., Halderman, L.K., & Bruce, K. (2014). Integrating scenario-based and
component reading skills measures to understand the reading behavior of struggling
readers. Learning Disabilities Research & Practice, 29(1), 36–43.
doi:10.1111/ldrp.12028
Salmerón, L., Cañas, J.J., Kintsch, W., & Fajardo, I. (2005). Reading strategies and hypertext
comprehension. Discourse Processes, 40(3), 171–191. doi:10.1207/s15326950dp4003_1
Salmerón, L., Strømsø, H.I., Kammerer, Y., Stadtler, M., & van den Broek, P. (2018).
Comprehension processes in digital reading. In M. Barzillai, J. Thomson, S. Schroeder, &
P. van den Broek (Eds.), Learning to read in a digital world (pp. 91–120). Amsterdam,
The Netherlands: John Benjamins.
Sawchuk, S. (2017, August 1). What we still don’t know about digital reading [Web log post].
Retrieved from
http://blogs.edweek.org/edweek/curriculum/2017/08/what_we_dont_know_digital_readin
g_literacy.html
Schirmer, B.R., Bailey, J., & Lockman, A.S. (2004). What verbal protocols reveal about the
reading strategies of deaf students: A replication study. American Annals of the Deaf,
149(1), 5–16. doi:10.1353/aad.2004.0016
Schwan, S., & Cress, U. (Eds.). (2017). The psychology of digital learning: Constructing,
exchanging, and acquiring knowledge with digital media. Cham, Switzerland: Springer.
Seaboyer, J., & Barnett, T. (2019). New perspectives on reading and writing across the
disciplines. Higher Education Research & Development, 38(1), 1–10.
doi:10.1080/07294360.2019.1544111
Selfe, C.L. (Ed.). (2007). Multimodal composition: Resources for teachers. Cresskill, NJ:
Hampton.
Serafini, F., & Gee, E. (Eds.). (2017). Remixing multiliteracies: Theory and practice from New
London to new times. New York, NY: Teachers College Press.
Singer, L.M., & Alexander, P.A. (2017a). Reading across mediums: Effects of reading digital
and print texts on comprehension and calibration. Journal of Experimental Education,
85(1), 155–172. doi:10.1080/00220973.2016.1143794
Singer, L.M., & Alexander, P.A. (2017b). Reading on paper and digitally: What the past decades
of empirical research reveal. Review of Educational Research, 87(6), 1007–1041.
doi:10.3102/0034654317722961
Snyder, S., & Huber, H. (2019). Computer assisted instruction to teach academic content to
students with intellectual disability: A review of the literature. American Journal on
Intellectual and Developmental Disabilities, 124(4), 374–390. doi:10.1352/1944-7558-
124.4.374
Special report: The changing face of literacy. (2016, November 9). Education Week. Retrieved
from https://www.edweek.org/ew/collections/changing-literacy/index.html
Spires, H.A., & Bartlett, M.E. (2012). Digital literacies and learning: Designing a path
forward (Friday Institute White Paper Series No. 5). Raleigh: Friday Institute for
Educational Innovation, North Carolina State University.
Spires, H.A., Himes, M.P., Paul, C.M., & Kerkhoff, S.N. (2019). Going global with project-
based inquiry: Cosmopolitan literacies in practice. Journal of Adolescent & Adult
Literacy, 63(1), 51–64. doi:10.1002/jaal.947
Stornaiuolo, A. (2018, December). Developing data literacy with adolescents: Supporting youth
as authors, architects, and interpreters of data. Paper presented at the annual meeting of
the Literacy Research Association, Indian Wells, CA.
Street, B. (2003). What’s “new” in New Literacy Studies? Critical approaches to literacy in
theory and practice. Current Issues in Comparative Education, 5(2), 77–91.
Strømsø, H.I., Bråten, I., & Britt, M.A. (2011). Do students’ beliefs about knowledge and
knowing predict their judgment of texts’ trustworthiness? Educational Psychology, 31(2),
177–206. doi:10.1080/01443410.2010.538039
Tanner, M.J. (2014). Digital vs. print: Reading comprehension and the future of the book.
iSchool Student Research Journal, 4(2), 6–13.
Tierney, R.J., & Pearson, P.D. (1983). Toward a composing model of reading. Language Arts,
60(5), 568–580.
Tucker-Raymond, E., & Gravel, B.E. (2019). STEM literacies in makerspaces: Implications for
learning, teaching, and research. New York, NY: Routledge.
Tzima, S., Styliaras, G., & Bassounas, A. (2019). Augmented reality applications in education:
Teachers point of view. Education Sciences, 9(2), Article 99.
doi:10.3390/educsci9020099
Wang, S., Jiao, H., Young, M.J., Brooks, T., & Olson, J. (2008). Comparability of computer-
based and paper-and-pencil testing in K–12 reading assessments: A meta-analysis of
testing mode effects. Educational and Psychological Measurement, 68(1), 5–24.
doi:10.1177/0013164407305592
Warburton, S., & Hatzipanagos, S. (Eds.). (2012). Digital identity and social media. Hershey,
PA: IGI Global.
Wargo, J.M. (2019). Sounding the garden, voicing a problem: Mobilizing critical literacy
through personal digital inquiry with young children. Language Arts, 95(5), 275–285.
White, A. (2016). Using digital think-alouds to build comprehension of online informational
texts. The Reading Teacher, 69(4), 421–425. doi:10.1002/trtr.1438
Wolf, M. (2010). Cassandra’s thoughts about reading and time. Perspectives on Language and
Literacy, 36(1), 39–40.
Wolf, M. (2018). Reader, come home: The reading brain in a digital world. New York, NY:
Harper.
Wolf, M., & Barzillai, M. (2009). The importance of deep reading. Educational Leadership,
66(6), 32–37.
Yang, K.C.C. (2019). Reality-creating technologies as a global phenomenon. In K.C.C. Yang
(Ed.), Cases on immersive virtual reality techniques (pp. 1–18). Hershey, PA: IGI Global.
Zakon, R.H. (2018). Hobbes’ internet timeline v25. Retrieved from
https://www.zakon.org/robert/internet/timeline/
Zhou, M., & Lam, K.K.L. (2019). Metacognitive scaffolding for information search in K–12 and
higher education settings: A systematic review. Educational Technology Research and
Development, 67(6), 1353–1384. doi:10.1007/s11423-019-09646-7
Submitted July 6, 2019
Final revision received January 4, 2020
Accepted January 7, 2020
JULIE COIRO is an associate professor in the Alan Shawn Feinstein College of Education and
Professional Studies at the University of Rhode Island, Kingston, USA; email jcoiro@uri.edu.
Her research interests include the instruction and assessment of online reading comprehension,
collaborative knowledge building during inquiry, and effective practices for technology
integration and professional learning.
Figure 1. A Multifaceted Heuristic to Characterize the Spectrum of Digital Reading
Experiences