“I Like to Read, but I Know I’m Not
Good at It”: Children’s Perspectives
on High-Stakes Testing in a
High-Poverty School
ELIZABETH DUTRO
& MAKENZIE SELLAND
University of Colorado
Boulder, Colorado, USA
ABSTRACT
A significant body of research articulates concerns about the current emphasis on
high-stakes testing as the primary lever of education reform in the United States.
However, relatively little research has focused on how children make sense of the
assessment policies in which they are centrally located. In this article, we share
analyses of interview data from 33 third graders in an urban elementary school
collected as part of a larger qualitative study of children’s experiences with literacy in
a high-poverty classroom. Our analysis of the assessment-focused interviews addressed
two research questions related to children’s perspectives on high-stakes testing:
What patterns arise in children’s talk about high-stakes testing? What does chil-
dren’s talk about high-stakes testing reveal about their perceptions of the role of
testing in their school experiences and how they are positioned within the system of
accountability they encounter in school? Drawing on tools associated with inductive
approaches to learning from qualitative data as well as critical discourse analysis, we
discuss three issues that arose in children’s responses: language related to the adults
invested in their achievement; their sense of the stakes involved in testing; and links
between their feelings about test taking, perceptions of scores, and assumptions of
competence. We argue that children’s perspectives on their experiences with high-
stakes testing provide crucial insights into how children construct relationships to
schooling, relationships that have consequences for their continued engagement in
school.
As Sharon passed out test booklets to her third graders, the children sat
quietly, sharpened number-two pencils and one closed book of the child’s
choice lying on each of their desks. The book was there to pass the time
just in case a child finished a test section early. Molly, one of the few
children small enough to swing her legs as she sat at her desk, pushed her
long brown bangs out of her eyes, and smiled nervously at Sharon as the test
was placed in front of her. The test booklet, lying next to her Frog and Toad
Are Friends “I Can Read” chapter book, created an ironic juxtaposition given
what Molly had explained to me in a conversation just a few days before.
She had told me that she loved to read and was happy that she could finally
read well enough to have chapter books. But, she said, “I know I’m not
good at it. I do bad on those tests. When we take them, I just know it will be
another low points, so the books I like, like I know they are too low for those
tests.” Molly, like all students, negotiates high-stakes testing as part of her
school experience. As high-stakes testing has become an increasingly
visible, tangible part of children’s schooling experiences in the last decade,
it would follow that children are engaged in making their own sense of this
aspect of their education and its perceived consequences.
The arguments against the current focus on high-stakes testing as a
primary lever of educational reform in the United States have increased
in volume during the past decade—and by volume we do mean both
quantity and decibel level. Researchers in assessment and policy critique
both the narrow focus of such assessments and their validity (Kiplinger,
2008; Koretz, 2008; Linn, 2000; Ravitch, 2010; Shepard, 2000; Solano-
Flores, 2006). Indeed, Nichols and Berliner (2007) argue that recent
educational policy, as established by United States federal legislators in the
2002 reauthorization of the Elementary and Secondary Education Act (known as No
Child Left Behind, or NCLB), has fallen captive to Campbell’s Law, first
published in 1975 by social psychologist Donald Campbell, which states
that “the more any quantitative social indicator is used for social decision-
making, the more subject it will be to corruption pressures and the more
apt it will be to distort and corrupt the social processes it was intended to
monitor” (pp. 26–27). Recent reports suggest that Campbell’s Law may
be at work in many states, including high-profile claims of significant
gains on standardized achievement scores that have been at the center of
arguments for a business-model approach to increased accountability
(Lyons, 2011).
These arguments, based on analyses of various aspects and outcomes of
current policy, point to the potential of high-stakes tests to create unhealthy
environments for teaching and learning in many classrooms. Specifically,
research raises issues such as narrowing the focus of curriculum (e.g., Au,
2007; Hamilton, Berends, & Stecher, 2005), reorganizing classroom time to
prioritize tested subjects over nontested subjects (e.g., Pedulla et al., 2003),
reallocating funds toward instruction focused on tested topics (e.g., math-
ematics and English) as well as on students near the proficient cut score
(e.g., Hamilton & Berends, 2006), and inducing teachers to teach in ways
that contradict their own understandings of effective practice (Abrams,
Pedulla, & Madaus, 2003). Given concerns raised in the research literature,
we need to access as many vantage points as possible to better understand
what is at stake in test-driven reforms for teachers, schools, the public
school system in the United States and, most importantly, for children.
In this article, we argue that one crucial lens on the impact of policy is
children’s perspectives. Researchers have provided important insight into
teachers’ and parents’ perspectives on standardized testing and students’
academic and social positioning in relation to high-stakes assessments (e.g.,
Barksdale-Ladd & Thomas, 2000; Enciso, 2001; Popham, 2001; Roderick &
Engel, 2001), and some research has focused on how children and youth
make sense of the assessment policies in which they are centrally located
(e.g., Filer & Pollard, 2000; Wheelock, Bebell, & Haney, 2000a). Here, we
seek to build on this growing literature by drawing on interview data from
33 third graders in an urban elementary school to examine children’s talk
about their understandings of high-stakes testing, understandings that we
contend inform the narratives they can construct about their potential as
learners and, thus, hold consequences for children’s engagement and
investment in school (Madaus & Clarke, 2001; Perna & Thomas, 2009;
Vasquez-Heilig & Darling-Hammond, 2008).
THEORETICAL ORIENTATION: POSITIONING AND THE
DISCOURSES OF HIGH-STAKES TESTING
Although viewed by some policy makers as objective and neutral measures
of competence, high-stakes tests are far from neutral in the experiences of
children and teachers (Mathison & Freeman, 2003; McNeil, 2000; Whee-
lock et al., 2000a, 2000b; Zacher-Pandya, 2011). Indeed, our critical theo-
retical orientation assumes that all aspects of schooling—and certainly
testing—are constructed through discourses that are informed by and
embedded within power relations that benefit some individuals and groups
more than others. Because the scores children receive construct them as
either proficient or less than proficient, the policy emphasis on high-stakes
testing is implicated in how children construct identities in relation to
schooling. In Davies and Harré’s (1990) words, “binary logic constitutes the
world in hierarchical ways through its privileging of one term or category
within the binary...being positioned as one who belongs in or is defined
in terms of the negative or dependent term can lock people into repeated
patterns of powerlessness” (p. 107). They argue further that one’s sense of
the world is interpreted, in part, from who we consider ourselves to be. We
experience our “selves” as a personal production, while remaining unaware
of how our taking up of certain discursive practices can shape the narratives
we tell about ourselves and those others tell about us. Although there is
always room to accept, reject, or transform the positions offered us by the
institutions and communities of which we are a part (and, indeed, students
read and discuss their sense of their academic competence in many con-
texts in addition to testing environments), once a person ascribes to a
particular position as his or her own, that person begins to see the world in
terms of the storylines that are made available and relevant within the
discursive practices in which they are positioned (Harré & van Langenhove,
1998).
As several scholars have emphasized, the language surrounding the
reforms stemming from NCLB and Race to the Top, the Obama adminis-
tration’s more recent and wide-sweeping education reform incentive plan,
is embedded with unexamined assumptions about children and achieve-
ment, as well as race, class, and ability (e.g., Dutro, 2010; Campano, 2007;
Collins & Valente, 2010; Fusarelli, 2004; Hicks, 2005; Sleeter, 2004). As
critically oriented scholars argue, one of the results of current policy is the
inscription of an instrumental view of learning that points to acquisition of
discrete skills as the unambiguous remedy for the struggle of poor children
and children of color to thrive in some public schools. Further, the narrow
views of learning embedded in policy ignore the structural, material, and
social effects of poverty and institutionalized racism emphasized by
research across fields of sociology, anthropology, and economics (Danziger
& Haveman, 2002; Iceland, 2006; Newman, 2000; Rank, 2005).
Viewed through these lenses, the dichotomy of proficient/not proficient
categories embedded in high-stakes testing discourses leaves little room for
students to foster nuanced understandings of themselves as learners and,
indeed, no direct route to refute or reorganize the pronouncement of the
test. As performance on standardized tests becomes increasingly predomi-
nant in the discursive worlds of school, we wondered what stories children
would tell about themselves as learners as they became acculturated into
the discourses of high-stakes tests. Given the prevalence of dichotomized,
assumption-driven understandings of success and struggle, tools are neces-
sary to dig into and beneath discourses surrounding testing, uncovering
some of the assumptions that too often remain hidden in accountability
policy and become even more insidious in their invisibility.
RESEARCH ON HIGH-STAKES TESTING AND
STUDENT PERSPECTIVES
In this section we discuss two areas of research that centrally inform our
argument: research on high-stakes testing and studies drawing on student
perspectives to inform policy and practice.
High-Stakes Testing
High-stakes assessments are currently a crucial component of the institutional
power structures within which children negotiate schooling. Prominent scholars
of assessment have long argued that these measures provide just one
partial lens on what a child knows and is able to do (e.g., Koretz, 2008;
Linn, 2000; Ryan & Shepard, 2008; Shepard, 1995). Therefore, such mea-
sures should be viewed as providing only a particular kind of information
about children’s learning or a classroom teacher’s or school’s facilitation
of children’s learning. Research conducted since NCLB suggests that the
intense focus on high-stakes testing has led to overt cheating of the
system (e.g., Nichols, Glass, & Berliner, 2006; Vasquez-Heilig & Darling-
Hammond, 2008), covert corruption of quality teaching and learning
(e.g., Au, 2007; Wright & Choi, 2006), and concerns about outcomes for
some students, including increased dropout rates (e.g., Clarke, Haney, &
Madaus, 2000; Jacob, 2001).
Specifically, studies suggest that the emphasis on testing constrains
teachers in their attempts to provide students with rich learning experi-
ences (e.g., Abrams et al., 2003; Amrein & Berliner, 2002; Au, 2007; Ber-
liner, 2007; Hamilton et al., 2005; Madaus & Clarke, 2001; Mathison &
Freeman, 2003). For instance, an increased focus on reading and math, as
tested by standardized tests, has resulted in less instructional time spent on
subjects like science, social studies, and the arts, and more time engaged in
drilling students, especially poor children, in practice tests designed to
mimic the tests (Berliner, 2007; Ravitch, 2010). Madaus and Clarke (2001)
note that when teachers teach to the test, they pay attention not only to the
content of the test, but also to the form; thus, the form of the questions can
narrow the focus of instruction further. Indeed, some studies suggest that
high-stakes tests can undermine children’s opportunities to learn impor-
tant aspects of content. For instance, Hillocks (2002) studied state writing
tests and found that in some states, the high-stakes writing assessments were
driving instruction in ways that emphasized simplistic notions of genre,
purposes for writing, and process. Beyond the research that exposes the
unintended and problematic consequences of standardized tests, a recent
review of the literature on high-stakes testing concludes that there is not
convincing evidence that such testing has its intended effect of increasing
student learning (Nichols, 2007).
The potential of assessment to foster and reward less sophisticated
approaches to subject matter follows from some of the assumptions about
competency and achievement embedded in policies that rely on high-
stakes tests as the primary assessment of children’s learning, including
the assumption that competency can be reduced to observable behaviors
and captured through one paper-pencil assessment (Norris, Leighton, &
Philips, 2006). Ravitch (2010) argues that policies such as NCLB and Race
to the Top may actually lower education quality, as they are based on a
“technocratic approach to school reform that measures ‘success’ only in
relation to standardized test scores in two skill-based subjects” (p. 27).
Further, it follows that pressure to perform would be high for children and
teachers for, as Noddings (2002) writes, high-stakes testing assumes “that
we are not only trying to find out ‘how we are doing,’ but how each and
every child is doing....There is the clear implication that if kids and
teachers are working hard enough every child should pass” (p. 70). In
short, much research on the use and consequences of high-stakes testing
concurs with Nichols and Berliner’s (2007) argument that, as Campbell’s
Law would predict, overreliance on high-stakes tests as the mechanism of
reform is flawed and harmful to education at both individual and institu-
tional levels (p. 31).
With the recent implementation of Race to the Top, this emphasis on
high-stakes testing in the United States is far from waning. As states move
to comply with demands to pass legislation that makes it possible to link
teacher and principal evaluation, in part, to students’ achievements on
high-stakes tests, the scope of the consequences for such tests is increasing
across the country (Sawchuck, 2009). Beyond specific measures, the Race
to the Top effort increases the overall emphasis on longitudinal test score
data as a significant factor in determining effective schooling (Race to the
Top, 2011).
Attention to Student Perspectives
In turning to children’s perspectives on testing, we situate this study in the
work of other researchers who argue for the importance of student per-
spectives in informing research that intends to intervene in policy and
practice (e.g., Cook-Sather, 2002, 2006; Jones & Yonezawa, 2002; Kirshner
& Pozzoboni, 2010; Marquez-Zenkov, 2007; Mitra, 2003, 2006; Rubin &
Silva, 2003; Rudduck, Chaplain, & Wallace, 1996; Thiessen, 2007; Thiessen
& Cook-Sather, 2007). As Thiessen (2007) writes, attention to student
perspectives as a crucial lens through which to understand and improve
education is grounded in the conviction “that what matters in schools is
centred on students, their daily actions and interactions, and how they
make sense of their lives” (p. 6). Rudduck et al. (1996) argue that young
people are not only a crucial source of information on effective practices in
schools, but respond constructively and analytically when adults seek their
perspectives. We conducted our study in the spirit of those convictions,
including arguments surrounding the particular importance of heeding
the perspectives of children who attend school in districts in which large
numbers of students do not make it to high school graduation (McNeil,
Coppola, Radigan, & Vasquez-Heilig, 2008; Mitra, 2006; Rubin & Hayes,
2010; Marquez-Zenkov, 2007).
Indeed, researchers invested in learning directly from children and
youth consistently emphasize the opportunity presented by students’ views
to create an array of contextualized, on-the-ground changes to better
support students. Yonezawa and Jones (2009), in a study that engaged
students as co-researchers in educational reform in the San Diego Unified
School District’s high school reform initiative, note how a wide array of data
that includes input from students “allows teachers to develop a more
complete portrait of students’ needs and the kinds of classroom practices
that best support student learning and academic success” (p. 210). Simi-
larly, Mitra (2006) describes how the introduction of student voice efforts
can have positive effects for youth themselves, as well as for teachers and
schools, increasing positive student–teacher communication, influencing
school policy, and motivating teachers.
Although adolescents predominate in studies drawing centrally on
student perspectives, our analysis joins others who turn to the views of
younger children to inform understanding of policy and practice in schools
(e.g., Allodi, 2002; Pollard, Thiessen, & Filer, 1997). This work includes
some studies that have specifically investigated children’s perspectives on
high-stakes assessment (Haney & Scott, 1987; Thorkildsen, 1999). For
instance, Wheelock et al. (2000a) used student drawings of themselves
taking high-stakes tests in Massachusetts to show that a majority of sampled
students depicted themselves as anxious, upset, bored, or cynical about
testing. In a later article, they further concluded that student drawings
suggested that high-stakes testing “does not motivate all students in the
same way,” and that while some students may respond to the high stakes
of the test with increased motivation, others, often older, urban students,
“may simply give up,” seeing the tests less as “a challenge” and more as
a source of “intimidation and humiliation.”
METHOD
In this section, we discuss the contexts of the study, the participating
children, and our approach to data collection and analysis.
Research Context and Participants
Although the data we share in this article are drawn from interviews with
children, those conversations took place within an extended, qualitative
classroom-based study that allowed for close observation and contextual-
ized understandings of the children’s responses. Through Elizabeth’s close
collaboration with a third grade teacher, Sharon, the larger study investi-
gated the relationship between children’s uses of literacy in and out of
classrooms, the learning and understandings they demonstrated in the
classroom, and their experiences with policy implementation in the wake of
NCLB. (Makenzie collaborated on data analysis and writing after the study
was completed.) The study employed ethnographic methods to build inter-
pretations that were grounded in the everyday experiences of life in this
classroom, school, and neighborhood. Throughout the 2 years of the larger
study, Elizabeth was a participant-observer in Sharon’s classroom at least 2
days a week for approximately 3 hours per visit, with additional visits if
children were engaged in special activities (such as musical performances,
the talent show, the field trip, or field day) or if Sharon needed additional
adult help for a special activity. Visits included observations and interac-
tions with children—captured on a digital recorder or through field
notes—in the classroom and on the playground, in the lunchroom, after
school, and on the one field trip the class took to the city’s Museum of Art
in the first year of the study. Elizabeth also interacted with parents and
family members after school and at school events and witnessed Sharon’s
conversations with parents in these settings. In May of each school year,
Elizabeth conducted formal interviews, lasting approximately an hour, with
each of the children about his or her perspectives on reading and writing,
out-of-school activities, hobbies, friendships, descriptions of homes and
neighborhoods, and life, residential, and school histories. As we describe in
more detail later in this section, the interview also included a set of ques-
tions about children’s experiences with and perspectives on district and
statewide standardized testing, the analysis of which is the focus of our
discussion here.
Davis Elementary School is located in a large Midwestern city with one of
the highest rates of child poverty in the United States and high school
dropout rates of over 60% in both years of the study. Although many of the
city’s schools reflected long-established patterns of racial homogeneity in
neighborhoods, Davis Elementary was located in a racially diverse neigh-
borhood, with African American, Puerto Rican, White, multiracial, and
smaller numbers of Middle Eastern and Asian American families living in
close proximity to the school. Across the 2 years of the study, Sharon’s
classroom reflected the racial diversity of the school and neighborhood as
well as the poverty impacting most of the families in the school (100% of
children received free or reduced-price lunch). A total of 33 children partici-
pated in the study, including 18 in year 1 and 15 in year 2. Of the total
participating children, 17 self-identified as boys and 16 as girls; 7 self-
identified as African American, 5 as Puerto Rican, 14 as White, 4 as biracial,
1 as Lebanese, 1 as Trinidadian, and 1 as Guyanese (see Tables 1 and 2).
As reflected in Tables 1 and 2, the participating students represented
the full range of scoring categories on the state reading assessment. Seven-
teen students scored at proficient or above (two “advanced,” six “acceler-
ated,” and nine “proficient”), while 16 were identified as below proficient
(10 “basic” and 6 “limited”). Although our current analysis focuses on
themes and discourse patterns in the interview data, rather than detailed
portraits of individual students, we attempt to situate our illustrations of
children’s responses within our understanding of particular children’s aca-
demic positioning within the classroom.
Data Sources and Analysis
The interviews that provided the data for our analysis were contextualized
within data collected in the larger study, including field notes of observa-
tions, students’ written work, and audio recordings of instruction, informal
interactions with the teacher and students, and formal interviews with the
teacher and each student. Additional data sources relevant for our analyses
here include state and district assessment results in reading as well as
policy-focused documents Sharon received from the district and school,
including notices about assessment dates, professional development,
reminders to teachers that they needed to be following the guidelines set by
literacy coaches, and notices related to the scheduling of high-stakes state
tests and the school’s push to improve its progress toward Adequate Yearly
Progress (AYP), an element of NCLB that required schools to reach specific
achievement goals for the school as a whole and for specific subgroups of
students.
The assessment-focused questions we posed to children were included in
the spring interviews conducted with participating third graders across the
2 years of research. Children were interviewed individually in a quiet space
in the school. If an interview exceeded 1 hour, children took a break to get
a drink and have a small snack. At the time of the interviews, Elizabeth was
a familiar presence in the children’s classroom and most children were
enthusiastic participants in the one-on-one conversations. During the
second year of the study, Elizabeth also conducted two focus-group inter-
views with six of the children (three in each group), then fourth graders,
who had participated in year 1 of the study (identified in Table 1). Those
TABLE 1
Year 1, Student Self-Identified Race/Ethnicity and Reading Proficiency Information

Student Name   Self-Identified Race/Ethnicity    Designated Reading Proficiency (State Reading Achievement Test)
Diante         African American                  Basic
Ella           White                             Limited
Jade*          African American                  Limited
Samantha       White                             Basic
Jalal*         Lebanese                          Limited
Tara           White                             Proficient
Randy          White                             Basic
Molly*         White                             Basic
Julius*        African American/Puerto Rican     Advanced
Dion           African American                  Basic
Liana*         Puerto Rican                      Accelerated
Thomas         Puerto Rican/White                Limited
Mohinder       Guyanese                          Limited
Anjali         Trinidadian                       Proficient
Tiffany        White                             Basic
Noelle         White                             Proficient
Amy            White                             Proficient
Ricardo*       Puerto Rican                      Limited

* Children who were also interviewed in the spring of fourth grade during year 2 of the study.
interviews occurred in the spring and focused specifically on the children’s
recent experiences with the high-stakes state assessment (see Appendix A
for interview protocols). The fourth-grade students were chosen based on
two overlapping factors: first, children identified for follow-up conversa-
tions reflected a range of achievement categories and responses to high-
stakes assessment experiences (e.g., enthusiasm, discouragement); second,
because children’s participation in the follow-up conversation required
new parental consent and teachers’ cooperation to schedule common
meeting times during the school day, the six children also represented a
convenience sample of students from the first year of the study. Each
individual and focus-group interview was audio recorded and transcribed.
Our analysis of the transcribed interviews involved two primary tools that
were chosen based on two questions guiding the analysis: What patterns
arise in children’s talk about high-stakes testing? What does children’s talk
about high-stakes testing reveal about their perceptions of the role of
testing in their school experiences and how they are positioned within the
system of accountability they encounter in school? To address these ques-
tions, we drew on tools associated with inductive approaches to learning
from qualitative data as well as tools from critical discourse analysis.
Our inductive analysis involved reading and coding transcripts to iden-
tify patterns in children’s talk in relation to each of the interview ques-
tions pertaining to assessment, resulting in sets of categories of responses
mapped to each interview question. For instance, the question “Why do
you think you take these kinds of tests in school?” resulted in three
TABLE 2
Year 2, Student Self-Identified Race/Ethnicity and Reading Proficiency Information

Student Name   Self-Identified Race/Ethnicity    Designated Reading Proficiency (State Reading Achievement Test)
Alberto        Puerto Rican                      Proficient
Aisha          African American                  Accelerated
Lebron         African American                  Proficient
Ruby           African American                  Proficient
Allison        White                             Basic
Tina           White                             Proficient
Donovan        White                             Proficient
Antonio        Puerto Rican                      Accelerated
Edgardo        Puerto Rican/White                Accelerated
Jesse          African American/White            Accelerated
Rihanna        African American                  Basic
Zachary        White                             Basic
Ron            Puerto Rican                      Basic
Hillary        White                             Advanced
Travis         White                             Accelerated
primary categories, captured by the following phrases: “find out what I
know,” “see if I learned it,” and “determine grade placement.” In addi-
tion, we also paid close attention to responses that were surprising
and/or pushed back on or challenged some of the patterns that arose.
For instance, an anomalous response to the interview question we cite
above, and one that we found important, came from Tyrone, a child whose
family moved often and who had been absent from school almost half of
both second and third grade. He responded, “That’s how they keep track
of where you at. It tells the name of the school right on the paper.”
Although our discussion of the inductive analysis foregrounds themes
identified in the data, a different perspective, like Tyrone’s, was certainly
instructive in considering the relationship between testing and other
aspects of a child’s experiences in the classroom.
Our theoretical assumption that relations between discourse and power
are central in the schooling experiences of children led us to our second
analytic tool: critical discourse analysis (CDA). CDA attempts to investigate
the sociocultural aspects of language use within an explicit commitment to
discern and analyze issues of power embedded in the array of ways people
use speech and other forms of communication to represent the world and
convey meaning (Fairclough, 2010; Kress, 2009). We drew primarily on the
following analysis questions adapted from Fairclough (2010):
• What relational values do words have?
  ◦ Are there euphemisms used?
  ◦ Are there markedly formal or informal words?
  ◦ What social relationships does the language depend on and construct?
• What expressive values do words have? Is there a positive or negative connotation? Are there values apparent/do values contrast?
• Relational modality: Are there implicit authority claims and implicit power relations?
• How are pronouns (e.g., I, you, we, them, they) used? What identifications or separations do these create?
• Is there grammatical agency? If so, with whom is it located?
The CDA focused on the bounded statements of the children, rather
than on the full exchanges between Elizabeth and the children. In other
words, our focus was on the children’s use of language in their responses to
the interview questions, rather than on issues of turn-taking or the interplay of
the adult’s and the children’s language. As we read children’s responses, we
posed the analytic questions above, which resulted in notes on each tran-
script that we then compared across transcripts. We documented patterns
(and anomalies) in the data and then identified examples of children’s
responses that provided instructive illustrations of our primary findings. In
the following section, we share the results of our analysis.
RESULTS
The results of our analysis reflect both the critical discourse and thematic
analyses and center on three areas: children’s language about the adults
invested in their achievement, their sense of the stakes involved in testing,
and links between their feelings about test taking, perceptions of scores,
and assumptions of reading competence.
The Adults “Behind the Curtain”: A Nebulous, Ubiquitous “They”
In analyzing the language children used to talk about the purposes and
consequences of high-stakes testing, one of the most striking findings was
the ubiquitous presence of the pronouns “they” and “them” to designate
those who created the tests and determined students’ success or failure.
When asked why they think they take tests like the state reading test in
school, all of the children used these pronouns in their responses. In
contrast, in response to that question, none of the children used the name
of an adult in their school, for instance their principal or teacher (though,
as we discuss below, known adults were mentioned by a few children in
response to other questions). As Fairclough (2001) argues, the pronouns
“we,” “you,” and “they” are often cues to issues of power and
solidarity in discourse.
The use of the pronouns “they” and “them” suggests that the children
are highly aware that there are adults who exist “behind the scenes” who are
actively involved in high-stakes testing. As Julius, one of the two children in
the study who scored in the “advanced” category, said, “They need us to
take those tests so they know how we are doing in school. They take our
scores and they decide if we are making it.” Similarly, other children spoke
of unnamed adults who created tests and used them in various ways to make
decisions about students. For instance, in a response emphasizing the tests’
role in assessing what students have learned, Liana, whose reading test
score designated her “accelerated,” explained, “It helps them to know if we
have learned the stuff we’re supposed to learn.” Tara, a “proficient” reader,
responded, “They want to know if we read well enough.” Although also
pointing to test results as demonstrations of what students know, Rihanna,
whose scores located her in the “basic” category, said, “They can see if we
know how to do everything right,” suggesting a seemingly impossible bar
for students. Finally, sharing a response we heard from several children
(and also discuss in relation to a separate finding below), Ella, who scored
in the lowest achievement category, said, “I don’t really know. I guess they
need to see if we can go to the next grade.” These children’s responses
represent the range of ways that children employed “they” or “them” when
talking about the reasons why they take high-stakes tests in school.
All children expressed an understanding of high-stakes tests as a source
of information for adults about them and their peers (the “we” they
invoke), but their sense of what the adults are looking for varies from Julius’
and Liana’s nuanced understandings of scores as an indication of what
students have learned to Rihanna’s suggestion that the tests measure stu-
dents’ ability to get “everything right.” As confident, accomplished readers,
Julius and Liana experience tests as a context to demonstrate their knowl-
edge and skills, whereas Rihanna’s response suggests a discouragingly
impossible expectation for a child aware of her academic struggles in
relation to some of her peers. She invokes a stark dichotomy between right
and wrong, everything and nothing, and her scores locate her on the
“wrong” side of those binaries.
The use of the pronouns “they” and “them” also suggested that the
children point to adults well beyond those they know and with whom they
work closely at their school as involved in the creation or the consequences
of these assessments. Although several children did invoke adults in their
school when discussing their understanding of the stakes involved in high-
stakes tests, when asked who they were thinking of when they used the
words “they” or “them” (for instance, the follow-up question to Tara,
quoted above, was, “I noticed you said “they” want to know how well you
read. Who are they? Who is it who wants to know how well you read?”), 19
of the 33 children indicated that the adults invested in testing were located
outside of their school. As Travis said, “They are people who our tests go to.
I think they are downtown. I know that our scores are important to people
downtown and even the state! That’s why we have to do our best.” Like
Travis, who was designated an “accelerated” reader in the second year of
the study, other children who indicated that the adults invested in their
tests were located outside of their school invoked the district or the state in
their explanations. For instance, children framed their use of “they” as “the
people downtown,” “the superintendent,” “the people in charge of all of
the schools,” “even the state wants our grades,” and “I think the governor is
even involved.”
When asked to explain their use of “they” or “them,” six children
expressed uncertainty or confusion (for instance, shrugging or murmuring
“I really don’t know”), but the remaining eight children identified their
teacher, principal, or other known adult when asked to explain their use of
“they.” Tara, for instance, referred to her teacher and the school principal
in her response, explaining, “Ms. Blair can look at our scores and see how
we’re doing. It’s important that we do good. Ms. Jackson [the principal]
and Ms. Blair told us it’s really, really important.” Ron, a child designated
a “basic” reader in the second year of the study, said, “Ms. Blair and the
other teachers” and Anjali, a “proficient” reader from the first year,
responded, “Well, Ms. Jackson and Ms. Barnes [principal and vice princi-
pal] see our points on tests.”
Children’s use of pronouns indicates varying understandings about
those who are invested in the high-stakes tests they experience in school.
Although some children connected their testing experiences with adults at
their school, for many children the adults they connected to those tests
exist as nebulous authority figures, disconnected from the adults who they
know to be invested in their success in school. The children’s constructions
of the nebulous “they” appear connected to the limited information they
have gleaned about the disposition of their tests. Many of them understand
that the tests are not kept at the school, but are “sent away” to be “graded”
by adults who do not know them. Others clearly have not gleaned that
understanding, considering only the classroom or school level when
describing their understandings of how the tests are used. Still others use
“they” or “them” to invoke adults’ investment in their assessment perfor-
mance, but could not articulate (or, perhaps, did not feel comfortable
expressing) a sense of who those adults might be. The children who spoke
of adults outside of their school demonstrate a more complex understand-
ing of the nature of high-stakes testing in their district and state, pointing
correctly to investment in their test scores at both the district and state
levels. As we describe in the section below, although their conceptions of the
adults involved in testing may be varied and, at times, nebulous,
the children ascribe to those others tremendous power over their schooling.
Children’s Views of What Is at Stake in Their Scores
Children expressed several understandings related to the stakes involved in
the assessments they experienced in school. Their responses indicated an
understanding that the tests held weighty consequences for their school,
teachers, and their own school experiences. However, their responses also
reveal several misunderstandings about the relations between testing, their
individual and collective experiences, and the consequences of their per-
formance. Children expressed understanding of high-stakes tests as being
used to judge their own learning and performance primarily in response to
the questions: Why do you think you take these kinds of tests in school? Do
you know what happens after you take the tests? Are the scores on those
tests important? Why? Below, we discuss two primary themes in children’s
responses to those questions: personal consequences of test scores and the
stakes for their teachers and school.
Personal Consequences
The most prevalent personal consequence of testing the children raised
was grade retention. In the focus groups with the six fourth graders who
had participated in the research when they were in third grade, all of the
children indicated that the high-stakes tests they would take in fourth grade
would determine whether they would be allowed to progress to fifth grade.
The following is one illustrative exchange from the discussion with Julius,
Ricardo, and Jalal (who scored respectively at Advanced, Limited, and
Limited on the third grade reading test):
[researcher]: Do you think your scores on these tests are important? If so, why are
they important?
Julius: Oh yeah! We all need to do good on them cuz we want to go on to fifth grade.
Ricardo: Yeah, I just do not want to stay in fourth grade.
Jalal: I know. What if everybody went to fifth grade and you were just stuck back in
fourth grade?
Certainly, in the focus groups, children built on one another’s
responses. Julius raised the issue of grade placement and the other boys
responded to the assumed link he created between scores and retention.
However, their focus on retention was not an anomaly. Among the 33 third
graders interviewed across the 2 years, 15 children, from across scoring
categories, ascribed this weighty personal consequence to their scores. For
instance, Liana, who consistently expressed confidence about her achieve-
ment, said, “My score on the tests means that I will go to the next grade and
the next and that someday I can go to college and do whatever I want.
That’s what my mama says.” In contrast, Jade, who scored in the “limited”
category on the reading test, responded, “Since I don’t do so well on the
test, it maybe means that I will have to be behind other people and maybe
have to get more school in the summer or something.” Similarly, when
asked whether and why the tests are important, Dion, who was designated
a “basic” reader, said, “Yes they are, cuz, you know, you might not get to go
to fourth grade. Like, I had to do kindergarten twice.” Lebron, a “profi-
cient” reader, explained, “They’re important because we want to go to
fourth grade.”
We did not anticipate children’s assumed connection between their
performance on state tests and the specter of grade-level retention.
Although some urban districts, Chicago being a prime example, have tied
retention to assessment scores as part of reform initiatives in the past
decade, this was not the case in these children’s district. The children’s
scores were used to determine the school’s status in relation to AYP, had
consequences for the amount of oversight teachers would experience,
and could place a school on a list for potential district takeover or
closure. However, the scores from the third grade state reading tests or
the fourth grade state assessment held no punitive consequences for indi-
vidual students.
Institutional Consequences
In addition to describing the high stakes for them as individuals, children
also indicated some understanding of the tests as being consequential for
their school and their teachers. Twenty-one of the children spoke of some
sense of their teacher and/or principal as having a stake in students’
performance. For instance, Randy described in some detail his understand-
ing of these consequences when addressing the question of why the scores
on high-stakes tests are important: “Well, I know that Ms. Blair is nervous
about those tests. She tries to hide it, but when you have a whole assembly
to talk about trying our hardest, then you know that the teachers want us to
have high scores. I mean, even Mrs. Jackson talks a lot about how we can do
better than before and how our school can get good scores. She doesn’t
want her school to be in trouble, like, they will say ‘your kids need to spend
more time in school’ or ‘that school is not very smart.’ She wants us to do
our best and Ms. Blair does too.”
Jesse also talked about the school’s investment in children’s perfor-
mance on high-stakes tests: “Well, I know the school cares because I heard
that some schools could be shut down if kids don’t get good grades on
those tests. My cousin’s school did shut down and maybe it was because of
tests, I don’t know. They did have a contest at his school, before it was shut,
so kids could win a bike if they got the highest test in the school. If you even
did some better, you could get smaller stuff like a ball or even a video game.”
Although other children’s statements tended to be briefer, illustrative
responses invoking the institutional stakes of test scores included Ruby’s
sense of the consequences for her teacher (“Well, someone [the literacy
coach] comes to visit Ms. Blair sometimes and maybe they will think she
isn’t a great teacher if we don’t do good on the test”) and Donovan’s
understanding of the stakes for the school as a whole (“At an assembly, Ms.
Jackson said we should try our best to do really our best. The scores will tell
them if our school is teaching us good”).
Feelings About Tests, Knowledge of Scores, and
Assumptions of Competence
One of the concerns that both researchers (e.g., Nichols & Berliner,
2007) and social critics in education (e.g., Kohl, 2005; Kozol, 2005) raise
in relation to the current emphasis on high-stakes testing is children’s
emotional experiences with testing. Perhaps the relatively small number
of studies investigating this issue is due to the complexities involved in
accessing this aspect of testing. Wheelock, Bebell, and Haney (2000a)
approached this issue by asking children and youth to draw pictures of
their testing experience. This method allowed researchers to analyze
children’s visual representations as well as the children’s explanations
of what they depicted. They found that some children and youth drew
pictures conveying anxiety, frustration, anger, or dread in relation to
testing.
Although interviews certainly cannot fully access children’s experiences
of testing, what children say about their affective responses to those situa-
tions is an important part of forming deeper understandings about how
high-stakes testing functions for students. Thus, we included questions that
focused on children’s feelings related to high-stakes assessment. In contrast
to Wheelock et al., the children in our study did not express strong negative
emotions in relation to tests. When asked about their feelings about the
high-stakes tests they had recently experienced, children’s responses most
often indicated that they either liked or disliked the experience of test
taking and/or they expressed boredom in relation to testing. Of more
interest in our data were the connections children drew between their
knowledge of or assumptions about their scores and their sense of their
competence in reading. Because children’s expressions of their positive or
negative feelings about the tests related to their discussions of the link
between test scores and competence, we discuss both issues in this section.
As one might expect, when asked about their feelings about the tests,
most children began by framing their responses in terms of whether they
“liked” the experience of testing. As also might be assumed, many children
(25 of the total 33) described negative feelings about the act of sitting and
taking the tests. This response is not surprising—most would not presume
that sitting in a desk and taking a pencil-and-paper test is a fun or enjoyable
experience—and, thus, cannot be overinterpreted as holding negative con-
sequences for their schooling experiences. Indeed, children often framed
negative feelings about the tests in terms of boredom, although some, as we
discuss further below, did express discouragement about their scores and,
as discussed above, some children seemed anxious about their presumed
link between test scores and grade retention.
One of the most intriguing aspects of this category of response was that
five children, Julius, Liana, Noelle, Aisha, and Edgardo, all of whom scored
at “proficient” or above, expressed positive feelings about the experience of
taking high-stakes tests. For instance, Julius said, “I kind of like the tests. I’m
really good at them, that’s what my grandma says, and I know I get high on
them, so I think they are kinda fun.” Noelle shared, “I don’t mind them. They
can be interesting.” Similarly, Aisha said, “Well, they’re OK. I like doing that
kind of work and I finish early a lot and get to read my book” and Edgardo
explained, “What’s fun about them is that it’s a different kind of day—we get
mints and some extra recess breaks.” In their daily school experience, these
five children received many positive messages about their achievement—
from high grades on report cards to being asked to support peers during
assignments. Enciso (2001) also describes a boy from her research study who
enjoyed taking standardized tests and made a connection between taking
the tests and his process of pursuing goals or “levels” when playing his
favorite video games. Like the boy in her study, the children who spoke
positively about the tests were all children who expressed identities as high
achievers in school, which positioned them very differently in relation to
high-stakes testing than many of their classmates.
Although children’s general expressions of dislike for the testing expe-
rience may not be surprising or particularly noteworthy, it is important to
note that none of the children across the 2 years of the study who scored
less than proficient on any of the high-stakes tests they experienced
expressed positive feelings about their testing experiences. Indeed, to
varying degrees, children made connections between their understandings
of the construction and uses of high-stakes testing, their feelings about the
tests, and their perceptions of their own competence in literacy. Most of
these connections were brief and focused primarily on their perceptions of
their own scores on the tests. These responses were primarily connected to
the questions in the interview that specifically asked if the child knew about
their own scores, how they knew, and what they thought their score meant.
The degree to which children claimed to know their scoring level varied
widely and most children did not use the language employed by the district
or state to describe scoring levels (note that for the third grade students any
knowledge of their performance on the assessment would be focused on
district assessments, as state scores were not yet reported at the time of the
interviews).
Five children—Ella, Jalal, Mohinder, Allison, and Zachary—claimed to
not know how they scored on the tests, responding to the question with
“No,” “I don’t know,” or “I have no idea.” Those children were still asked
what they thought their score would mean and all five responded similarly:
that the scores reveal whether you are a good reader. In another part of
their interview, children were asked what it meant to be a good reader and
whether they thought they were a good reader. All five of the children who
said they did not know their test scores had indicated that they did not
consider themselves good readers. For these students, all of whom scored
below proficient, the tests could function to cement their perceptions of
themselves as “good” or “bad” readers. Although our analysis cannot
provide evidence about the consequences of these children’s views of the
meaning of test results, further analyses might illuminate whether knowl-
edge of their test scores would construct or reinforce children’s percep-
tions of themselves as “bad” readers.
Some of the children’s responses suggested that they were making
assumptions about their test scores based on other evidence they had
gathered about their reading competence. This theme in the responses
mapped to a group of 11 children who indicated relatively vague knowledge
of their own proficiency as measured by the tests. Most of these 11 children
said they were not sure how they knew about their scoring level, which makes
sense in light of the nonspecific nature of their knowledge about their score.
For instance, Ricardo captured this category of response, saying, “I don’t
think I did too good.” Similarly, Aisha responded, “I think I would be high on
that test,” sharing her correct assumption that her scores would suggest
strong reading skills. Demonstrating a similar assumption that his test score
would map to other evidence of his reading competence, Randy shared,
“Well, not too good, because I know I don’t read the best in this class.”
When the children we quote above were asked what their scores on the tests
meant for them, their responses generally echoed their answers
to the earlier question on what the tests mean for students more gener-
ally. For instance, Ricardo said, “It probably means I need to read better.”
Randy responded, “I hope it doesn’t mean that they won’t let me go to
fifth grade” and Aisha said, “It means that Ms. Blair can see how I’m
doing.” Six of the 11 children whose responses fell into this category
expressed concern or anxiety over grade retention or a need to improve
their reading.
The remaining children claimed to know how they scored on the assess-
ments they had taken earlier in the year (or in previous years). For children
who claimed to have knowledge of their scores, their answer to the question
of how they knew indicated parents or other family members, their teacher
(some children who scored at “proficient” or above indicated that Ms. Blair
had privately praised them), or uncertainty of a specific source (e.g., “I
think I saw it on a paper at home.” “Well, wasn’t it on my report card? I
think it was”). We included children in this category who spoke confidently
of their knowledge, whether or not their responses mapped to the language
used to report scores by the district or the state. In other words, we were
interested in their claims of that knowledge and how they interpreted their
claimed knowledge of their scores, rather than whether their claims indi-
cated “correct” understandings.
Children who claimed to know their scores often spoke with certainty
about their reading competence, whether in positive or negative terms. For
instance, Julius, an “advanced” reader, said, “Oh, yeah, I know I got a high
score. Everybody in my family and Ms. Blair were proud of me.” Jade, who was
in the “limited” scoring category, shared, “Yes, I did not get high on the test.
I am trying harder, but it’s hard, you know, to answer those questions.”
Tiffany, who scored at the “basic” level, replied, “I never do good on a
reading test. I like reading, but the tests, no way, I’m not good on those.”
Hillary, deemed an “advanced” reader, said, “Yeah, well, I don’t mean to
brag, but I get high on tests. I think I might be the highest reader in the class.”
Dion, who scored at the “basic” level, explained, “I get a bad grade on those
tests because I just think and think, but I can’t tell what is the right bubble to
fill in.” These responses represented the range of answers children provided.
As we discuss further below, what we found most striking in this area of
responses was, first, the overlap between children’s positive or negative
feelings about tests and their position within high- or low-proficiency
categories and, second, their dual assumptions that their sense of their
competence within the classroom would dictate their test score or that the
test score would reinforce their sense of their reading competence. In what
follows, we further discuss our interpretations as well as some of the impli-
cations of our analyses.
DISCUSSION AND IMPLICATIONS
Our analysis is certainly constrained by the small number of participants
and our focus on interview responses, which, though instructive, offer but
one of the vantage points needed to better understand how chil-
dren are situated within the current policy emphasis on high-stakes
testing. For instance, an important direction for further research and
analysis is to map children’s responses to high-stakes testing to their expe-
riences with literacy as documented in other data sources. We have begun
to do this by building case studies of the complex connections between
high-stakes testing and children’s larger experiences with literacy in this
and our other long-term classroom-based studies. As our analysis demon-
strates, however, examining children’s responses across the 2 years of this
qualitative study reveals some of the ways that children make sense of
high-stakes testing and, as we discuss in this section, raises crucial issues
and questions about how children are situated in the discourses sur-
rounding testing and accountability.
We were struck by our finding that only children who scored proficient
or above spoke positively about their testing experience or about their own
competence in relation to the test. In contrast to some findings from
conversations about testing with elementary students (Debard & Kubow,
2002), the children designated as low achieving in our study were not
optimistic about their ability to score highly on the tests they experienced.
In addition, unlike in Wheelock et al.’s (2000a) research examining stu-
dents’ drawings of their testing experience, the children in our study did
not respond to tests with anger. In part, this could be because the students
we interviewed were younger than the students in Wheelock et al.’s study.
Anger could signal some understanding of the tests as flawed or unfair,
whereas the children we interviewed appeared to assume the infallibility of
the test itself as well as the infallibility of nebulous adults who monitored
students’ scores. Studies of older students’ perceptions of testing experi-
ences (e.g., Paris, Herbst, & Turner, 2000; Wheelock et al., 2000a) suggest
that resistance to and resentment of high-stakes tests’ authority may build
as students continue to encounter them over years of school. However,
these young children’s acceptance of the tests’ authority to capture their
competence as readers and to determine their trajectory in school
was one of the most concerning findings in our interviews. If, from their
earliest experiences in classrooms, children are building storylines about
themselves as learners with potential to succeed in school, the authoritative
messages they receive about their achievement matter deeply in what
narratives are available to them. In districts in which dropout rates suggest
that many students already experience a fragile relationship to school,
research affirms our concern that students’ knowledge of their positioning
as less than proficient in key content areas such as reading may discourage,
rather than encourage, students’ investment in school (e.g., Wortham,
2006).
Along with other studies, ours challenges some of the assumptions
embedded in recent policy rhetoric about the importance of high-stakes
testing as the key accountability measure in schools (e.g., Madaus & Clarke,
2001; Vasquez-Heilig & Darling-Hammond, 2008; Wheelock et al., 2000a).
Although policies such as Race to the Top emphasize consequences for
adults (teachers, school and district administrators) based on students’
achievement scores, the children in our study perceived dire personal
consequences for low achievement, even in the absence of any such overt
penalties. In addition, although our data do not speak directly to tests’
impact on student motivation, the children’s talk about testing comports
with other researchers’ findings challenging the assumption that attaching
high stakes to tests motivates students to try harder (Nichols & Berliner,
2007; Ravitch, 2010; Wheelock et al., 2000a). It was only the already-high-
achieving readers in our study who responded with energy, confidence, or
enthusiasm to either the experience of taking the tests or their perception
of their scores. Students in the two lowest scoring categories held no
illusions about where they stood within the successful reader/
struggling reader positions available in the classroom, suggesting that some
children assume their test scores will reflect their already-established per-
ception of their competence as readers.
Children’s responses in these areas point to the importance of opening
up new and different storylines in which children can locate themselves as
they are supported in reading and other subject areas. For instance, chil-
dren need and deserve clarity about how high-stakes tests function for them
and for their schools. If students face personal stakes through test scores, as
they do in some school systems, they need to understand those conse-
quences. However, children also need to know when test scores hold no
punitive consequences for individual students, as was the case for the
children with whom we worked. It is simply not acceptable for children to
carry the dread of something as dire as grade retention when adults who
could eliminate those fears surround them each school day. In addition,
explicit conversations about growth and trajectories would challenge the
stark binaries some children seem to perceive in relation to achievement
and open up positive narratives about potential and progression toward a
series of achievable goals. Otherwise, as Ravitch (2010) argues, an overem-
phasis on test scores to the exclusion of other important goals of education
can undermine students’ desire to learn and lead to an unappealing outcome:
“higher test scores and worse education” (p. 230).
Finally, we point to the children’s use of pronouns to indicate ambigu-
ous, invested adults who waited in the wings to receive their test scores
and determined what those scores would mean for students, teachers,
and schools. First, we are interested in how their use of pronouns, par-
ticularly “they” and “them,” indicated children’s understandings of key
aspects of the testing process. For instance, many children rightly pointed
to adults at the district and state levels who cared about student scores
and who held significant power over schools and those in them. When
schools are holding assemblies to encourage strong performance on tests,
as well as sending many subtle and not-so-subtle messages about the
importance of test scores, it is no wonder that even young children may
sense pressure from entities located outside of their schools. As a number
of children’s responses suggested, even those adults who are putatively in
charge, such as teachers and principals, are under surveillance. Second,
the children’s use of pronouns suggested their sense of their positioning
within a power hierarchy in which nebulous others monitored their indi-
vidual achievement. This raises questions of what it means, in de
Charms’s (1977) words, to feel like a “pawn” in a system, experiencing
one’s actions “as determined by others and external circumstances” (p.
445). Research with urban students in high-poverty schools suggests that
students’ feelings of decreased control over important aspects of their
schooling impact their perceptions of their ability to forge a positive
path toward high school graduation (Diamond & Spillane, 2004; Lee, Smith, Perry, & Smylie, 1999; Marquez-Zenkov, 2007; Oakes, 2005; Valenzuela, 2005; Vasquez-Heilig & Darling-Hammond, 2008). Based on her work in an elementary school in California in which English language learners experienced many high-stakes tests across the school year, a situation she refers to as “overtesting,” Zacher-Pandya (2011) suggests that children
absorb and can become discouraged by their understanding that others
are deeply invested in their continually monitored and publicly displayed
performance on tests. Although the third graders in our study were quite
matter-of-fact in their conviction that unknown adults awaited their test
scores, they were also clearly building understandings and harboring mis-
understandings about the high-stakes accountability and oversight that
drives current school reform in the United States. Research linking high-
stakes testing with increased dropout rates among older students who are
similarly positioned in underresourced schools and neighborhoods
underscores the importance of seeking, heeding, and learning from
children’s perspectives early in their school experience. Here, too, we
argue that for children to construct positive narratives about school and
their trajectories as learners, they need explicit information about high-
stakes testing and its consequences, coupled with demonstrations from
adults charged with supporting them that test scores are fallible and
cannot paint full and accurate portraits of their potential to thrive in
school.
We wish Molly could have celebrated her reading of Frog and Toad, a
book that represented her tremendous growth as a reader. By the end of
third grade, Molly and many of her classmates had clearly absorbed the
import of the tests they took, including a sense of the high stakes involved
for them as individuals, their teachers and administrators, and their school
as an institution. We wish to be clear that it is not our intent to demonize
large-scale assessments per se. Shepard (2008), for instance, points to the
potentially useful role large-scale tests could play in providing generative data on student achievement if such assessments were substantively challenging (as opposed to limited to multiple-choice items), focused
on capacity building rather than coercion, and considered along with other
indicators of robust learning in schools and classrooms. What we do
contend, however, is that children’s responses to high-stakes assessment
underscore the serious cautions running through a significant body of
educational research about imbuing high-stakes outcomes with the ulti-
mate authority to determine how children, teachers, and schools are
positioned in the dichotomous categories of proficient/not proficient,
successful/unsuccessful, or adequate/inadequate that permeate discourses of
accountability in the United States. Statistics suggesting that more than half
of the children who participated in these interviews will not complete high
school underscore the urgent need to listen to and learn from students about
how accountability policies play out in their school lives.
REFERENCES
Abrams, L. M., Pedulla, J. J., & Madaus, G. F. (2003). Views from the classroom:
Teachers’ opinions of statewide testing programs. Theory Into Practice, 42(1),
18–29.
Allodi, M. W. (2002). Children’s experiences of school: Narratives of Swedish
children with and without learning difficulties. Scandinavian Journal of Educa-
tional Research, 46, 181–205.
Amrein, A. L., & Berliner, D. C. (2002). High-stakes testing, uncertainty, and
student learning. Education Policy Analysis Archives, 10(18). Retrieved May 17,
2011, from http://epaa.asu.edu/epaa/v10n18/
Au, W. (2007). High-stakes testing and curricular control: A qualitative meta-
synthesis. Educational Researcher, 36, 258–267.
Barksdale-Ladd, M. A., & Thomas, K. F. (2000). What’s at stake in high stakes
testing: Teachers and parents speak out. Journal of Teacher Education, 51, 384–397.
Berliner, D. (2007). The incompatibility of high-stakes testing and the development
of skills for the twenty-first century. In R. Marzano (Ed.), On excellence in teaching
(pp. 113–144). Bloomington, IN: Solution Tree Press.
Campano, G. (2007). Immigrant students and literacy: Reading, writing, and remember-
ing. New York: Teachers College Press.
Clarke, M., Haney, W., & Madaus, G. (2000, January). High stakes testing and high
school completion. Boston: Boston College, National Board of Educational Testing
and Public Policy.
Collins, K., & Valente, J. (2010, June 17). (Dis)abling the Race to the Top. Teachers
College Record. Retrieved April 11, 2011, from http://www.tcrecord.org (ID
Number: 16020)
Cook-Sather, A. (2002). Authorizing students’ perspectives: Toward trust, dialogue,
and change in education. Educational Researcher, 31(4), 3–14.
Cook-Sather, A. (2006). Sound, presence, and power: Exploring “student voice” in
educational research and reform. Curriculum Inquiry, 36(4), 359–390.
Danziger, S. H., & Haveman, R. H. (2002). Understanding poverty. Cambridge, MA:
Harvard University Press.
Davies, B., & Harré, R. (1990). Positioning: The discursive production of selves.
Journal for the Theory of Social Behavior, 20(1), 43–63.
Debard, R., & Kubow, P. (2002). From compliance to commitment: The need for
constituent discourse in implementing testing policy. Educational Policy, 16, 387–
405.
de Charms, R. (1977). Pawn or origin? Enhancing motivation in disaffected youth.
Educational Leadership, 34, 444–448.
Diamond, J. B., & Spillane, J. P. (2004). High-stakes accountability in urban elemen-
tary schools: Challenging or reproducing inequality? Teachers College Record,
106(6), 1145–1176.
Dutro, E. (2010). What ‘hard times’ means: Mandated curricula, middle-class
assumptions, and the lives of poor children. Research in the Teaching of English, 44,
255–291.
Enciso, P. (2001). Taking our seats: The consequences of positioning in reading
assessments. Theory Into Practice, 40, 166–174.
Fairclough, N. (2001). Language and power. London: Longman.
Fairclough, N. (2010). Critical discourse analysis: The critical study of language. New
York: Pearson.
Filer, A., & Pollard, A. (2000). Social world of pupil assessment: Processes and contexts of
primary schooling. New York: Continuum.
Fusarelli, L. D. (2004). The potential impact of the No Child Left Behind Act
on equity and diversity in American education. Educational Policy, 18(1), 71–
94.
Hamilton, L. S., & Berends, M. (2006, April 8–12). Instructional practices related to
standards and assessments (Rand Working Paper No. WR-374-EDU). Paper pre-
sented at the annual meeting of the American Educational Research Associa-
tion, San Francisco.
Hamilton, L. S., Berends, M., & Stecher, B. M. (2005, April). Teachers’ responses to
standards based accountability (Rand Working Paper Series). Santa Monica, CA:
RAND.
Haney, W., & Scott, L. (1987). Talking with children about tests: An exploratory
study of test item ambiguity. In R. D. Freedle & R. P. Duran (Eds.), Cognitive and
linguistic analyses of test performance (pp. 298–368). Norwood, NJ: Ablex.
Harré, R., & van Langenhove, L. (1998). Positioning theory: Moral contexts of inten-
tional action. London: Wiley-Blackwell.
Hicks, D. (2005). Class readings: Story and discourse among girls in working-poor
America. Anthropology & Education Quarterly, 36, 212–229.
Hillocks, G. (2002). The testing trap: How state writing assessments control learning. New
York: Teachers College Press.
Iceland, J. (2006). Poverty in America: A handbook. Berkeley: University of California
Press.
Jacob, B. (2001). Getting tough? The impact of high school graduation exams.
Educational Evaluation and Policy Analysis, 23, 99–121.
Jones, M., & Yonezawa, S. (2002). Student voice, cultural change: Using
inquiry in school reform. Journal for Equity and Excellence in Education, 35(3),
245–254.
Kiplinger, V. (2008). Reliability of large-scale assessment and accountability systems.
In K. Ryan & L. Shepard (Eds.), The future of test-based accountability (pp. 93–114).
New York: Routledge.
Kirshner, B., & Pozzoboni, K. (2011). Student interpretations of a school closure:
Implications for student voice in equity-based school reform. Teachers College
Record, 113(8), 1633–1667.
Kohl, H. R. (2005). Stupidity and tears: Teaching and learning in troubled times. New
York: New Press.
Koretz, D. (2008). Measuring up: What educational testing really tells us. Cambridge,
MA: Harvard University Press.
Kozol, J. (2005). The shame of the nation: The restoration of apartheid schooling in America.
New York: Crown.
Kress, G. (2009). Multimodality: A social semiotic approach to contemporary communica-
tion. New York: Routledge.
Lee, V. E., Smith, J. B., Perry, T. E., & Smylie, M. A. (1999). Social support, academic
press, and student achievement: A view from the middle grades in Chicago. Chicago:
Consortium on Chicago School Research.
Linn, R. L. (2000). Assessments and accountability. Educational Researcher, 29(2),
4–16.
Lyons, G. (2011). Michelle Rhee: Education reform huckster: The myth that
schools are best run like businesses is emphatically demolished. Retrieved May
17, 2011, from http://www.salon.com/news/politics/war_room/2011/04/06/
michelle_rhee_lyons
Madaus, G. F., & Clarke, M. (2001). The adverse impact of high stakes testing on
minority students: Evidence from 100 years of test data. In G. Orfield & M.
Kornhaber (Eds.), Raising standards or raising barriers? Inequality and high stakes
testing in public education (pp. 85–106). New York: The Century Foundation.
Marquez-Zenkov, K. (2007). Through city students’ eyes: Urban students’ beliefs
about school’s purposes, supports, and impediments. Visual Studies, 22(2), 138–
154.
Mathison, S., & Freeman, M. (2003, September 19). Constraining elementary teach-
ers’ work: Dilemmas and paradoxes created by state mandated testing. Education
Policy Analysis Archives, 11(34). Retrieved December 10, 2006, from http://
epaa.asu.edu/epaa/v11n34/
McNeil, L. M. (2000). Contradictions of school reform: Educational costs of standardized
testing. New York: Routledge.
McNeil, L. M., Coppola, E., Radigan, J., & Vasquez-Heilig, J. (2008). Avoidable
losses: High-stakes accountability and the dropout crisis. Education Policy Analysis
Archives, 16(3), 1–48.
Mitra, D. (2003). Student voice in school reform: Reframing student–teacher rela-
tionships. McGill Journal of Education, 39(2), 289–304.
Mitra, D. (2006). Student voice from the inside and outside: The positioning
of challengers. International Journal of Leadership in Education, 9(4), 315–
328.
Newman, K. S. (2000). No shame in my game: The working poor in the inner city. New
York: Vintage.
Nichols, S. L. (2007). High-stakes testing. Journal of Applied School Psychology, 23(2),
47–64.
Nichols, S. L., & Berliner, D. C. (2007). Collateral damage. Cambridge, MA: Harvard
Education Press.
Nichols, S., Glass, G., & Berliner, D. (2006). High stakes testing and student achieve-
ment: Does accountability testing increase student learning? Education Policy
Analysis Archives, 14, 1–175.
Noddings, N. (2002). High stakes testing and the distortion of care. In J. Paul, C. D.
Lavely, E. Cranton-Gingras, & L. Taylor (Eds.), Rethinking professional issues in
special education (pp. 69–82). New York: Ablex/Greenwood Press.
Norris, S. P., Leighton, J., & Philips, L. M. (2006). What is at stake and knowing the
content and capabilities of children’s minds: A case for basing high-stakes tests
on cognitive models. In R. R. Curren (Ed.), Philosophy of education: An anthology
(pp. 477–490). New York: Blackwell.
Oakes, J. (2005). Keeping track: How schools structure inequality. New Haven, CT: Yale
University Press.
Paris, S. G., Herbst, J. R., & Turner, J. C. (2000). Developing disillusionment:
Students’ perceptions of academic achievement tests. Issues in Education, 6,
17–46.
Pedulla, J. J., Abrams, L. M., Madaus, G. F., Russell, M. K., Ramos, M. A., & Miao, J.
(2003). Perceived effects of state-mandated testing programs on teaching and learning:
Findings from a national survey of teachers. Chestnut Hill, MA: National Board on
Educational Testing and Public Policy.
Perna, L., & Thomas, S. (2009). Barriers to college opportunity: The unintended
consequences of state-mandated testing. Educational Policy, 23, 451–479.
Pollard, A., Thiessen, D., & Filer, A. (Eds.). (1997). Children and their curriculum.
London: Falmer.
Popham, W. J. (2001). Teaching to the test. Educational Leadership, 58(6), 16–
20.
Race to the Top: Promoting innovation, reform, and excellence in America’s public schools.
(n.d.). Retrieved May 17, 2011, from http://www.whitehouse.gov/the-press-
office/fact-sheet-race-top
Rank, M. R. (2005). One nation, underprivileged: Why American poverty affects us all.
Oxford, UK: Oxford University Press.
Ravitch, D. (2010). The death and life of the great American school system: How testing and
choice are undermining education. New York: Basic Books.
Roderick, M., & Engel, M. (2001). The grasshopper and the ant: Motivational
responses of low achieving students to high stakes testing. Educational Evaluation
and Policy Analysis, 23(3), 197–227.
Rubin, B. C., & Hayes, B. (2010). “No backpacks” vs. “drugs and murder”: The
promise and complexity of youth civic action research. Harvard Educational
Review, 80, 149–175.
Rubin, B. C., & Silva, E. (Eds.). (2003). Critical voices in school reform: Students living
through change. New York: RoutledgeFalmer.
Rudduck, J., Chaplain, R., & Wallace, G. (Eds.). (1996). School improvement: What can
pupils tell us? London: David Fulton.
Ryan, K., & Shepard, L. (2008). The future of test-based accountability. New York:
Routledge.
Sawchuck, S. (2009). NEA knocks administration on “Race to the Top.” Education
Week. Retrieved May 17, 2011, from http://blogs.edweek.org/edweek/
teacherbeat/2009/08/.html
Shepard, L. (2008). A brief history of accountability testing: 1965–2007. In K. Ryan
& L. Shepard (Eds.), The future of test-based accountability (pp. 25–46). New York:
Routledge.
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational
Researcher, 29(7), 4–14.
Shepard, L. A. (1995). Using assessment to improve learning. Educational Leadership,
52(5), 38–43.
Sleeter, C. (2004). Context-conscious portraits and context-blind policy. Anthropol-
ogy & Education Quarterly, 35(1), 132–136.
Solano-Flores, G. (2006). Language, dialect, and register: Sociolinguistics and the
estimation of measurement error in the testing of English language learners.
Teachers College Record, 108, 2354–2379.
Thiessen, D. (2007). Researching student experience in elementary and secondary
school: An evolving field of study. In D. Thiessen & A. Cook-Sather (Eds.),
International handbook of student experience in elementary and secondary school (pp.
1–76). New York: Springer.
Thiessen, D., & Cook-Sather, A. (Eds.). (2007). International handbook of student
experience in elementary and secondary schools. New York: Springer.
Thorkildsen, T. A. (1999). The way tests teach: Children’s theories of how much
testing is fair in school. In M. Leicester, C. Modgil, & S. Modgil (Eds.), Values,
culture, and education (pp. 61–79). London: Falmer.
Valenzuela, A. (2005). Leaving children behind: How “Texas style” accountability fails
Latino youth. Albany: State University of New York Press.
Vasquez-Heilig, J., & Darling-Hammond, L. (2008). Accountability Texas-style: The
progress and learning of urban minority students in a high-stakes testing
context. Educational Evaluation and Policy Analysis, 30(2), 75–110.
Wheelock, A., Bebell, D. J., & Haney, W. (2000a). Student self-portraits as test-
takers: Variations, contextual differences and assumptions about motivation.
Teachers College Record. Retrieved December 11, 2011, from http://www.tcrecord.
org/library (ID Number: 10635)
Wheelock, A., Bebell, D. J., & Haney, W. (2000b). What can student drawings tell us
about high-stakes testing in Massachusetts? Teachers College Record. Retrieved
December 11, 2011, from http://www.tcrecord.org/library (ID Number: 10634)
Wortham, S. E. F. (2006). Learning identity: The joint emergence of social identification
and academic learning. Cambridge, UK: Cambridge University Press.
Wright, W., & Choi, D. (2006). The impact of language and high-stakes testing
policies on elementary English language learners in Arizona. Education Policy
Analysis Archives, 14, 1–58.
Yonezawa, S., & Jones, M. (2009). Student voices: Generating reform from the
inside out. Theory Into Practice, 48(3), 205–212.
Zacher-Pandya, J. (2011). Overtested: How high-stakes accountability fails English lan-
guage learners. New York: Teachers College Press.
APPENDIX
Interview Protocol
The interviews followed a set of structured questions that were posed to
each child. However, the interviews were semistructured in the sense that
follow-up questions and conversation built from each child’s responses.
Introductory statement: I wanted to ask you some questions about some of the tests
you take in school. These questions are about the tests like the [state reading test] or
the [district reading test]; the ones where you have a booklet, and you read a passage
and then answer some questions and fill in the bubbles [provided further descrip-
tions, examples of the tests until the child seemed clear about the tests to
which the questions referred].
Questions posed to each child:
Why do you think you take these kinds of tests in school?
Do you know what happens after you take the tests?
Are the scores on those tests important? Why?
How do you feel about those tests? [Do you remember the state reading test
that you took last week? Can you describe the experience of taking the test?
What was it like for you?]
Do you know how you did on the tests? [Prompt: I mean, do you know how
you scored?] [If so] How do you know? [To all students] What do the scores
mean for you?
Additional questions asked of the fourth-grade follow-up students:
Was there anything different about the [state] test this year and the tests
like that one that you took last year?
What else did you learn about these kinds of tests in fourth grade that you
didn’t know in third grade?