Teachers College Record Volume 116, 020301, February 2014, 33 pages
Copyright © by Teachers College, Columbia University
0161-4681
Assessing Habits of Mind: Teaching to the
Test at Central Park East Secondary School
BRENT DUCKOR
San Jose State University
DANIEL PERLSTEIN
University of California, Berkeley
Background/Context: Educational researchers and policymakers have often lamented the
failure of teachers to implement what they consider to be technically sound assessment proce-
dures. In recent years, the belief that teachers are unwilling or unable to implement appropri-
ate assessment procedures has contributed to the rapid expansion of high-stakes, standardized
testing in schools. Supporters of No Child Left Behind (NCLB) have contrasted teachers’
assessment practices with standardized testing, arguing that teacher-created classroom assess-
ments lack the technical characteristics required to produce trustworthy measures of student
learning or compare large populations of students.
Research Question/Focus of Study: Through a case study of New York City’s Central
Park East Secondary School (CPESS), in the years when it served as a model for progressive
American school reform, Duckor and Perlstein demonstrate the usefulness of an alternative
to reliance on the technical characteristics of standardized tests for constructing and judging
assessments: teachers’ self-conscious and reasoned articulation of their approaches to learning
and assessment.
Research Design: In order to determine CPESS teachers’ assessment practices and the pro-
cess through which they were developed, Duckor and Perlstein conducted semi-structured oral
history interviews with a sample of CPESS teachers. They triangulated teachers’ recollections
through a content analysis of course assignments, rubrics, grading reports, and other artifacts
of assessment at CPESS. The sources of this data included published accounts of CPESS and
primary sources provided by teachers or uncovered in archival research.
Conclusions/Recommendations: Duckor and Perlstein conclude that when teachers are
given opportunities for genuine, shared reflection on teaching and learning and classroom
practices are tied to this understanding, fidelity to what they call the logic of assessment
offers a more promising framework for the improvement of schooling than current forms of
high-stakes, standardized accountability. Thus, instead of expecting teachers to rely on data
from standardized assessments or replicate features of standardized testing in their own
assessment practices, researchers, policymakers and teacher educators should promote fidelity to
the broader logic of assessment.
Educational researchers have long lamented the gap between the types of
assessments they develop and those common among teachers. “Over the
last century,” Rick Stiggins (2001) argues, “sound assessment” has failed
to enter into “the day-to-day practice of instruction” in American schools
(p. 5). Administrators’ and teachers’ assessment practices, researchers
argue, have contributed to inaccurate judgments about students’ knowl-
edge and limited the usefulness of educators’ feedback, thus hamper-
ing countless students from reaching their full academic potential (Linn,
2000; Meyer, 1996; Popham, 2004).
In recent years, the belief that teachers are unwilling or unable to im-
plement appropriate assessment procedures has contributed to the rap-
id expansion of high-stakes, standardized testing in schools. Supporters
of No Child Left Behind (NCLB) have contrasted teachers’ assessment
practices with standardized testing, arguing that teacher-created class-
room assessments lack the technical characteristics required to produce
trustworthy measures of student learning. Moreover, advocates of NCLB
argue that a lack of standardization protocols and procedures pre-
cludes the use of scores derived from teacher-constructed assessments
to compare large populations of students. While critics of the approach
to schooling promoted by NCLB (and now Race to the Top) have fre-
quently claimed that it negatively impacts teaching and learning, they
have rarely offered a persuasive alternative model for assessment itself.
Ironically, at the very moment when policymakers were moving to
make standardized testing a defining feature of K-12 schooling, a model
of teaching that relied on the logic of sound educational assessment was
being developed, with relatively little input from measurement experts
or policy elites, in a New York City public school. Situated in Harlem and
serving the area’s youth, Central Park East Secondary School (CPESS)
offered a progressive, constructivist pedagogy to economically and politi-
cally marginalized students. The school’s goals reached beyond conven-
tional questions of subject content mastery as measured by standardized
Regents exams to include readiness to graduate from high school, attend
a four-year college, find a productive, meaningful career, and engage
in civic life. In the last years of the twentieth century and the first years
of the twenty-first, CPESS served as the flagship of American secondary
school reform. School founder Deborah Meier’s The Power of Their Ideas
and Frederick Wiseman’s documentary High School II, along with the re-
ports of countless visitors, touted CPESS’s achievements, and educators
and reformers across the nation sought to replicate the school (Darling-
Hammond, Ancess, & Falk, 1995; Darling-Hammond, Ancess, & Ort,
2002; Due, 2002; Newmann & Associates, 1996; OUSD, 2000; Pearlman,
2002; Shah, Mediratta, & McAlister, 2009; Sizer, 1996).
The efforts of CPESS’s creators were facilitated by the policy and po-
litical environment of the times. In the 1980s and 1990s, New York City
public school chancellors, teacher union leaders, and reformers, along
with state officials, increasingly turned to teacher-led alternative pro-
grams and constructivist pedagogies as keys to improving urban educa-
tion. With a “waiver” from New York’s Regents diploma requirements,
CPESS and a growing number of small public schools were able to pursue
educational reforms that emphasized personalization, critical thinking,
real world experience, and respect for the individual child. CPESS’s in-
fluential backers included the Center for Collaborative Education and
the Coalition of Essential Schools, and supporters ranged from newspa-
per tycoon and Reagan administration insider Walter Annenberg to the
stalwart liberal senator from Massachusetts, Ted Kennedy.
In the decade and a half following CPESS’s 1985 founding, a teacher-
designed and -implemented assessment system played a central role at
the school. Learning by doing became synonymous with assessing by do-
ing. At countless meetings, CPESS teachers considered what the ways
students used their knowledge revealed about student progress and
achievement in the classroom. They debated ways that the interpreta-
tion and use of classroom data could serve students’ learning trajecto-
ries and development. In thinking through their data, CPESS teachers
were also attentive to fundamental issues in educational measurement
and assessment such as score generalizability, construct validation, and
rater reliability—without always using this “professional” language.
Demonstrating remarkable fidelity to the individual elements of sound
assessment (i.e., the use of well-defined learning targets, multiple item
types, and consistent scoring procedures), these teachers addressed the
elements’ relationship to one another, and to the vision of constructivist
teaching and learning that animated their school.
Not surprisingly, teachers who focused on particular students and the local
trajectories of their learning tended to produce classroom assessments
that appear weak by the technical criteria of standardized testing. For
instance, individual teachers cannot produce test results that allow for
comparisons among student populations beyond their classrooms. Still,
using Central Park East Secondary School as a case study, we argue that
policies driven by these concerns mistake the means used to ensure rigor
in standardized tests for the first principles of the science of sound assess-
ment. We offer an alternative model of how to conceptualize classroom
assessment, one that relies on what we call the logic of assessment.
By logic of assessment we mean the underlying principles of assessment
design (NRC, 2001). The development of any sound assessment begins
with a series of fundamental questions, one leading to the next. What do
I want students to learn? How do I want them to use that knowledge?
How can I discover whether they can do so? What kinds of evidence allow
for meaningful, consistent interpretations of student learning? Efforts
to adhere to this logic have led those developing standardized tests to a
number of crucial concepts, such as validity and reliability. But the tech-
niques used to address such concerns are themselves tied to particular
forms of student evaluation. In recent years policymakers have conflated
the means of producing sound standardized tests with the inherent char-
acteristics of all sound assessments. Focusing on the fundamental logic of
assessment rather than on particular techniques of standardized tests can
provide educators with a principled stance with which to frame critiques
of NCLB.
Moreover, a focus on the logic of assessment—instead of the psycho-
metric techniques for producing standardized scores—attends to the
entire classroom learning process, not just the measurable knowledge
that can be detected at a particular testing occasion. Such a framework
can help teachers to deepen their thinking about what and how they
are teaching and deepen their knowledge of what students are learning,
how they are learning it, and how they are able to use that knowledge.
Thus, formulating assessment in terms of its logic before attending to
its techniques has far greater potential than the type of assessment pro-
moted by NCLB to enrich teaching and learning. In their attentiveness
to the relationship of the individual elements of assessment to one an-
other, CPESS’s teachers offer an exemplary case of fidelity to the logic
of assessment.
METHODOLOGY AND DATA SOURCES
In order to examine how CPESS’s teachers came to understand the logic
of assessment and were able to make effective use of assessment data
generated within their school, this study combines the methodologies of
collaborative research and oral history. Both methodologies foreground
the perspectives and sensemaking of participants.
The research team included present and former New York public
school teachers who were instrumental in the development of CPESS’s
approach to teaching and learning, together with two university re-
searchers—one a historian of education, and the other an assessment
scholar who had previously taught at CPESS. The collaborative research
project included numerous in-person and web-based meetings over sev-
eral years.
Working iteratively, the team developed a semi-structured interview
guide that solicited data on a number of topics, including teachers’ ca-
reer paths, experience at CPESS, ideals about learning, and approaches
to teaching. One section of the guide focused specifically on assessment.
The interviews were designed both to reveal commonalities among teach-
ers and to attend to the uniqueness of individual experience. The histo-
rian of education on the research team, who has no personal connection
to CPESS, conducted the interviews, beginning with the core group of
teachers who had participated in designing the guide.
All interviews were conducted in person at locations chosen by infor-
mants and were audiotaped. Most lasted approximately one-and-a-half
hours. Oral historians have long recognized the need to design and
conduct interviews that elicit full recollection while discouraging bias.
To do so, we questioned teachers about events at CPESS rather than
asking them to recall their former attitudes and opinions (Gant, 1987).
Interviews started with broad questions and then narrowed to more
specific topics (Patton, 2002; Spradley, 1979). The initial assessment
questions avoided technical terms in order not to exaggerate teachers’
levels of technical understanding and engagement (Dougherty, 1999).
However, if teachers did not introduce technical terms on their own, we
probed their technical knowledge.
Based on data from the oral histories, together with archival and pub-
lished sources, we identified the group of CPESS teachers who had been
most influential in shaping the school in its formative years. We then
approached the CPESS teachers who were not part of the research team
to be interviewed; we were able to interview 8 of the 11 teachers we iden-
tified as most influential in shaping the school. This expanded set of
interviews produced most of the data used in this study.
We also constructed a database of more than 80 teachers who had taught
at CPESS, with their disciplines, institutional locations, and years of em-
ployment at the school. We then conducted semi-structured interviews
with a sample of these teachers to gain in-depth understanding of their
experiences. The choice of which teachers to interview reflected numer-
ous factors, including the recommendations of other teachers, the avail-
ability of contact information, and the desire to include a diverse group of
informants. These interviews largely followed the same
format as the earlier ones. That is, we posed questions from the protocol
to all interviewees, probing on responses based on recurrent themes,
particularly focused on their experience with Habits of Mind, portfolio
assessment, and other related classroom-level assessment practices.
The potential sampling bias in our study risks overstating the
importance of viewpoints held by more senior teachers at CPESS, many
of whom were founders of the school, served on our collaborative re-
search team, and worked closely with school leader Deborah Meier in
the early years (1985-1995). We addressed this, in part, by interviewing a
number of younger teachers who came to the school in later years (1995-
2002). Although we cannot claim that this sample was representative of
CPESS teachers, the data from them on the culture of the school and the
processes through which new teachers were inducted into it were consis-
tent across such differences as race, gender, and subject matter.
All interviews were transcribed. The two university researchers inde-
pendently analyzed the transcripts. The historian read for emergent
themes; the assessment expert read in terms of the technical characteristics
of assessment (constructs, items design, scorer calibration, rater reliabil-
ity, validation, et cetera). This procedure enabled us to combine induc-
tive and deductive approaches to data analysis (Eisner, 1991; Epstein &
Martin, 2005). Moreover, we analyzed the transcripts not only for the
themes they contained but also for the narrative structure and logic by
means of which informants connected one theme or topic to another (M.
B. Miles & Huberman, 1994). To do so, and in keeping with our collab-
orative approach, we relied on large extracts of text as units of analysis.
We triangulated oral history recollections through content analysis of
varied artifacts of assessment at CPESS. The sources of these data includ-
ed published accounts of CPESS, items saved by the teachers we inter-
viewed, and extensive archival research. Evidence from documents such
as rubrics, course syllabi and assignments, grading period reports, vid-
eotaped lessons, digital projects, graduation committee presentations,
portfolio documents, and public presentations by teachers produced
findings consistent with teachers’ accounts of the school’s “alternative
assessment” mission, viewpoint on standardized testing, and the role of
the Habits of Mind in shaping portfolio graduation aims and purposes
(Darling-Hammond et al., 1995; Duckor, 1999, 2001; Gold & Lanzoni,
1993).
Unless otherwise indicated, the oral history interviews are the source
of quotations throughout the article. We use pseudonyms to protect
the identity of the interview subjects. While we analyzed the entire in-
terview transcripts, most of the data came from the interview questions
that addressed the logic and artifacts of assessment at Central Park East
Secondary School.
THE ASSESSMENT TRIANGLE AND THE LOGIC OF ASSESSMENT
The National Research Council (2001)—representing those Stiggins
(2001) called the nation’s “measurement experts”—suggests the crucial
elements of any well-designed assessment system. “Every assessment,”
the Council argues,
is based on three interconnected elements: a theory of what stu-
dents know and how to develop competence in a subject domain
(cognition); tasks or situations used to collect evidence about stu-
dent performance (observation); and a method for drawing infer-
ences from those observations (interpretation). (p. 36)
Moreover, as the Council notes,
A crucial point is that each of the three elements of the assess-
ment triangle not only must make sense on its own, but also must
connect to each of the other elements in a meaningful way to
lead to an effective assessment and sound inference. (p. 49)
The National Research Council terms these three elements and their
interrelationship “the Assessment Triangle.”
At Central Park East Secondary School, a multifaceted theory of cog-
nition, the Habits of Mind, constituted the first pillar of the Assessment
Triangle. Five Habits of Mind formed the cornerstone of CPESS’s cur-
riculum, through which it sought to develop the intellectual engagement,
skepticism, and empathy it placed at the core of democratic thinking.
In keeping with its focus on active inquiry, these habits were cast as
questions:
How do you know what you know? (Evidence)
From whose point of view is this being presented? (Perspective)
How is this event or work connected to others? What causes
what? (Connection)
What if things were different? (Supposition)
Who cares? Why is this important? (Relevance)
According to CPESS educators, the Habits of Mind simultaneously re-
flected the essential questions of intellectuals and scholars working across
the range of academic disciplines, focused lessons on genuine, higher
order learning, and promoted the critical intelligence and problem solv-
ing necessary for democratic life. The shared public commitment to the
Habits of Mind not only allowed students to know what was expected of
them, but also allowed teachers to know what they could expect of each
other, “to agree not only on what to teach, but also,” in the words of
school founder Deborah Meier (1995), “on how their teaching and their
kids’ learning would be assessed” (pp. 49-50).
The Habits of Mind permeated lessons, classroom assignments, science
labs, Socratic seminars, and other academic exercises. They were also
evident at school plays and convocations, in conflict mediation sessions,
at internship debriefings, in the principal’s office, at staff meetings, and
so forth. Together, they constituted a deeply constructivist theory of stu-
dent cognition that drove teaching, learning, and assessment at CPESS.
Still, the Habits of Mind could apply to a range of pedagogical approach-
es. Teaching and learning at CPESS relied on a curriculum of projects and
problem solving to enact Deweyan ideals of democratic learning and life.
“If ideas, meaning, conception, notions, theories, systems are instrumen-
tal to an active reorganization of the given environment,” Dewey (1920)
argued, “they are reliable, sound, valid, good, true” (p. 128). Assessment,
according to this logic, does not merely confirm learning; the successful
display and application of knowledge is an integral element of knowledge
itself. Echoing Dewey and the larger constructivist tradition in education
(Popkewitz, 1998), the concept and logic of assessment at CPESS was inte-
grated into the culture of teaching and learning.
Students demonstrated mastery of the Habits of Mind through a body
of evidence that CPESS educators referred to as “graduation by portfo-
lio.” In the technical language of assessment, these portfolios constituted
the observations that formed the second pillar of the Assessment Triangle.
Portfolios served as records of individual students’ understanding of ma-
jor subject areas: math, science, English language arts, history, and art.
Each portfolio contained a set of student work referred to as “exhibi-
tions.” Paper-and-pencil tests, performance tasks, simulations, real-world
projects, and many other “items” were included as evidence of learning.
To demonstrate the development of their thinking, students often in-
cluded work from ninth grade alongside work products developed in
the eleventh and twelfth grades (Darling-Hammond et al., 1995).
Preparation of portfolios shaped instruction in all CPESS classes
throughout the year. Although the portfolios reflected different sub-
ject matter domains, all this disparate material and information was or-
ganized around the five “big ideas” articulated in the Habits of Mind.
Similarly, CPESS classes focused on the Habits of Mind along with sub-
ject matter content knowledge.
Students defended their portfolios before committees led by CPESS
teachers. These presentations both reinforced the notion that a cru-
cial aspect of knowledge was its use and facilitated teachers’ shared un-
derstanding of the Habits of Mind and how mastery of them could be
demonstrated. Using rubrics, exemplars, and scorer moderation tech-
niques, CPESS educators developed interpretations—the third pillar of the
Assessment Triangle—that met the technical requirements of a viable,
legitimate assessment system yielding valid and reliable inferences about
student achievement (see, e.g., Wilson & Sloane, 2000).
The “graduation by portfolio” assessment system provided at least
three levels of analysis of student work, each intended to yield a mean-
ingful inference about a particular student’s skills and proficiencies. At
the first level of analysis/inference, teachers provided assessments of stu-
dent progress in their courses, conceptualized in terms of students’
movement toward the production of portfolios. CPESS used
both letter grades and narrative evaluations of coursework to provide
stakeholders such as students, administrators, parents, and other teach-
ers with an emerging picture of the students’ progress towards mastery of
the school curriculum and the Habits of Mind. The narrative evaluations,
in particular, enabled teachers to frame assessments in terms of students’
ability to use the Habits of Mind. Statements such as “generally knowl-
edgeable, can make connections between the material we study and her
knowledge of the world in general,” and “she was able, through questions
in her oral presentation, to make a number of sophisticated arguments,
and make connections between situations of Black Americans in revo-
lutionary times and now” epitomized the role of the Habits of Mind in
assessment practices (Tyner-Mullings, 2008, p. 61).
At the second level of analysis/inference, when students submitted their
written portfolios to their advisor, a rubric was used to formally evaluate
the level of mastery demonstrated by the exhibitions—what assessment
researchers might call the “evidence” submitted. Evidence for mastery of the
Habits of Mind ranged from video documentaries to essays to lab reports
and all manner of project-based activities embedded in the curriculum.
CPESS’s scoring guides and rubrics were closely aligned with the Habits
of Mind, that is, the cognitive learning outcomes promoted by the school
community—across disciplines and divisions.
At the third level of analysis and inference, a graduation committee
independently evaluated the student’s mastery of the learning outcomes
by examining the written records contained in the portfolios and hearing
an oral presentation of the material in them. Recognizing that the full
sum of student achievement was unlikely to be accurately represented
at any particular time or event, graduation committees were settings in
which teachers could share concerns about the meaningfulness (“valid-
ity”) of inferences drawn from the limited samples of student work, as
well as the consistency (or “rater reliability”) of scores among committee
members (Darling-Hammond et al., 1995).
While CPESS teachers worked to uphold “technically” or psycho-
metrically sound principles in the interpretation of student evidence,
they also tried to see the student’s performance in the context of the
whole child (Chittenden & Wallace, 1991; Wolf, Bixby, Glen, & Gardner,
1991), knowledge of which depended on the relationship an individual
teacher had built with an individual student. Factors such as the student’s
socio-emotional well-being, family life, struggles with reading and writ-
ing, status in the community, and so forth were deemed meaningful to
making a judgment (Lieberman, 1995). CPESS teachers thus sought si-
multaneously to produce comparable assessments of students’ learning
and to situate learning in the unique situation of each student. While
the dilemmas CPESS teachers faced are especially apparent in school-
based performance assessments, the tension between an appreciation
of the unique situation and learning of any individual and a commit-
ment to validity, reliability, and generalizability in evaluating assessment
data should concern all educators (Kane & Mitchell, 1996; Moss, 1992;
Shavelson, Baxter & Gao, 1993). CPESS’s approach contrasts with the
disaggregation of student data in NCLB. The use of racial and other sub-
groups in NCLB might be a useful first step in attending to differences
among students but it still reduces the complexity of social, political, cul-
tural, and economic life to simplified and monolithic categories.
Graduation committee deliberations about portfolio defenses, as they
were called, were occasions not only for the assessment of individual
students but also for coordinating common standards of judgment and
reflection on the dilemmas of obtaining valid and reliable assessment
information. Together with administrators, staff, and parents, teachers
also reviewed sample portfolios, occasionally with outside experts, for
example, from Teachers College and Brown University, in professional
development sessions. Many times CPESS teachers and administrators
grappled with the problem of what measurement experts call item sam-
pling, that is, how to generalize about an overall level of student perfor-
mance based on potentially insufficient data. These struggles were par-
ticularly intense in cases where a student was on the “borderline” and not
clearly ready to pass her graduation committee without revision of the
academic work (Gold & Lanzoni, 1993). Again, the contrast with NCLB is
useful. Whereas the current assessment regime encourages teachers and
administrators to game the system by drilling students at the cusp of
passing scores in test-taking skills, CPESS promoted teacher accountabil-
ity through public discussion of students’ ability to use knowledge in the
worlds of college, work, and civic life.
A final level of inference warranted by insiders and outsiders was col-
lege admission and completion. In fact, year after year, more than 90%
of CPESS graduates enrolled in college, including many highly selec-
tive institutions (K. H. Miles & Darling-Hammond, 1998). In a study of
CPESS graduates, Bensman (2000) reported positive results for students
who went on to attend post-secondary institutions. In the parlance of
educational measurement, the predictive validity of CPESS’s graduation
“scores” could be inferred, in part, by their positive correlation with suc-
cess for graduates in college.
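For readers who want the standard formalization, predictive validity is conventionally quantified as the correlation between assessment scores and a later criterion measure. The expression below is a textbook definition offered purely as an illustration of the concept, not a computation reported in the CPESS record:

\[
r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}}
\]

Here each \(x_i\) would stand for a student’s graduation “score” (for example, a summary portfolio rating) and each \(y_i\) for a later criterion such as college persistence; a strongly positive \(r_{xy}\) is the quantitative warrant behind the inference sketched above.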
LISTENING TO TEACHERS’ TALK ABOUT ASSESSMENT FOR LEARNING
What did CPESS’s vision of assessment and school-based accountability
look like to its adherents? How was that vision similar to and different
from these teachers’ experience of more traditional approaches to test-
ing and grading? Why did they embrace a new culture of portfolio-based
assessment, even when they recognized the wider institutionalized an-
chors that discouraged pedagogical reform? In order to uncover teach-
ers’ thinking about these questions, this research relies on the voices of
CPESS teachers themselves.
By recasting these teachers’ vision of accountability in the language of
technical expertise in educational assessment and measurement, we run
the risk of ascribing values and motives that our subjects may not have
held and obscuring other motives that informed their work. To the ex-
tent possible, we address this challenge by letting CPESS teachers speak
for themselves in the text.
While many CPESS teachers, together with the MacArthur Foundation,
saw school founder Deborah Meier as a genius whose uniqueness pre-
cludes attempts to generalize from her example, the teachers themselves
have far more in common with other experienced public school teachers.
As Deborah Meier (1995) argued, “we are [not] more caring than other
teachers or other schools” (p. 62). Rather, “we have a structure and a style
that enables us to show our care effectively.” As researchers have noted,
school organizational structures (including class size, departmental divi-
sions, curriculum sequencing, and tracking – even use of physical space)
can facilitate or impede both teachers’ and students’ work. Teachers are
typically alone when they examine student work and think about student
performance (Little, Gearhart, Curry, & Kafka, 2003). CPESS, on the
other hand, was organized in order to maximize teachers’ collaborative
capacities to reflect on student growth. The school’s use of block sched-
uling, advisory, teacher-team meetings, and graduation committees en-
abled teachers to leave the isolation of their own classrooms and “buy
time” to think together about the meaning of student work and academic
achievement in the broader contexts of the school’s improvement.
CPESS’s structure and vision, rather than the unique attributes of
CPESS’s teachers, fostered open, critical, yet non-threatening discussions
among teachers through which they developed understanding of and
commitment to the school’s approach to teaching and assessment. Even
if the very decision to work at CPESS signals the distinctiveness of these
teachers, much of their thinking and practice resonates with that of their
peers in so-called unreformed schools.
CURRICULUM AND ASSESSMENT AT CPESS
CPESS built on the pedagogy and renown of the Central Park East el-
ementary schools, from which it drew much of its vision and many of its
students. Troubled that graduates of the elementary schools were con-
signed to traditional middle and high schools, Deborah Meier and like-
minded teachers established CPESS in 1985. The school’s “fundamental
aim,” according to its handbook for 11th and 12th graders, was “to teach
students to use their minds well.” CPESS’s vision of “authentic learning,”
educational researcher Linda Darling-Hammond and her associates
(1995) argue, was epitomized “when a student challenges himself with a
self-initiated question that he is driven to find the answer to” (p. 21). Or,
in the words of CPESS math teacher James W., “the mission of CPESS
[was] to support kids to become independent thinkers.”
CPESS envisioned active, engaged learning as a prerequisite for a life
of active, engaged democratic citizenship. “My ideas on teaching and
learning,” Deborah Meier reflected, “focus on ‘small d’ democratic val-
ues. I mean a respect for diversity, a respect for the potential of each
individual person, a respect for opposing points of view, and a respect
for intellectual vigor. My concern is with how students become critical
thinkers and problem solvers, which is what a democratic society needs”
(Fliegel, 1994).
“School,” Meier and Paul Schwartz (1995), who succeeded her as prin-
cipal, argued, “must be a place where students learn the habits of mind,
work, and heart that lie at the core of such a democracy. Since you can’t
learn to be good at something you’ve never experienced–even vicari-
ously–then it stands to reason that schools are a good place to experience
what such democratic habits might be” (p. 28).
CPESS’s vision owed more to John Dewey’s injunction (1900) that
“what the best and wisest parent wants for his own child; that must the
community want for all its children” (p. 7) than to contemporary mul-
ticulturalism. Unlike many of today’s reformers who conceptualize stu-
dents’ interest as grounded in the particularity of ethnic, racial, or gen-
der identities (Tate, 1995), CPESS was grounded in the conviction that
democratic education connected the particular interests of the individual
to the shared interests of the entire society. Still, to guide students to
what was “best and wisest,” the school did not rely on the transformative
potential of charismatic teachers recruited from elite colleges and uni-
versities (Dobbie & Fryer, 2009; Kopp, 2003) but rather focused on the
organizational conditions of students’ and teachers’ work. “We created a
structure,” Deborah Meier and Paul Schwartz observed, “in which people
. . . could think aloud together and jointly make decisions” (1995, p. 29).
CPESS’s creation was part of a broader campaign, led by educa-
tor Theodore Sizer and the Coalition of Essential Schools, to reform
American high schools. This movement was grounded in the conviction
that studying a few things well and deeply rather than covering many
topics superficially fosters an ongoing capacity to think and learn; that
authentic learning is characterized by the active solving of real problems;
that the role of the teacher is to guide such explorations rather than to
pour knowledge into student “vessels”; that the results of inquiry into
authentic problems should be displayed publicly; that all students are
entitled to a demanding and engaging education; and that for such an
education to take place, personalized teaching and learning and caring
relationships between teachers and students needed to displace the ano-
nymity of factory or shopping mall high schools. Eventually enrolling
some 450 7th through 12th graders, CPESS appealed both to educators
who argued that large high schools fostered student anonymity and dis-
engagement and to those who attributed student failure to a superficial
curriculum unconnected with thoughtfulness or obvious relevancy to stu-
dents’ lives.
The Coalition’s principles infused CPESS efforts. The seventh- and
eighth-grade U.S. history course was organized around the study of pow-
er rather than a narration of two centuries’ worth of events (Kirp, 1989).
“Because we were focused on big ideas,” adds English teacher Mary M.,
There would be a lot of different ways to look at subject matter. If we
were studying the government, we would focus on ways to engage
kids. Why should we study government? What’s important about
doing it? How different countries vary in their governments. If we
were studying government and then went to the United States, we
might read To Kill a Mockingbird and figure out what you might
learn about government from reading a novel, so there were lots of
different ways to get to a concept or an understanding. . . We really
always started with big ideas and developed activities in service of
the big ideas. There were broad content areas, but the Habits of
Mind really pushed learning as the central piece.
The learning process was no less important than content. According
to co-directors Deborah Meier and Paul Schwartz, CPESS educators “or-
ganized our curriculum and assessment around the idea that a person
in the habit of looking for answers to the five [Habits of Mind] questions
when presented with a novel situation is using his or her mind well”
(Meier & Schwartz, 1995, p. 30). “The whole idea,” math teacher Betty
B. would recall, “was that, for example . . . if you’re learning history, you
want to be doing it. Or if they’re truly learning how to write, are they
asked to write, and for which audience, and what kind of genre? Or if
you’re really studying science, you’re learning about science content, [but
also] who are you as a scientist.”
Social studies teacher Brian D. was initially hired at CPESS to teach a
world history course. Although he found the school deeply engaging, the
course itself left him cold. “Look,” he said to his colleagues,
I want to teach economics. I want to teach it as Habits of Mind to
answer these enduring questions—what is value? Who creates it?
How should wealth be distributed according to the perspectives of
classical economists like Smith, Ricardo, and Malthus? The Habits
of Mind can be used to answer fundamental questions about eco-
nomics. At the time the issue of rent control was really hot in New
York City. The Habits of Mind allowed me to make connections
between classic economics texts and the media reports of the rent
control debate. We’d read articles from different sources—The
New York Times, The Post, The Economist—dealing with the debate
going on in the city. Perspective: do media accounts represent
the landlords’ perspective? The workers’ perspective? The capi-
talist perspective? What in the texts suggests connections between
rent control and other social issues? Classroom discussions were
personal and lively: “So what if rents are raised? Who gets hurt?
Who are you in this debate? Are you advocating for landlords or
for renters? Are you advocating for your personal opinion and
interests or more globally based on some principle of economics?”
My colleagues were fairly agnostic around what topics or texts to
include in the course. You just had to deliver the goods—exhibi-
tions leading to portfolios around the Habits of Mind.
In the words of Darling-Hammond and her associates (1995), CPESS
and other Coalition schools promoted “high standards without standard-
ization” (p. 22). Such a goal did not eliminate the need for students to
master content or for teachers to cover traditional curriculum topics, but
it centered assessment on broader, more enduring, yet also more
amorphous matters in settings closer to practical life.
ASSESSING FOR STUDENT LEARNING
Portfolios formed the capstone of CPESS’s curriculum and assessment.
The school’s portfolio assessment process was inspired by a 1987 visit to
Walden III, an alternative public school in Racine, Wisconsin (Henderson
& Raywid, 1994). In order to graduate from CPESS, students had to pres-
ent a series of discipline-based portfolios to committees composed of a
teacher with subject expertise, the student’s advisor, another adult from
inside or outside of the school, and a student peer from a lower grade.
(At first 14 portfolios were required; later the number was reduced to 7).
Relying on a rubric that assessed the portfolio according to each of the
five Habits of Mind, the committee would deliberate and then share its
deliberations with the student.
At portfolio presentations, Mary M. recalls,
You have a humanities teacher if it’s a humanities portfolio. You
have the advisor; you have perhaps a parent, another student;
you have a diverse group who then have to talk and use evidence
from the rubric. “Well I don’t think she had enough evidence on
this particular thing.” “Well, yes she does. If you look on page 4 .
. . ” So there would be a discussion among the committee to eval-
uate the demonstration of mastery. You have to discuss it and
agree on where you think the presentation and portfolio falls.
At an early portfolio presentation, a student presented a study of the
effectiveness of various commercially marketed antacids. After the com-
mittee deliberated, Melissa T. told the student,
I gave the paper an 18 [out of 20]. I thought it was a wonderful
paper. It went well beyond the scope of the initial experiment; it
was clear that you knew what you were doing in the experiment;
and I loved the way it was written. . . . I gave you a 4 [out of 5] on
the presentation, because I was somewhat disappointed in your
ability to explain pH, although I felt that you handled the rest of
the presentation and questions quite well (Darling-Hammond
et al., 1995, p. 22).
Still, CPESS did not conceptualize portfolios as collections of a stu-
dent’s work in a given subject. Rather, as Mary M. explains,
It was a specific project that demonstrated your understanding
of the major concepts of that particular field. So a history portfo-
lio, or a science portfolio, would be a project in that field where
you entered into a period of discovery and research. You started
with a problem you wanted to find out about, or an issue or ques-
tion, and you researched it in a variety of ways using the tools of
that discipline, and you came out with some findings that you
presented to committee, so that it was a discussion. It’s not a col-
lection of objects. The objects represent a process. And you have
to demonstrate that you understand the major concepts of the
field in front of the committee that has examined the materials
that you gathered for them. You present an oral piece, not what
was in the paper, and you get questioned.
The rubric teachers developed to assess portfolios fostered their (and
thus students’) attentiveness to each of the Habits of Mind and to their
interrelationship. “If you give evidence for your opinion on the areas
of genocide around the world” in a portfolio presentation, Mary M. ex-
plains, “you have documentation as to why you think the Trail of Tears
belongs in that list or the Armenian genocide, or your sources—primary
and secondary—that you have used to support your points of view.”
The demands of the portfolio came to shape all the classes at CPESS.
“In the classes,” Mary M. recalls, “kids did what we called exhibitions,
which were projects on content specific topics. So there might be an exhi-
bition on the Fourth Amendment, and that would be a mock trial, with a
written piece, an oral piece, and a visual of some sort.” Moreover, as Lori
C. explains, “if the portfolio’s the final outcome, you gotta teach to the
final outcome. So part of my focus as a humanities teacher was what were
the expectations for the history portfolio and how can I build your capac-
ity to be able to meet those expectations.” The Habits of Mind framed
those expectations. “In history,” Lori C. would teach her students, “it’s
not just like, ‘write a country study and write about the population,’ like,
‘make a list of the things that are in the encyclopedia.’ It wasn’t that. You
needed to be able to use the Habits of Mind to be able to demonstrate
your understanding of a particular history event, time period. So part of
my training to my students was to be able to use the Habits of Mind to do
it for one of the events that we were studying.”
CPESS educators developed their approach to assessment to better ar-
ticulate their approach to teaching and learning. They thus had a strong
sense not only of what they were assessing but also of how and why they
were assessing it. “CPESS,” recalls English teacher Lori C., “was based
on formative assessment. . . . What we were doing was figuring out where
the kids were at, what did they need to know, how were you going to get
them and bridge the gap between where you wanted to get them with the
project and where they were in their learning and understanding.”
CPESS students had many opportunities to revise the projects that
constituted the bulk of their work before they received a final evaluation.
Most feedback was geared toward improving that work and developing
the skills and habits whose weaknesses it made manifest. The commit-
ment to formative rather than summative assessment (Bloom, Hastings,
& Madaus, 1971; Scriven, 1973; Shepard, 2000) was reflected in the
school’s version of “report cards” and how they were used. “At CPESS,”
Lori C. explains,
Your narrative report cards could be as long as three pages; they
definitely were two, and it was all narrative. It wasn’t like a lot
of checklists. . . . So I decided to teach the kids how to do nar-
rative report cards, and that was all a metacognitive process of
learning, of understanding what were the criteria, which chil-
dren need to be successful in the humanities classroom. Both
conceptually as well as critical thinking skills. Teaching children
that understanding then having them assess their own learning,
so they did it and it was an amazing process which I continued to
repeat over and over so that the kids could come away from my
class owning their own learning, as opposed to being reliant on
me for telling them and determining what their assessment was.
By the time they appeared before a graduation committee, students
had not only repeatedly revised the contents of their portfolios accord-
ing to Habits of Mind-based feedback, they had completed multiple
“exhibitions” in which they practiced the process of portfolio creation
and defense. “Multiple assessment opportunities,” as Darling-Hammond
et al. (1995) conclude, encouraged “students to learn to reflect on and
evaluate their own work and experiences. Because the requirements are
portfolio-based, the criteria are open and constantly discussed, and the
work is viewed as always in process and improving” (p. 32).
The school-wide insistence on Habits of Mind in teaching and assess-
ing students was manifest to James W. from his first encounters in math
class. What “most astounded” him about CPESS was
that the kids that I would get in 11th grade after four years at
CPESS were so fluent in the Habits of Mind. They could really
talk the talk. And they knew, for example, if there was a con-
versation in the class about something, and a kid would make a
statement about something, they would say, “well, what’s your
evidence?” The other thing they knew was they really under-
stood the rubric. Because we used this rubric all the way through
from 7th to 12th. . . . What their strengths and weaknesses were
and what they had to do. It gave a framework, not just for grad-
ing them, but for their development.
Before coming to CPESS, James W. had taught for many years at New
York’s Brandeis High School. He contrasted CPESS’s approach with his
earlier experience. At Brandeis,
It was all about homework, three tests where kids would have
to work a series of problems, and then at the end there would
be a final. The grade would be an average of the tests and the
homework and the final. At CPESS I rarely gave a test, so the
assessments were all pretty much based on student work, their
performance in class, their work on projects.
Science teacher Randall H. also distinguished CPESS’s approach to
assessment from the system at New York’s George Washington High
School, where he had earlier taught. CPESS, Randall argued,
tried to provide authentic assessment. Authentic assessment for
me really tries to understand the individual in figuring out how
much they’ve advanced and what they really know. When you try
to put a test of bubble sheets in a scanner like we did a bunch at
George Washington, that’s more cursory assessment. The pur-
pose is to be able to educate a whole bunch of kids really quickly
and figure out what you said they were supposed to get and re-
port that back to the city, who can report that back to the county,
who can report that back to the state, who can report that back
to the feds. I think authentic assessment is much more about
tapping into a child, trying to see where they are, and trying to
help them reach their full potential by figuring out where you
can continue to help them grow.
No less than their students, CPESS teachers were afforded many op-
portunities to develop and internalize their understanding of the school’s
approach. Assessment, as Darling-Hammond and her associates argue
(1995), was “embedded . . . in an organization structured for caring and
striving for academic rigor” (p. 22). Eleventh- and twelfth-grade teachers
spent only about 12 hours per week in traditional classes; they devoted
much of the freed up time to helping students prepare their portfolios
(Darling-Hammond et al., 1995). In addition to the process of deliber-
ating about individual portfolios, long meetings about the portfolio as-
sessment system fostered CPESS educators’ shared vision. “Our common
understandings came around conversations we’d have as teachers–both
within and across disciplines,” notes Betty B. “We’d talk about what it
means to be an educated person, a person who leads your life with an
inquiry stance.” “We met as teams,” echoes Mary M.
There would be a humanities team meeting. We would talk about
what the exhibitions would look like. How were we really going to
know what the kids knew? How were they going to demonstrate
their understanding? What were ways to do that? And we shared.
Before they met to consider any individual portfolio, Mary M. notes,
teachers at CPESS
did a lot of practice with each other at staff meetings, at events
with other schools. We had to do a lot of that to make the rubric
reliable. And validity. So we did a lot of that. Sometimes we’d
have people from a lot of different universities come in and read
the portfolios and give us their sense. People from colleges who
taught freshman to look and see what they thought. We did a
fishbowl thing, and how did they evaluate it.
Assessing portfolios on graduation committees required teachers to re-
flect on what was central in other disciplines and to articulate what was
central in their own. “I was an English teacher,” Mary M. notes. “Even
though I may have known a little bit of history I had to really listen to
what the history teacher was saying was really important about this par-
ticular civilization or this particular period of time or this particular
document.” In this way, CPESS teachers relied on one another’s subject
expertise to moderate their scoring procedures, and focus their attention
on relevant elements of the rubric being used to evaluate student work.
CPESS teachers also enjoyed sustained support from the administra-
tion to engage in faculty retreats to tackle the reliability of assessment
data generated by their accountability system. They also made time for
creating and revising their assessment tools. At the time of the school’s
founding, Penny W. recalls,
We had a rubric retreat. . . . at the beginning and end of the year.
. . . And we took papers from the Senior Institute. . . . We took
papers and we got in groups and we all read the same papers
and we all scored them to get inter-rater reliability. And then we
saw that we could put 1, 2, 3, and 4 down, and then we worked
on defining the Habits of Mind . . . and that’s what we used
to score all our papers across the school.
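Such rubric retreats pursued what measurement texts formalize as inter-rater agreement. As an illustrative gloss, rather than a statistic the school is documented to have computed, Cohen’s kappa expresses the idea: it corrects the observed agreement between two raters for the agreement they would reach by chance,

\[
\kappa = \frac{p_{o} - p_{e}}{1 - p_{e}}
\]

where \(p_{o}\) is the proportion of papers on which two readers assign the same rubric score (here, 1 through 4) and \(p_{e}\) is the proportion of agreement expected if each reader scored at random according to his or her own overall rates. Independently scoring a common set of papers and then comparing results, exactly as the retreat Penny W. describes did, is the data-collection design such a coefficient presupposes.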
Compared to the type of standardized testing developed in the first
decades of the twentieth century and promoted by No Child Left Behind
legislation, portfolios intensified questions about what educational mea-
surement and assessment scholars refer to as the “generalizability” of
claims of knowledge on the basis of evidence from only a few topics.
CPESS teachers, like many assessment researchers (Resnick & Resnick,
1992; Wiggins, 1989; Wolf, 1989), argued for the soundness of their as-
sessment practices based on the notion that measuring complex perfor-
mances and higher order thinking skills should be the goal in public
education—not a preoccupation with the technical elegance of easily ad-
ministered standardized test items.
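Classical test theory gives this item-sampling concern a precise form. As an illustration not drawn from CPESS practice, the Spearman-Brown prophecy formula shows how the generalizability of a score grows with the number of comparable tasks sampled:

\[
\rho_{k} = \frac{k\,\rho_{1}}{1 + (k - 1)\,\rho_{1}}
\]

where \(\rho_{1}\) is the reliability of a single task and \(\rho_{k}\) the reliability of a score aggregated over \(k\) such tasks. A portfolio built on a few deep projects therefore cedes some statistical generalizability to a test of dozens of short items; the CPESS position, in effect, was that this tradeoff favored the portfolio anyway.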
CPESS’s approach also reflected the idea that assessment needed to be
grounded in educators’ knowledge of and relationship with the particu-
lar student involved. But, CPESS educators wondered, if attention to a
student’s particular circumstances and skill set was central to validity, did
that same focus also undermine generalizability? That is, the more teach-
ers framed valid assessment in terms of evidence of a student’s capacity
to use knowledge in a specific authentic task, the less confidence they
had about the breadth of the student’s general knowledge beyond the
observed performance. “This might sound bad,” Randall H. notes, but
CPESS’s focus raised the question of what constitutes “a passing grade
for someone who has a lot of talent versus a passing grade from someone
who was borderline. You take into consideration who the kid was and
what the kid was.” Most teachers, Randall H. recognized, did not need to
confront this question. “When you see a Regents test you can’t take other
factors into consideration. All you have are what filled in those bubbles
and what the essay looks like, and that’s it.” At CPESS on the other hand,
“You have more visibility into the whole child and even the presentation
and the effort that they put in and that’s where you can give other points
or credit where there wouldn’t be any credit in the other situation.” (See
also Meier, 1984.)
CPESS teachers confronted the tradeoffs of their assessment system. In
one staff development meeting (Darling-Hammond, 1995), the teachers
reviewed the portfolio of a struggling student. “I want to emphasize that
we are going to keep running into the conflict between our standards and
what we know about our students—their histories, work habits, abilities,”
Eloise H. observed,
This student has a hard time getting through things. This paper
is his first effort in completing a book and writing about it. . . .
I graded the paper “minimum pass” because I have confidence
in his ability to intellectually discuss the literary meaning of the
book before his graduation committee and to make connections
between his life, the life of his community, and the book. This is
an opportunity for him to succeed. It would set the foundation
for future improvement and success in the thirteen portfolios
to follow. Without this vote of confidence by the staff, his future
remains jeopardized. (pp. 68-69)
Other teachers weighed in, some supporting a passing grade, some
not. None, though, disputed co-director Deborah Meier’s claim that
“we’re not really discussing if this paper meets our standards. It doesn’t.”
Nor did staff dispute the implication that Meier drew from the student’s
poor performance, that “we haven’t figured out how to help him become
a better writer” (Darling-Hammond, 1995, p. 69). In short, CPESS teach-
ers were committed to teaching to the “test” because teaching and assess-
ment were guided by a shared, public understanding of learning.
The quality of CPESS discussions of their assessment system struck
educational researchers. “In several sessions during which portfolio rat-
ings were reviewed by staff,” Linda Darling-Hammond and her associ-
ates (1995) report,
A number of fundamental questions were raised: Is our system
of assessment evaluating the things we think are important for
students to know and be able to do? Are we using similar cri-
teria when we assess student work? How are faculty evaluations affected by knowing a student well? Should students with special
needs be held to the same standards as other students? How do
we achieve high standards without dysfunctional standardiza-
tion? Can we assure . . . that our assessment system is valid? Even
more important . . . how do we ensure that assessment serves our
broader goals for student learning? (p. 64)
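One of these questions—whether staff were applying similar criteria to student work—is what measurement scholars quantify as inter-rater reliability. As a purely illustrative sketch (ours, using invented ratings rather than any CPESS data), the following computes exact agreement and Cohen's kappa, a standard chance-corrected agreement index, for two hypothetical raters scoring the same portfolios on a shared ordinal scale:

```python
# Illustrative only: hypothetical ratings, not CPESS records.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical scores."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Invented scores on a four-point portfolio scale, for illustration.
a = ["pass", "good", "minimum pass", "pass", "distinguished", "good", "pass", "good"]
b = ["pass", "good", "pass", "pass", "distinguished", "good", "good", "good"]

print(f"exact agreement = {sum(x == y for x, y in zip(a, b)) / len(a):.2f}")
print(f"Cohen's kappa   = {cohens_kappa(a, b):.2f}")
```

A kappa well below 1.0—here roughly .6—signals exactly the kind of divergence in criteria that CPESS staff surfaced in their portfolio-review sessions and worked through deliberatively rather than statistically.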
ASSESSING HABITS OF MIND ACROSS THE CURRICULUM
Although the Habits of Mind guided curriculum and assessment through-
out the school, they were best suited to the humanities and social sci-
ences. Teachers worked to adapt them to other disciplines. “I thought of
myself as a teacher, not as a math teacher,” Betty B. recalls, “but you’re
teaching content.” Like those in other subjects, math teachers at CPESS
discussed their unique approach in extended meetings. Betty B. recalls
meetings focused on relevance: What is it we’re teaching in this unit?
What’s the math here? And how do we want kids to show they know it?
But mathematics required particular attention to conventions, procedur-
al fluency and symbolic representations, and seemed to offer less room
than the humanities to build on students’ interests or develop empathy
for others’ points of view. In Betty’s view, geometry and algebra required
attentiveness to the formalities of logic and representation that were not
well articulated in the more generic Habits of Mind-based rubric. “In
math,” she continued, “the curriculum dictated [the kinds of projects stu-
dents undertook]. Portfolio pieces in math reflected the curriculum more
than other subjects. There were portfolio pieces that looked the same. . .
. Talking about the Habits of Mind in relation to math—it was a struggle.
. . . A lot of that seemed to be a little bit more contrived.”
Just as teachers adapted the Habits of Mind to the pedagogical and
curriculum demands of their subject, they adapted the standards by
which they evaluated portfolios to their discipline. Again, math exempli-
fied this process. “We had a rubric that we would use to grade the port-
folio,” James W. explains, “and that rubric was actually used in all the
disciplines.” Sometimes, however, math teachers had to adapt the rubric
to fit the demands of mathematics. “Point of view was certainly difficult,”
James W. notes, “but we also felt like there was a need for something
around specific mathematics conventions like representation, use of no-
tation.” Although the school-wide rubric played a crucial role in the as-
sessment of math portfolios, Lori C. explains, “We had to create our own
rubrics. At times we created rubrics that were specific to our projects.”
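The layered-rubric pattern the math teachers describe—a shared school-wide base, extended with discipline- or project-specific criteria where the generic categories fall short—can be sketched in a few lines. The criterion names and prompts below are our paraphrases of the Habits of Mind and of James W.'s examples, not a transcription of any actual CPESS rubric.

```python
# A sketch of the layered-rubric pattern described above; every
# criterion's wording is our paraphrase, not a CPESS document.

SCHOOLWIDE_RUBRIC = {
    "evidence":    "How do we know what we know?",
    "viewpoint":   "Whose perspective is this, and what are the alternatives?",
    "connections": "How is this connected to other ideas and events?",
    "conjecture":  "How might things have been otherwise?",
    "relevance":   "Why does this matter?",
}

# Hypothetical math-specific criteria, echoing James W.'s point about
# conventions such as representation and use of notation.
MATH_EXTENSIONS = {
    "representation": "Are graphs, tables, and symbols used appropriately?",
    "notation":       "Are mathematical conventions followed accurately?",
}

def discipline_rubric(extensions):
    """Layer subject-specific criteria over the shared school-wide base."""
    rubric = dict(SCHOOLWIDE_RUBRIC)  # the shared base comes first
    rubric.update(extensions)         # discipline additions sit on top
    return rubric

for criterion, prompt in discipline_rubric(MATH_EXTENSIONS).items():
    print(f"{criterion:>14}: {prompt}")
```

The design point is simply that the shared base is extended, never replaced—mirroring the teachers' insistence that discipline-specific adaptations remain anchored in the common rubric.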
The Habits of Mind and the rubric based on them received even more
revision in art teacher Jonas H.’s thinking about assessment. Before
teaching at CPESS, Jonas had worked for many years as an artist. The
presentation of portfolios in which students put together a multifaceted,
complex body of work (what is sometimes called an “items design” in
assessment) made complete sense to him. After all, he explains, “the no-
tion of a portfolio is really the artist model. For me it was where I lived
to begin with. So for me what was more significant was to value that
practice not only in the area of art but in one’s life.” Still, Jonas was more concerned with students’ ability to articulate their own voices than with their attentiveness to alternative interpretations. For Jonas, portfolios served
more as a means through which artists could express the complexity of
their individual visions than as running records of students’ growth and
mastery of the Habits of Mind. The Habits of Mind functioned more to
help Jonas H. reflect on what mattered in art than to guide the evalu-
ation of students’ portfolios. Ironically, then, even when teachers ques-
tioned the full applicability of the Habits of Mind to their disciplines,
they did so according to the very concerns enumerated by the Habits of
Mind themselves.
BUILDING A CULTURE OF LEARNING, ASSESSMENT, AND RESPECT
In the interviews conducted for this study, CPESS teachers spoke persuasively about the role of assessment in teaching. They had a keen sense of what and how they were assessing: 1) how to maintain focus on the Habits of Mind; 2) how to assess in a variety of modalities such as projects, portfolios, and multimedia; 3) how to interpret student work with rubrics and scoring guides aligned with the Habits of Mind; and 4) how to address and work through the challenges and opportunities posed by competing interpretations of student data, related to issues of score generalizability and rater reliability. Teachers could also
articulate the impact of their assessment practices—both on students as
a group, and on particular individuals. Still, the impact of the portfolio
assessment system itself and the shared moral, political, and pedagogical
commitments of CPESS teachers do not fully explain the role of teacher-
led assessment at the school. In the remainder of this article, we contrast
CPESS teachers’ assessment ideas and practices with the ideas and prac-
tices that had previously marked their work. Finally, we suggest what
enabled these teachers to transform their approach.
CPESS teachers contrasted the school’s assessment practices with those
of the schools in which they had previously worked. At other schools, great
faith was placed in standardized tests that typically assessed lower-order
thinking skills, using items that were easily administered (Newmann,
Bryk, & Nagaoka, 2001; Resnick, 1987). And often teachers’ own tests at
these schools were not very different. They too reinforced a curriculum
focused on mastery of often-trivial declarative knowledge which students
were not expected to actually use in thinking about questions that arose
outside of the classroom. This “mere absorption of facts and truths,” as
Dewey (1900) presciently observed, “is so exclusively individual an affair
that it tends very naturally to pass into selfishness” (p. 29). The pervasive atmosphere of judgment and failure enforced by such assessments thus became a self-fulfilling prophecy and heightened the reproduction of social inequality in schools.
CPESS’s process for developing teacher-initiated curricula and assess-
ment thus reflected not only a pedagogical perspective but also a form of
social-political critique. CPESS teachers were convinced that traditional
assessment practices were merely one of the many “cumulatively devas-
tating” procedures at typically organized public schools—and especially
those populated by poor and minority students—which created degrad-
ing, stultifying conditions for teachers and students (Meier, 1987, p. 543;
Meier, 1974-2000).
At CPESS, on the other hand, commitment to the Habits of Mind fos-
tered a culture of respect for students and teachers. As a new teacher at
CPESS, Brian D. was struck by the frequency with which,
People would talk about the Habits of Mind—in the staff room,
in the hallways, in Advisory meetings—not just in classrooms.
The assistant principal would be talking with a student hanging
out in the hallway during classes. And the student would be go-
ing off, arguing, telling the AP to chill. And all these questions
are being posed to the student, “Well, how do you think that
makes other people feel when you don’t take going to class seri-
ously? So, when you do that, what are the effects of that on others
around you? Is there another viewpoint on this? What does it
mean to understand yourself through somebody else’s eyes? So
if you were in my position, what’s the next thing you would need
to do?” It seemed overwhelming . . . how many perspectives can
anyone really absorb? But this kid, nearly twice as tall as the AP,
is just standing there, holding his ground, and communicating
for his life. He gets it—perspective isn’t just a word on the chalk-
board—it’s a way for him to hold his ground, give ground and
just be human.
The Habits of Mind transformed teachers’ work and relations, no less
than students’. Prior to coming to CPESS in its second year, English
teacher Mary M. taught in a New York City junior high school. Students,
she recalls,
were grouped by reading scores. Even if everyone has an 8.1
reading score, that doesn’t mean they all learn the same way, but
somehow you tailored your learning to the reading score ability
levels. . . . I had to stick to the lesson plan format. . . . My room
was very tightly organized. I had kids in rows, I had my name on
the board, I had my homework. I was on page 42. I’m not sure
if the kids were there with me, but I knew where I was. . . . How
I got to CPESS was I had been teaching in New York City junior
high schools for 18 years, and I was ready to quit.
Although sympathy for CPESS’s vision attracted teachers to the school,
they did not arrive with the skills and habits needed to implement it.
Before coming to CPESS, James W. had taught math at “a very large,
comprehensive high school.” There, his teaching was “very teacher di-
rected, teacher dominated. I spoke in curriculum tracks and sequences.
I had, you know, to get kids ready for the next class.” When James W.
applied to teach at CPESS, he “knew what the school’s perspective was
and had a lot of sympathy for it.” Still, James
didn’t have any actual practice doing it. . . . It’s not like I came
into CPESS fully formed. I had students who would right away
tell me, “That’s not the CPESS way,” when I would start lectur-
ing, or talking or whatever I was doing. And I didn’t really do
it all that well for a long time. I would say two or three years. I
mean, it was almost equivalent to becoming a new teacher in
many ways.
The sense of teacher community engendered by the Habits of Mind
fostered the capacity to grasp and enact CPESS’s approach. Mary M. was
most struck by
the community of learners and the joint approach to learning.
It was all about learning; it was all about ideas; it was all about
problem solving. . . . Everybody was engaged in conversation
thinking about what we were teaching, how we were teaching
it. . . . Everybody was talking about how you do this, or what’s a
good way to do that, and so there was an openness about trying
things. And what struck me was that you could learn along with
the kids, that you weren’t just the giver of information, you were
the facilitator of learning.
Moreover, the Habits of Mind shaped not only teaching and assess-
ment at CPESS; they suffused the culture of the school. Lori C. recalls,
The Habits of Mind were on every wall; they were everywhere.
Debbie, every time she spoke to you, “well, that’s your point of
view.” Meeting Debbie in the hallway when you’ve had a dispute,
because she would tell you, “I don’t believe that! You’re not giv-
ing me enough information and evidence.” Every task was em-
bedded with the Habits of Mind so that you became a community
of critical thinkers and you were constantly asking, how do you
know what you know? How do I know what I know?
The pervasiveness of the Habits of Mind in non-academic as well as
academic activities both reflected and facilitated their central role in
teaching and assessment.
At CPESS, the Habits of Mind allowed teachers to confront issues that
went unacknowledged at other schools. Before moving to Central Park
East Secondary School, Betty B. had taught in a traditional school in
Chinatown.
The belief was, if you were Black or Latino, then you weren’t as
smart as the Asian kids in this building. Most of the kids in that
building who were Latino or Black were in my special ed class-
rooms or in the special ed department. You can’t possibly be
telling me that these children are not as smart as them, because
when they step outside, they’re all in the same community. But
to have a conversation about that in that building among the
teachers, it was unheard of! Why? Because there was nothing that
put us on the same page about where we were going in the first
place. Nothing.
For Betty B., questions of racial bias among staff also arose at CPESS,
but the school’s approach enabled staff to confront such difficult issues.
The Habits of Mind is what put us on the same page about ev-
erything we were doing. What do you mean, this kid needs the
structure and you’re saying he doesn’t have it? How would you
know what’s best for our kids? And that was a struggle. Absolutely
a struggle. All the time. We struggled with at what point do we
need to go back to a teacher-directed curriculum and telling kids
what they need to know versus saying, “here’s where the kid is;
let’s go where the kid takes us.” And who’s benefitting? I would
get at it with another teacher about that. But at least we could
have that conversation, and the rubric forced us as teachers to try
to be smart about our arguments. Never in a traditional school.
Inevitably, efforts to confront rather than evade the problems of real
life can be threatening to teachers, and difficult for them to address,
especially where social divisions such as those of race are in play. The
Habits of Mind afforded teachers a means to face those tensions and to
consider opposing perspectives from students and fellow teachers. Such
a climate of trust and communication was a prerequisite to assessment
for learning and not merely of learning (Black & Wiliam, 1998; Stiggins,
2002).
CONCLUSION
Thinking back on his experience at Central Park East Secondary School,
art teacher Jonas H. recalls,
I saw students learn things that I never thought they would be
able to learn, such as how to approach learning, such as how to
take care of yourself, such as how to ask questions, such as I can
disagree with you and we don’t have to be enemies. It was really
the best example of democracy that I’ve ever seen in my life.
For both students and teachers, education at CPESS reflected a relent-
less focus on the Habits of Mind, the five central constructs that structured
academic inquiry and represented lifelong learning goals. Although
CPESS teachers were generally predisposed to embrace the Habits of
Mind, the opportunities which the school structured for them to develop
and implement their understanding were crucial to the school’s success
(K.H. Miles & Darling-Hammond, 1998).
Over the past decade, the school reform movement for which CPESS
was a beacon has been displaced. Although many charter and small pub-
lic schools continue to implement the approach to learning and assess-
ment pioneered by CPESS (Fredrick, 2009), policy elites have by and
large abandoned progressive pedagogical reforms that trust in teachers’
commitment and capacity in favor of reliance on standardized tests to
shape teachers’ classroom practices.
Teachers frequently critique the reliance on standardized tests central
to current national and state-led accountability regimes. “Teaching to
the test,” they charge, narrows the curriculum to the limited set of topics
found in bubble test items, reinforces behaviorist models of instruction
among teachers, discourages critical and creative thinking, promotes su-
perficial engagement with classroom learning among students, and fur-
thers the alienation of students in school (Hodges, 2002; Menken, 2006;
Nichols, Glass, & Berliner, 2006; Rothstein, 2008). The categorization
of students produced by standardized testing practices (for instance, be-
ing labeled “below basic”) can reinforce feelings of marginalization that
already impact the achievement of many students of color and others ill-
served in schools. And even within a narrowed curriculum, externally im-
posed standards do not necessarily produce better teaching (Firestone,
Schorr, & Monfils, 2004; Porter, 1989; Rothman, 1995).
On the other hand, left to their own devices, teachers often create tests,
quizzes, and homework assignments that also discourage students’ deep
and sustained engagement with essential questions across the curriculum
(Wiggins, 1993). No less than the reliance on standardized test scores,
teachers’ reliance on grades as “feedback” about learning possesses lim-
ited meaning and effectiveness for students (Guskey, 2004). Rather than
advancing genuine student understanding, such grading practices can,
in fact, distort communication with students about emerging skills and
proficiencies.
A sophisticated understanding of assessment played a crucial role at
CPESS. Teachers came to embrace principles of sound assessment (its
“logic”) because these elements grew out of the shared, highly devel-
oped conceptualization of teaching and learning fostered at the school.
Teachers possessed a well-defined theory of cognitive outcomes, a mul-
tifaceted observation strategy to collect and examine student work/data,
and finally, a meaningful and consistent framework for interpreting that
work. Each of these fundamental elements of assessment design and practice was integrated into the daily work of teaching and learning.
CPESS’s assessment system thus demonstrates that when schools are
structured to give them the opportunity, teachers have the capacity to
implement practices that are faithful to the logic of assessment as articu-
lated in the Assessment Triangle (NRC, 2001).
And beyond dealing with the technical challenges presented to psycho-
metricians who construct standardized achievement measures, CPESS
teachers also grappled with the messy, complex and multidimensional
nature of student achievement. They openly acknowledged the tension
between the Habits of Mind and the claims of the subject disciplines and content standards (Lieberman, 1995). They worried about how to ad-
dress the experience and vision of the student as a unique learner in
a world marked by social inequality and exclusion—what measurement
scholars consider the extraneous “noise” that can distort a true “sig-
nal” purportedly represented by a standardized test score. Rather than
seeking to isolate assessment scores from threats to validity related to a
particular student’s motivation, values, personality, and circumstances,
CPESS teachers treated that “noise” as an integral element of learning.
They skillfully wove it into the fabric of their teaching and assessment
(Meier, 1993).
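The signal/noise metaphor alludes to the classical test theory decomposition, which is worth stating to make clear exactly what CPESS teachers were declining to do. In classical terms, an observed score X is modeled as a true score T plus error E, and reliability is the share of observed-score variance attributable to the "signal":

\[
X = T + E, \qquad \rho_{XX'} \;=\; \frac{\mathrm{Var}(T)}{\mathrm{Var}(T) + \mathrm{Var}(E)}
\]

Standardized testing treats a student's motivation, circumstances, and personality as contributions to Var(E), to be minimized or averaged away; CPESS teachers instead treated exactly those particulars as part of what assessment should attend to.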
Finally, CPESS teachers evaluated the data derived from their assess-
ments in principled ways: in fact, the notions of evidence, perspective
and connection were embedded in their understanding of what makes
an assessment score reliable or an inference valid. These teachers were
attuned to the issues, questions, and problems that engage educational
assessment and measurement scholars alike. While they recognized that
portfolio-based assessment might not allow the same claims of validity
and reliability as standardized tests, they did understand how systematic,
evidence-based conversations around student achievement enhanced
their practice. CPESS teachers and staff did not suffer through account-
ability conversations at their school site—they demanded them.
CPESS teachers understood classroom learning as an individual student’s active pursuit of academic knowledge whose truth and worth were also measured by its use outside of the classroom, in the worlds of college, work, and civic life. The teachers engaged in teaching to the test—
through projects and rich tasks that allowed students to demonstrate the
power of their ideas (Meier, 1995). Moreover, they developed a unified approach to learning, teaching, and assessment; in this accountability system, portfolios constituted not only a means of assessing student learning but also a method for students to learn.
Although the evidence demonstrates that CPESS teachers developed
a demanding and engaging approach to teaching and assessment that
merits emulation, adherence to the logic of assessment does not dictate
the choice of a particular assessment modality. Too often the debate
about measuring students’ progress has been framed as a commitment to
either multiple choice “bubble” tests or “authentic” performance tasks,
without real discussion about the knowledge and learning that is being
assessed. Either approach (or a combination of them), when mandated,
inevitably shapes teachers’ classroom practices. Fidelity to the logic of assessment demands that teachers develop pedagogical and assessment practices that mutually reinforce one another.
Moreover, fidelity to the logic of assessment contrasts with the approach promoted by federal and state policymakers. While the creation of “Common Core” state standards and “Smarter Balanced” assessments aligned to them does constitute a response to criticisms of NCLB, these reforms continue to rely on a model of assessment that discourages teachers from engaging the fundamental principles of assessment in their daily practice. Teachers may feel a need to teach to new standardized tests just as they have felt a need to teach to existing ones, but neither a richer array of test questions, nor the participation of teacher representatives in the development of standards and assessments, nor the involvement of teachers in “data-driven” conversations about test data encourages teachers to view assessment as a means of fostering students’ deepest engagement with knowledge and its uses.
Rather than seeing teaching to the test as a constraint that distracts
from the lessons they value, teachers embracing what we have called
the logic of assessment would ask such questions as: What do we in our
classrooms, communities and nation want students to learn? How do we
want them to be able to use this knowledge? How do we design or use
assessment tools that capture and represent what students have learned?
Which kinds of evidence do we need to draw valid and reliable conclu-
sions? In answering such questions, moreover, teachers not only enrich
their own work but also model a deeper, more useful discussion of assess-
ment and its purposes for the public.
CPESS’s school-based accountability system simultaneously mirrored the logic of assessment that underlies educational research and brought about a new, transformative assessment literacy among students, teachers, and administrators. This awareness in turn reflected and sustained
a school culture suffused with respect for teachers and students jointly
engaged in the fostering of democratic learning and life.
References
Bensman, D. (2000). Central Park East and its graduates: Learning by heart. New York:
Teachers College Press.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education,
5, 7-74.
Bloom, B. S., Hastings, T., & Madaus, G. (1971). Handbook of formative and summative
evaluation of student learning. New York: McGraw-Hill.
Chittenden, E., & Wallace, V. (1991). Reforming school assessment practices: The case of
Central Park East. Planning and Changing, 22(3-4), 141-146.
Darling-Hammond, L., Ancess, J., & Falk, B. (1995). Authentic assessment in action: Studies of
schools and students at work. New York: Teachers College Press.
Darling-Hammond, L., Ancess, J., & Ort, S. (2002). Reinventing high school: Outcomes
of the Coalition Campus Schools Project. American Educational Research Journal, 39,
639-673.
Dewey, J. (1920). Reconstruction in philosophy. New York: H. Holt and Company.
Dewey, J. (1990). The school and society. Chicago, IL: University of Chicago Press. (Original
work published 1900)
Dobbie, W., & Fryer, R., Jr. (2009). Are high quality schools enough to close the achievement gap?
Evidence from a social experiment in Harlem (Working Paper No. 15473). Cambridge, MA:
National Bureau of Economic Research.
Dougherty, J. (1999). From anecdote to analysis: Oral interviews and new scholarship in
educational history. The Journal of American History, 86(2), 712-723.
Duckor, B. (1999, June). Entrepreneurship for all: A portfolio-based approach. Presentation
at the Research Institute on Secondary Education Reform conference, University of
Wisconsin, Madison, Wisconsin.
Duckor, B. (2001, June). Student performance assessments at Central Park East Secondary School.
Presentation at School Redesign Network Institute, Stanford University, Menlo Park,
California.
Due, L. (2002). Revolution at Oakland Unified. East Bay Express, 14-24.
Eisner, E. W. (1991). The enlightened eye: Qualitative inquiry and the enhancement of educational
practice. New York: Macmillan Publishing Company.
Epstein, L., & Martin, A. (2005). Coding variables. In K. Kempf-Leonard (Ed.), Encyclopedia
of social measurement (pp. 321-327). Amsterdam, Netherlands: Elsevier.
Firestone, W. A., Schorr, R. Y., & Monfils, L. F. (Eds.). (2004). The ambiguity of teaching
to the test: Standards, assessment and educational reform. Mahwah, NJ: Lawrence Erlbaum
Associates.
Fliegel, S. (1994). Debbie Meier and the dawn of Central Park East. City Journal. Retrieved
from http://www.city-journal.org/article01.php?aid=1414
Fredrick, T. (2009). Looking in the mirror: Helping adolescents talk more reflectively
during portfolio presentations. Teachers College Record, 111(8), 1916-1929.
Gant, R. (1987). Archives and interviews: A comment on oral history and fieldwork practice.
Geography, 72, 27-35.
Gold, J. (Producer & director), & Lanzoni, M. (Ed.). (1993). Graduation by portfolio: Central
Park East Secondary School [Motion picture]. (Available from New York Post Production,
29th Street Video Inc., at http://vimeo.com/13992931)
Guskey, T. R. (2004). Are zeros your ultimate weapon? Education Digest, 70(3), 31-35.
Henderson, H., & Raywid, M. (1994). “Small” revolution in New York City. The Journal of
Negro Education, 63(1), 28-45.
Hodges, P. (2002). High stakes testing and its impact on rural schools. Rural Educator, 24, 3-7.
Kane, M., & Mitchell, R. (1996). Implementing performance assessment: Promises, problems, and
challenges. Mahwah, NJ: Lawrence Erlbaum Associates.
Kirp, D. (1989, Jan.). Education: The movie. Mother Jones, 36-45.
Kopp, W. (2003). One day, all children...: The unlikely triumph of Teach For America and what I
learned along the way. Cambridge, MA: Perseus Books.
Lieberman, A. (1995). Practices that support teacher development: Transforming
conceptions of professional learning. Phi Delta Kappan, 76(8), 591-596.
Linn, R. (2000). Assessments and accountability. Educational Researcher, 29(2), 4-15.
Little, J., Gearhart, M., Curry, M., & Kafka, J. (2003). Looking at student work for teacher
learning, teacher community and school reform. Phi Delta Kappan, 85, 184-192.
Meier, D. (1974-2000, undated). On testing [Mailings to parents]. Deborah Meier Papers, Lilly Library, Indiana University, Bloomington.
Meier, D. (1984, March-April). On proposed structure of CPE II [Letter to Ted Sizer]. Deborah Meier Papers, Lilly Library, Indiana University, Bloomington.
Meier, D. (1987, Fall). Good schools are still possible. Dissent, 543-549.
Meier, D. (1993, July 9). Thirteen core beliefs [Memo]. Deborah Meier Papers, Lilly Library, Indiana University, Bloomington.
Meier, D. (1995). The power of their ideas: Lessons for America from a small school in Harlem.
Boston, MA: Beacon.
Meier, D., & Schwartz, P. (1995). Central Park East Secondary School: The hard part
is making it happen. In M. Apple & J. Beane (Eds.), Democratic schools (pp. 26-40).
Alexandria, VA: ASCD.
Menken, K. (2006). Teaching to the test: How No Child Left Behind impacts language
policy, curriculum, and instruction for English language learners. Bilingual Research
Journal, 30, 521-546.
Meyer, R. H. (1996). Comments on chapters two, three, and four. In H. F. Ladd (Ed.),
Holding schools accountable: Performance-based reform in education (pp. 137-145).
Washington, DC: The Brookings Institution.
Miles, K. H., & Darling-Hammond, L. (1998). Rethinking the allocation of teaching
resources: Some lessons from high performing schools. Educational Evaluation and Policy
Analysis, 20(1), 9-29.
Miles, M., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage.
Moss, P. A. (1992). Shifting conceptions of validity in educational measurement: Implications
for performance assessment. Review of Educational Research, 62(3), 229-258.
National Research Council. (2001). Knowing what students know: The science and design of
educational assessment. Committee on the Foundations of Assessment, J. Pellegrino,
N. Chudowsky, & R. Glaser (Eds.). Board on Testing and Assessment, Division of
Behavioral and Social Sciences and Education. Washington, DC: National Academy
Press.
Newmann, F. M. & Associates (1996). Authentic achievement: Restructuring schools for intellectual
quality. San Francisco, CA: Jossey-Bass.
Newmann, F. M., Bryk, A. S., & Nagaoka, J. (2001). Authentic intellectual work and standardized
tests: Conflict or coexistence. Chicago, IL: Consortium on Chicago School Research.
Nichols, S., Glass, G., & Berliner, D. (2006). High-stakes testing and student achievement:
Does accountability pressure increase student learning? Education Policy Analysis Archives,
14, 1. Retrieved from http://epaa.asu.edu/ojs/article/view/72
Oakland Unified School District. (2000). New small autonomous schools district policy. Oakland,
CA: Author.
Patton, M. Q. (2002). Qualitative evaluation and research methods (3rd ed.). Thousand Oaks,
CA: Sage.
Pearlman, B. (2002). Designing, and making, the New American High School. Technos,
11(1), 12-19.
Popham, W. J. (2004). Why assessment illiteracy is professional suicide. Educational
Leadership, 62(1), 82-83.
Popkewitz, T. (1998). Dewey, Vygotsky, and the social administration of the individual:
Constructivist pedagogy as systems of ideas in historical spaces. American Educational
Research Journal, 35(4), 535-570.
Porter, A. (1989). External standards and good teaching: The pros and cons of telling
teachers what to do. Education Evaluation and Policy Analysis, 11(4), 343-356.
Resnick, L. B. (1987). Education and learning to think. Washington, DC: National Academy
Press.
Resnick, L. B., & Resnick, D. P. (1992). Assessing the thinking curriculum: New tools for
educational reform. In B. R. Gifford & M. C. Connor (Eds.), Changing assessments:
Alternative views of aptitude, achievement, and instruction (pp. 37-75). Boston: Kluwer
Academic.
Rothman, R. (1995). Measuring up: Standards, assessment, and school reform. San Francisco,
CA: Jossey-Bass.
Rothstein, R. (2008). The corruption of school accountability. School Administrator, 65, 14-15.
Scriven, M. (1973). The methodology of evaluation. In B. R. Worthen & J. R. Sanders
(Eds.). Educational evaluation: Theory and practice. Worthington, OH: Charles A. Jones.
Shah, S., Mediratta, K., & McAlister, S. (2009). Building a districtwide small schools movement.
Providence, RI: Annenberg Institute for School Reform at Brown University.
Shavelson, R. J., Baxter, G. P., & Gao, X. (1993). Sampling variability of performance
assessments. Journal of Educational Measurement, 30(3), 215-232.
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher,
29(7), 4-14.
Sizer, T. (1996). Horace’s hope: What works for the American high school. Boston, MA: Houghton
Mifflin Co.
Spradley, J. P. (1979). The ethnographic interview. New York: Holt, Rinehart and Winston.
Stiggins, R. J. (2001). The unfulfilled promise of classroom assessment. Educational Measurement: Issues and Practice, 20, 5-15.
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment FOR learning. Phi Delta
Kappan, 83, 758-765.
Tate, W. (1995). Returning to the root: A culturally relevant approach to mathematics
pedagogy. Theory Into Practice, 34(3), 166-173.
Tyner-Mullings, A. R. (2008). Finding space: Educational reforms in practice in an urban
public school (Doctoral dissertation). City University of New York, New York. UMI
Dissertation Abstracts (ProQuest).
Wiggins, G. (1989). A true test: Toward more authentic and equitable assessment. Phi Delta
Kappan, 70, 703-713.
Wiggins, G. (1993). Educative assessment: Designing assessments to inform and improve student
performance. San Francisco, CA: Jossey-Bass.
Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment
system. Applied Measurement in Education, 13(2), 181-208.
Wolf, D. (1989). Portfolio assessment: Sampling student work. Educational Leadership, 46,
35-39.
Wolf, D., Bixby, J., Glen, J., III, & Gardner, H. (1991). To use their minds well: Investigating new forms of student assessment. Review of Research in Education, 17, 31-74.
BRENT DUCKOR is a teacher educator and psychometrician in the
Department of Secondary Education at San Jose State University’s
College of Education. He teaches classroom assessment and evaluation
and educational psychology in the Single Subject Credential Program.
His current research focuses on investigating the validity evidence to sup-
port the use of the Performance Assessment for California Teachers and
other such teacher evaluation regimes. He taught in the Senior Institute at Central Park East Secondary School from 1996 to 2000.
DANIEL PERLSTEIN is a historian and chair of the Program in Policy,
Organization, Measurement & Evaluation at UC Berkeley’s Graduate
School of Education. His research focuses on efforts to create emanci-
patory forms of education amid the social and economic inequalities of
American schools and life. This work has ranged from studies of the ra-
cial politics of American education and the educational models created
in the African American freedom struggle to critiques of school anti-vio-
lence programs and progressive education.