Lorrie A. Shepard is professor of education and chair of the Research and Evaluation Methodology program area of the School of Education at the University of Colorado at Boulder. She is also currently serving as dean of the School of Education.
This article presents initial findings from an evaluation study of the implementation of a Web-based decision support tool, the Quality School Portfolio (QSP), developed at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at the University of California, Los Angeles (UCLA). The study focused on users' experiences with the training for and implementation of QSP. Data were collected through telephone interviews. The results show that QSP gave educators access to more extensive and varied student data, along with the ability to analyze those data to identify at-risk students. QSP was also found to promote collaboration and shared planning among educators. The article concludes that technology tools that facilitate the analysis and reporting of educational data open up the prospect of timely identification of at-risk students and of interventions to meet their educational needs. Such tools also support sound assessment practice by providing opportunities for frequent assessment and for evidence of competency beyond standardized testing.
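The article does not describe QSP's internals, but the kind of at-risk identification it enables can be illustrated with a minimal sketch. The threshold rule, function name, and student records below are all hypothetical, chosen only to show how a data tool might flag students whose assessment results fall below a cutoff.

```python
# Illustrative sketch only: QSP's actual implementation is not described in the
# source article. This shows one hypothetical way a decision support tool could
# flag at-risk students from per-student assessment scores.

from statistics import mean


def flag_at_risk(students, threshold=60.0):
    """Return the names of students whose mean assessment score is below threshold."""
    at_risk = []
    for name, scores in students.items():
        if scores and mean(scores) < threshold:
            at_risk.append(name)
    return sorted(at_risk)


# Hypothetical per-student score records.
records = {
    "Avery": [55, 62, 48],   # mean 55.0 -> flagged
    "Blake": [78, 81, 90],   # mean 83.0 -> not flagged
    "Casey": [59, 60, 58],   # mean 59.0 -> flagged
}
print(flag_at_risk(records))  # -> ['Avery', 'Casey']
```

A real tool would of course draw on richer evidence than test averages (attendance, coursework, teacher observations), which is precisely the "more extensive and varied student data" the study highlights.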
Our objective has been to develop an instructional theory and corresponding curricular materials that make scientific inquiry accessible to a wide range of students, including younger and lower-achieving students. We hypothesized that this could be achieved by recognizing the importance of metacognition and creating an instructional approach that develops students' metacognitive knowledge and skills through a process of scaffolded inquiry, reflection, and generalization. Toward this end, we collaborated with teachers to create a computer-enhanced, middle school science curriculum that engages students in learning about and reflecting on the processes of scientific inquiry as they construct increasingly complex models of force and motion phenomena. The resulting ThinkerTools Inquiry Curriculum centers on a metacognitive model of research, called the Inquiry Cycle, and a metacognitive process, called Reflective Assessment, in which students reflect on their own and each other's inquiry. In this article, we report on instructional trials of the curriculum by teachers in urban classrooms, including a controlled comparison to determine the impact of including or not including the Reflective Assessment process. Overall, the curriculum proved successful, and students' performance improved significantly on both physics and inquiry assessments. The controlled comparison revealed that students' learning was greatly facilitated by Reflective Assessment. Furthermore, adding this metacognitive process to the curriculum was particularly beneficial for low-achieving students: Performance on their research projects and inquiry tests was significantly closer to that of high-achieving students than was the case in the control classes. Thus, this approach has the valuable effect of reducing the educational disadvantage of low-achieving students while also being beneficial for high-achieving students.
We argue that these findings have strong implications for what such metacognitively focused, inquiry-oriented curricula can accomplish, particularly in urban school settings in which there are many disadvantaged students.
State and federal governments espouse school performance reports as a way to promote education reform. Some practicing educators question whether performance reports are effective. While the question of effectiveness deserves study, it takes the espoused purposes of performance reports at face value and fails to address the more basic, tacit political and symbolic roles of these reports. Theories of organization, modern government, and regulation provide a context that helps clarify these political and symbolic roles. Several performance report and assessment programs in California provide illustrations.
This April 2011 article is a reprint of the original May 1989 (V70N9) article and includes a new one-page introduction (on page 63 of this issue) by the author. The problem of assessment in education persists, the author maintains, because we have not yet properly framed the problem. We need to determine what are the actual performances we want students to be good at, he urges, define authentic standards and tasks to judge intellectual ability, and then design a test that measures the performance. The article focuses on the authentic test, which is a contextualized, complex intellectual challenge, rather than a collection of fragmented and static bits or tasks.
Describes the key features of the Collegiate Learning Assessment (CLA) project, which assesses the "value added" of an institution. The project assesses the institutional contribution to student learning through a focus on general education skills, through the assessment of student performance relative to other students, and through a pretest-posttest model.
This article is a review of the literature on classroom formative assessment. Several studies show firm evidence that innovations designed to strengthen the frequent feedback that students receive about their learning yield substantial learning gains. The perceptions of students and their role in self‐assessment are considered alongside analysis of the strategies used by teachers and the formative strategies incorporated in such systemic approaches as mastery learning. There follows a more detailed and theoretical analysis of the nature of feedback, which provides a basis for a discussion of the development of theoretical models for formative assessment and of the prospects for the improvement of practice.
There has been little change in the naive theory of intelligence that motivates the use and interpretation of intelligence tests in schools and clinics. The implicit assumptions underlying mental testing, and the uses to which these tests should be put, have remained unchanged; yet these assumptions differ greatly from those emerging from new work in mental testing. The authors begin by going back to the early history of mental testing to uncover the roots of the traditional assumptions, discuss data that support these views, and draw out surprising continuities between certain traditional views of the psychometric community and the newer views they discuss.
They then turn to recent work that centers on and supports the new assumptions: they describe the general properties of modern research in dynamic assessment and its instructional implications, with a brief description of the theoretical frameworks within which this work developed; review a number of studies that investigate the psychometric properties of dynamic assessments of children's ability to reason inductively; delve into dynamic assessment within the context-rich realm of early mathematics in elementary school; and discuss interactive instruction in general, illustrating it with one approach, reciprocal teaching.
Soviet psychologists' views of the relationship between psychology and Pavlovian psychophysiology (or the study of higher nervous activity, as it is referred to in the Soviet literature) have long been a matter of curiosity and concern in the United States. Not accidentally, they have also been a matter of concern and dispute within the USSR. The following is an excerpt from a work by one of the Soviet Union's most seminal psychological theorists on this issue. Written in the late 1920s, this essay remains a classic statement of Soviet psychology's commitment to both a historical, materialistic science of the mind and the study of the unique characteristics of human psychological processes.
The theory of formative assessment outlined in this article is relevant to a broad spectrum of learning outcomes in a wide variety of subjects. Specifically, it applies wherever multiple criteria are used in making judgments about the quality of student responses. The theory has less relevance for outcomes in which student responses may be assessed simply as correct or incorrect. Feedback is defined in a particular way to highlight its function in formative assessment. This definition differs in several significant respects from that traditionally found in educational research. Three conditions for effective feedback are then identified and their implications discussed. A key premise is that for students to be able to improve, they must develop the capacity to monitor the quality of their own work during actual production. This in turn requires that students possess an appreciation of what high quality work is, that they have the evaluative skill necessary for them to compare with some objectivity the quality of what they are producing in relation to the higher standard, and that they develop a store of tactics or moves which can be drawn upon to modify their own work. It is argued that these skills can be developed by providing direct authentic evaluative experience for students. Instructional systems which do not make explicit provision for the acquisition of evaluative expertise are deficient, because they set up artificial but potentially removable performance ceilings for students.
Fetler, M. E. (1994). Carrot or stick? How do school performance reports work? Education Policy Analysis Archives, 2(13). Available: http://epaa.asu.edu/epaa/v2n13.html
Kirst, M., & Mazzeo, C. (1996). The rise, fall, and rise of state assessment in California, 1993-1996. Phi Delta Kappan, 78(4), 319-323.
Brown, A. L., Campione, J. C., Webber, L. S., & McGilly, K. (1992). Interactive learning environments: A new look at assessment and instruction. In B. R. Gifford & M. C. O'Connor (Eds.), Changing assessments: Alternative views of aptitude, achievement, and instruction (pp. 121-211). Boston: Kluwer Academic Publishers.
College Entrance Examination Board. (2002). Biology: Course description. New York: College Entrance Examination Board.