Abstract
Formative assessment, in which the assessment is integrated within instruction and aimed at increasing learning, can replace summative assessment in many situations. Two programs of formative assessment are described: DIAGNOSER and SMART.
... A major problem is that measuring student performance, which is the basis for formative assessment, can take an enormous amount of time. Hunt and Pellegrino (2002), for example, argue for the use of more computer technology as follows: "The purpose of the technology is to gather information about the student that can be summarized and presented to a teacher so that the teacher can adjust instruction to the dominant ideas present in the class." ...
... However, this aspect is difficult to implement in reality because, on the one hand, most modern school classes consist of a large number of students and thus one-on-one conversations between students and teachers are not possible. On the other hand, the whole formative assessment process takes a lot of time (Hunt & Pellegrino, 2002). There is no point in formative assessment by a teacher if the teacher cannot identify, analyze, and respond to the problems of individual students (Hunt & Pellegrino, 2002). ...
Formative assessment is about providing and using feedback and diagnostic information. On this basis, further learning or further teaching should be adaptive and, in the best case, optimized. However, this aspect is difficult to implement in reality, as teachers work with a large number of students and the whole process of formative assessment, especially the evaluation of student performance, takes a lot of time. To address this problem, this paper presents an approach in which student performance is collected through a concept map and quickly evaluated using Machine Learning techniques. For this purpose, a concept map on the topic of mechanics was developed and used in 14 physics classes in Germany. After the student maps were analysed by two human raters on the basis of a four-level feedback scheme, a supervised Machine Learning algorithm was trained on the data. The results show very good agreement between the human and Machine Learning evaluations. Based on these results, integration into everyday school life is conceivable, especially as support for teachers. In this way, the teacher can interpret the automatic evaluation and use it in the classroom.
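The abstract above does not specify which map features or which supervised algorithm were used; the sketch below is a hedged illustration, assuming hand-crafted concept-map features and a scikit-learn random-forest classifier, of how four-level human ratings could be modelled and how machine-human agreement might be checked with weighted Cohen's kappa, a common choice for ordered rating scales.

```python
# Hedged sketch: the feature set, model choice, and data are illustrative
# assumptions, not the method reported in the cited study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical features per student map: e.g., number of propositions,
# correct links, incorrect links, hierarchy depth.
X = rng.integers(0, 20, size=(200, 4)).astype(float)
# Hypothetical human ratings on a four-level feedback scheme (0-3).
y = rng.integers(0, 4, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Machine-human agreement expressed as weighted Cohen's kappa; with the random
# placeholder data above the value will be near zero, whereas real map features
# and real ratings would be needed for a meaningful result.
pred = clf.predict(X_test)
print("weighted kappa:", cohen_kappa_score(y_test, pred, weights="quadratic"))
```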
... Black and Wiliam (2009) observe that although theorizing formative assessment has the potential to improve practice through a better understanding of learners' possible responses to feedback, it is surprising how effective many classroom teachers have been in the absence of such knowledge. Hunt and Pellegrino (2002) further claim that all teachers must be experts at formative assessment. For example, they must be aware of both the material that students are expected to understand and the various alternative and problematic ways in which students may fail to understand it. ...
... However, research on learning progressions is promising for addressing the lack of development in responding to information about student learning (Andrade, 2010). Moreover, the depth of this literature varies greatly across disciplines (Hunt & Pellegrino, 2002). Asghar (2012) raises a related challenge for tutors: the recent focus on constructing the underpinning theories of formative assessment, of which many will be unaware. ...
... According to Hunt and Pellegrino (2002), another challenge in employing formative assessment is that it is time-consuming. There is no point in formative assessment by a teacher if the teacher cannot identify, analyze, and respond to the problems of individual students. ...
Assessment is inevitable in education. Formative assessment plays a crucial role in enhancing students' learning and performance. With the growing understanding of the role of formative assessment, many countries in the world have considered formative assessment as the primary mechanism of assessment reform in education. In Cambodia, a shift in focus to formative assessment as its assessment reform, aimed at enhancing student learning, has been noticed. The present study employed a quantitative method to examine teachers' and learners' perceptions of formative assessment practices in enhancing learning in Cambodian EFL courses. The participants of this study were 30 teachers (3 females) and 50 students (22 males), all teaching or learning English as a Foreign Language. The results revealed that the respondents have a largely positive perception of formative assessment and highlighted a better understanding of formative assessment concepts, purposes, and development strategies. The results also indicated that participants perceived formative assessment as enhancing learning in EFL courses through various roles of feedback. The results showed that respondents perceived that assessors or teachers are required to have greater knowledge about formative assessment in order to assess students in formative ways. The time required was also perceived as a challenge in formative assessment practice. The findings also revealed that participants were more likely to use formative assessment in their EFL courses in the future. Future studies with more participants, mixed-method approaches, and cross-disciplinary research in Cambodia should be conducted in either private or public HEIs.
... Testing is not only a tool for grading, but also a teaching tool. A formative and integral assessment aims to aid the student in improving his or her capabilities and should be a habitual practice in classroom activities (16). For teachers, tests are useful to place the course in perspective and obtain feedback as to what the students have learned. ...
... For students, tests are an opportunity to show what they have learned and discover the extent and depth of their knowledge (16,26). Of course, effectively completing the learning-teaching cycle requires prompt feedback, preferably immediately after concluding tests (30). ...
... Consequently, professors could better plan strategies aimed at improving students' reflections. In this process, one would examine the concepts the students are mastering and, more importantly, those they are failing to understand (16,20,21,26,30). ...
The present study examined the relationship between second-year medical students' group performance and individual performance in a collaborative-learning environment. In recent decades, university professors in the scientific and humanistic disciplines have successfully put into practice different modalities of collaborative approaches to teaching. Essentially, the collaborative approach refers to a variety of techniques that involve the joint intellectual effort of a small group of students, which encourages interaction and discussion among students and professors. The present results show the efficacy of collaborative learning, which, furthermore, allowed students to participate actively in the physiology class. Students' average grades were significantly higher when they engaged in single-best-response, multiple-choice tests as a student team compared with taking the same examinations individually. The method notably improved knowledge retention, as learning is more effective when performed in the context of a collaborative partnership. A selected subset of questions answered wrongly on an initial test, both individually and collectively, was used on a second test to examine student retention of the studied material. Grade averages were significantly improved, both individually and groupwise, when students responded to the subset of questions a second time, 1, 2, or 3 wk after the first attempt. These results suggest that the collaborative approach to teaching allowed a more effective understanding of course content, which meant an improved capacity for retention of human physiology knowledge.
... For students, collaborative quizzes and tests are an opportunity not only to express their knowledge but also to unveil potential misunderstandings (1,25,26). For instructors, they are useful for ensuring that students comprehend various principles correctly and, if not, for providing further explanation and correcting potential misconceptions (25,27). The high test scores achieved in both intervention groups render it difficult to detect between-group differences. ...
Collaborative teaching strategies such as peer instruction and conventional group work have previously been shown to enhance meaningful learning, but they have not previously been compared. In this present study, we compared the impact of solving quizzes with peer instruction and conventional group work on immediate learning in a laboratory exercise. A total of 186 second year medical students were randomized to solve two quizzes by either a peer instruction strategy (n = 93) or conventional group work (n = 93) during a mandatory laboratory exercise on respiratory physiology, after which all students completed an individual test. There was no difference in total test scores between groups, but students randomized to peer instruction obtained the highest test scores in solving simple integrated questions. Conversely, students randomized to conventional group work provided the best evaluations of the overall assessment of the laboratory exercise. In conclusion, different collaborative teaching strategies implemented during a laboratory exercise appear to affect immediate learning and student satisfaction differently.
... The test has the advantage of allowing them to keep learning, to know the level of the concepts learned, and to acquire new knowledge. Besides, the paradigm can be used to assess the retention of learned concepts [3,14,15]. ...
... Knowledge expressed orally or in writing carries the risk of errors in its transmission and, hence, of giving rise to misconceptions [13,15-17]. In collaborative learning, after the discussion, students receive feedback, thereby strengthening the efficiency of the learning process and preventing the acquisition of conceptual errors in a timely manner [18]. ...
... This does indicate a lack of training for teachers in formative assessment. Hunt & Pellegrino (2002) also claimed that formative assessment "…requires that the assessor know in advance both the material that students are supposed to grasp and the different alternative and problematical ways in which students may fail to grasp it" (p. 75). ...
... Implementing formative assessment already requires a long time, and giving feedback to the students takes even longer, especially if the teacher needs to give one-to-one feedback. If the teacher is not able to recognize, evaluate, and respond to students' individual problems, then the formative assessment would not be meaningful (Hunt & Pellegrino, 2002). ...
The current study was conducted with the aim of exploring the practices and perceptions of Afghan EFL lecturers toward assessment. A second aim of the study was to explore the challenges the lecturers encounter in the implementation of formative assessment in their classes. To serve these basic objectives, a qualitative case study design was employed with three English language lecturers as the participants. Semi-structured interviews were used as the main instrument to collect data. The findings of the study indicated that all three lecturers maintained positive perceptions toward formative assessment and favored it over summative assessment. However, the study also discovered that the lecturers practice summative assessment more than formative assessment in their classrooms. This, as indicated by the lecturers, was due to the fact that their choices of assessment practices were dictated by certain challenges such as university rules and policies, large classes, and time constraints. Lastly, some suggestions are made that may prove useful for effectively applying formative assessment in the Afghan EFL context.
... (Bloom, Hastings, & Madaus, 1971). To put it simply, formative assessment is "integrated within instruction and aimed at increasing learning" (Hunt and Pellegrino 2002). ...
... However, a teacher's ability to use the verbal mode effectively would amount to truly "integrated testing, in which testing is conducted unobtrusively, as part of normal classroom activity" (Hunt and Pellegrino 2002). Again, this ability needs to be nurtured and developed. ...
This study examines how technology, in the form of a classroom-specific network, is harnessed by teachers to informate (as opposed to automate) classroom-based formative assessment. The technology used is the Classroom Performance System (CPS), a wireless response system that provides instant, collated feedback to all the students in class, based on their responses to questions (mainly MCQ, true/false kind of questions). The study involved 107 teachers from 30 schools who were involved in designing, implementing and reflecting on lesson activities in various subjects using CPS. The main purpose of the study was to investigate if and how teachers made use of data generated by CPS to tailor their instructional practices to improve students' learning. Our findings revealed not just a range of strategies used by the teachers, but also a range of interactions around the data generated by CPS. Three models of interactions emerged: Judicious teacher/passive student; judicious teacher/responsive student; and advisory teacher/judicious student.
... There has been a tremendous push for continuous assessment in schools to ensure that students are being prepared adequately and to hold both students and teachers accountable for the quality of student preparation. However, it is of concern that the testing methods being used are not the best choices to meet the accountability goal and even have a potential for harming the system (Hunt & Pellegrino, 2002). Many of the current assessment practices that serve certification and prediction functions well are not well suited for improving learning (Broadfoot, 2000). ...
... Teachers also need to distinguish between disruptive testing, in which evaluation takes place outside the context of normal instruction, and integrated testing, in which testing is conducted unobtrusively, as part of normal classroom activity (Hunt & Pellegrino, 2002). The development of the new assessment regime (NCEA) has dominated teaching in secondary schools over recent years (Lovell, 2004). ...
Assessment is one of the key strategies that, if used correctly, can effectively enhance student learning. This study explores senior ESOL (English for Speakers of Other Languages) students' experiences of and attitudes towards formative assessment in the mainstream classroom. The purpose of this study was to investigate how formative assessment might be used effectively to enhance ESOL students' learning from the perspective of senior ESOL students. Data were collected using mixed methods including questionnaires and follow-up interviews with a range of participants from different ethnic backgrounds. One hundred ESOL students participated in the questionnaire and 22 were subsequently interviewed. The questionnaire provided data on the majority of ESOL students' experiences and attitudes. Then the interviews allowed participants to describe their experiences and attitudes in more detail. The qualitative methodology used also provided the opportunity for the participants to explain any possible reasons for their attitudes. This study revealed that all the participants had some experiences in some of the formative assessment activities used in the classroom. The participants' perspectives also indicated that ESOL students' high expectations for their academic achievement relied on teachers' understanding of their needs as well as effective classroom practice. Feedback was the formative assessment method most favoured by the ESOL students because the students could find out what they had done correctly and where they had gone wrong. Questioning was not liked by the participants, partly because of the language barrier limiting their understanding of the questions, partly because of the way teachers asked the questions (i.e. no wait-time), and partly because of cultural sensitivity (i.e. not wanting to draw attention to oneself). However, the value of questioning as a formative assessment method was recognised by a number of the participants. Self-assessment was liked and found to be useful by some participants. Peer assessment was not liked because of the students' mistrust of their peers' ability to mark their work correctly. Sharing learning objectives and assessment criteria was regarded as an important way to enhance learning as long as teachers provided clear explanations. The study raises questions about the effectiveness of existing formative assessment activities used in the classroom and suggests some specific strategies that may help ESOL students learn more effectively. This study clearly indicates that not all formative assessments are equally effective for students of different backgrounds. The choice of formative assessment methods and the way they are administered in class are both important in determining their success for the participants. ESOL students have their own characteristics and needs (e.g. language limitations) and these should be taken into consideration when choosing and implementing formative assessment methods. The study is of interest in particular to those who teach ESOL students in mainstream classrooms but also has strong links to the field of cross-cultural communication, and to the study of effective teaching and learning.
... They established that more teachers are likely to employ formative evaluation in their classes. On the other hand, Hunt and Pellegrino (2002), while looking at issues, examples, and challenges in formative assessment, noted that fewer teachers utilize formative assessment. They argued that formative assessment requires teachers to be experts at it, such as knowing in advance both the materials that students are supposed to grasp and the different alternative and problematical ways in which students may fail to grasp them. ...
Competency Based Curriculum (CBC), a curriculum which calls for a paradigm shift in the assessment of learners using Competency Based Assessment (CBA), was introduced in Kenya in 2017. The need for a paradigm shift in assessment from the 8-4-4 content-based curriculum to the 2-6-3-3-3 CBC necessitated this study, which sought to determine factors influencing the implementation of CBA in Kenya by looking at the extent of utilization of CBA tools and types. The objective was to investigate the extent of utilization of CBA tools and types in Grade 6 in selected schools in Kenya. The study was conducted in Trans-Nzoia, Bungoma and Busia Counties. It was grounded on Stufflebeam's CIPP model, targeting head-teachers and their respective Grade 6 science and technology teachers. A mixed-method research design was used. Cluster sampling was used to sample the three Counties. Stratified sampling was then used to categorize the schools into public and private, whereas simple random sampling was used to choose participating schools and to select participating teachers in schools with two or more science and technology teachers. A questionnaire for Grade 6 science and technology teachers and an interview guide for head-teachers were used to collect data. Analysis was done descriptively in order to bring out overall descriptions of the extent of utilization of CBA tools and types. This study was necessary to determine the extent of utilization of CBA tools and types in Kenya as one of the factors influencing CBA implementation. The study found that CBA tools and types are utilized to some extent in Grade 6 in selected schools in Kenya (mean 2.8752, SD 1.1697). The study recommended that the government develop comprehensive policy support for CBA, place a high priority on allocating necessary resources to schools, establish robust monitoring and evaluation to continuously assess CBA implementation, and organize regular professional development programs on assessment for teachers.
... However, the implementation of formative assessment in teachers' daily practice remains a substantial challenge because teachers often adopt only a few of the principles associated with formative assessment (goal setting, gathering data about students' learning, giving feedback to students). Moreover, they do not fully integrate it into their teaching approach (Hunt and Pellegrino, 2002; Bennett, 2011; Wylie and Lyon, 2015; Heitink et al., 2016; Yan and Brown, 2021; Lui and Andrade, 2022). Exploring the antecedents and conditions of formative assessment practices is thus of prime importance when seeking to better understand which personal and contextual factors enable and maximize the use of formative assessment in the classroom (Heitink et al., 2016; Schildkamp et al., 2020; Yan et al., 2022). ...
Introduction
In Luxembourg, competency-based practices (CBP), differentiated instruction (DI), and formative assessment (FA) have been imposed by the 2009 school law. Referring to the Theory of Planned Behavior (TPB), this study examined factors influencing the implementation of these practices in classrooms.
Methods
Teachers participated in an online survey assessing their attitudes, subjective norm, perception of behavioral control, intention, and pedagogical practices regarding CBP, DI, or FA. Measurement models were used in structural equation models testing the TPB.
Results
Although the main relationships postulated by the theory were confirmed, some inconsistencies were observed depending on the targeted practices. Structural equation TPB models controlling for gender, experience, teaching level, and socio-economic level of the school population explained between 20 and 45% of the variance in teachers' practices, and between 65 and 75% of the variance in teachers' intention to use these practices.
Discussion
The relevance of the TPB for studying teaching practices and implications for professional training are discussed.
... They mainly address the effective use of formative assessments in classroom settings. Research has shown that formative assessments (or assessment for learning) can produce major improvements in learning and support teachers to become aware of the preconceptions and problem-solving techniques that their students bring into the classroom (Hunt & Pellegrino, 2002). ...
... On the other hand, our research concludes that the most widespread problem, both for teachers and students, is the time involved in the development of formative assessment. For other authors, this is not the main problem (Hunt & Pellegrino, 2002), so this answer may represent a peculiarity of the context studied (Spanish universities) and may open up interesting lines of research for the future. This is mainly because, in the Spanish context, the time devoted to formative assessment has been identified as one of the most widespread difficulties in any educational initiative regardless of level (Aragonés et al., 2020; Cubero & Ponce, 2020; Pineda et al., 2019). ...
Formative assessment is a strategy that optimizes the learning process at any educational level; however, its use is not very frequent, as the literature review shows. In this paper, we analyse the use of formative assessment in online postgraduate studies (master's degrees) in Spanish universities. Our sample consisted of 31 coordinators of online master's degrees, and we analysed the results obtained from a questionnaire with open questions using NVIVO software. Through qualitative analysis of the information, supported by cross-queries of codes and attributes, we have considered formative assessment according to fields of knowledge, the type of digital tools used, and the main difficulties identified. In this type of online master's degree, our data show that most of the teachers use this type of formative e-assessment to provide feedback to their students and also as part of the final marks of the courses, so these results are relevant for understanding E-assessment strategies for master's degrees. Finally, the main limitation of the study is the fact that it uses a sample limited to the geographical context of Spain. Nevertheless, these data are representative of E-assessment in Spanish master's degrees and may be of interest for future research, for comparative studies in other contexts, and even for comparisons with face-to-face or blended-learning master's degrees.
... Hunt and colleagues [21] also pointed out that creating good FA MCQ items is a challenge because the aim is not to have students check the correct answers with a certain probability; rather, the possible answers (including the incorrect ones) should address the expected level of students' knowledge [21]. Furthermore, Hunt and colleagues ...
Objectives:
The present study aimed to examine whether medical students benefit from an open-book online formative assessment as a preparation for a practical course.
Methods:
A between-subjects experimental design was used: participants - a whole cohort of second-year medical students (N=232) - were randomly assigned to either a formative assessment that covered the topic of a subsequent practical course (treatment condition) or a formative assessment that did not cover the topic of the subsequent course (control condition). Course-script-knowledge, as well as additional in-depth-knowledge, was assessed.
Results:
Students in the treatment condition had better course-script knowledge, both at the beginning, t(212) = 4.96, p < .01, d = 0.72, and at the end of the practical course, t(208) = 4.80, p < .01, d = 0.68. Analyses of covariance show that this effect is stronger for those students who understood the feedback that was presented within the formative assessment, F(1, 213) = 10.17, p < .01. Additionally, the gain of in-depth-knowledge was significantly higher for students in the treatment condition compared to students in the control condition, t(208) = 3.68, p < .05, d = 0.72 (0.51).
Conclusions:
Students benefit from a formative assessment that is related to and takes place before a subsequent practical course. They have a better understanding of the topic and gain more in-depth-knowledge that goes beyond the content of the script. Moreover, the study points out the importance of feedback pages in formative assessments.
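For readers less familiar with the statistics reported in the Results above, the following sketch shows how an independent-samples t-test and Cohen's d of the kind reported (e.g., t(212) = 4.96, d = 0.72) are computed; the score vectors are randomly generated stand-ins, not the study's data.

```python
# Illustrative only: random data standing in for treatment / control scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(loc=0.7, scale=1.0, size=110)  # hypothetical knowledge scores
control = rng.normal(loc=0.0, scale=1.0, size=104)

# Independent-samples t-test (equal variances assumed, as in the classic test).
t, p = stats.ttest_ind(treatment, control)

# Cohen's d with a pooled standard deviation.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```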
... We call these tests. They require the interruption of normal instruction and are sometimes called "disruptive" or "drop in from the sky" testing (Hunt & Pellegrino, 2002). Technological limitations on interacting with students, providing experience, and capturing relevant data, especially in the classroom, often lead to dramatic truncation in the goals and aspirations of assessment designers. ...
Background
It would be easy to think the technological shifts in the digital revolution are simple incremental progressions in societal advancement. However, the nature of digital technology is resulting in qualitative differences in nearly all parts of daily life.
Purpose
This paper investigates how the new capabilities for understanding, exploring, simulating, and recording activity in the world open possibilities for rethinking assessment and learning activities.
Research Design
This analytic essay enumerates three changes to assessment likely to result from the ability to gather data from everyday learning activities.
Findings
The digital revolution allows us to use technology to extend human abilities, represent the world, and collect and store data in previously unavailable ways, all opening new possibilities for the unobtrusive ubiquitous assessment of learning. This is a dramatic shift from previous eras in which physical collection of data was often obtrusive and likely to cause reactive effects when inserted into daily activity. These shifts have important implications for assessment theory and practice and the potential to transform how we ultimately make inferences about students.
Conclusions/Recommendations
The emerging universality of digital tasks and contexts in the home, workplace, and educational environments will drive changes in assessment. We can think about natural integrated activities rather than decontextualized items, connected social people rather than isolated individuals, and the integration of information gathering into the process of teaching and learning, rather than as a separate isolated event. As the digital instrumentation needed for educational assessment increasingly becomes part of our natural educational, occupational, and social activity, the need for intrusive assessment practices that conflict with learning activities diminishes.
... Design and development teams can aim for optimal performance by drawing on the research literature in organizational theory. Their projects can grow out of their ability to access and apply the most up-to-date research on how and what students learn (Donovan et al., 1999; Kolodner, 1991; McGinn and Roth, 1999; Pellegrino, 2002), as well as on specific challenges in education such as the assessment of learning (Hunt and Pellegrino, 2002). It is, however, the team members' own use of research methods that will strengthen and align the development of technology for teaching and learning in relation to a larger vision for education. ...
Providing a brief history of the work of design and development (D&D) teams grounded in the relevant literature, this chapter then offers several prominent examples from the past 65 years. These examples allow the comparison of the context for design and development focused on industry, government, and education. Although the implications for different approaches to teamwork for design and development are discussed, we highlight models for developing teams that are especially useful to those who are new to the field. We conclude with challenges faced by design and development teams specifically in regard to educational communications and technology. Design and development teams should: (1) organize in ways informed by the research, (2) access and apply research on teaching and learning, (3) conduct research for ongoing evaluation of technology in education, and (4) contribute to the research in the field from their own innovations, mindful of the social consequences of design and development.
... The nature of the feedback in CBT merits careful attention (Kulhavy and Stock, 1989; Moreno and Mayer, 2005; Shute, 2006). Testing influences the course of learning in formative evaluation but simply scales a learner's mastery in summative evaluation (Hunt and Pellegrino, 2002; Shute, 2006). A test score alone is adequate feedback for informing the learner on how well they are doing but is not useful for clarifying specific deficits in knowledge or skill. ...
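The distinction drawn above, a bare score versus diagnostically useful feedback, can be made concrete with a small sketch; the item-to-knowledge-component mapping and the function names below are hypothetical, not drawn from the cited work.

```python
# Hypothetical mapping of test items to knowledge components: formative use
# reports per-component deficits, not just a summative total score.
from collections import defaultdict

ITEM_COMPONENT = {  # item id -> knowledge component (illustrative)
    "q1": "Newton's 3rd law", "q2": "Newton's 3rd law",
    "q3": "free-body diagrams", "q4": "friction",
}

def summative_score(responses):
    """Total score: fine for grading, silent about what went wrong."""
    return sum(responses.values())

def formative_report(responses):
    """Per-component error counts: a basis for targeted feedback."""
    deficits = defaultdict(int)
    for item, correct in responses.items():
        if not correct:
            deficits[ITEM_COMPONENT[item]] += 1
    return dict(deficits)

responses = {"q1": True, "q2": False, "q3": False, "q4": True}
print(summative_score(responses))   # 2 -- tells the learner how well, not why
print(formative_report(responses))  # which components need further explanation
```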
... These generalizations provide a common language to describe student ideas that helps students, researchers, and teachers to communicate. Facets are context-sensitive fragments of understanding that students can demonstrate through their answers to diagnostic multiple-choice questions, hand-coding or automated analysis of text, or through a Socratic dialogue with a student in a classroom or online (Hunt and Pellegrino 2002). ...
A number of educational researchers have developed pedagogical approaches that involve the teacher in discovering and helping to correct misconceptions that students bring to their study of their subject matter. During the last decade, several computer systems have been developed to support teaching and learning using this kind of approach. A central conceptual construct used by these systems is the "facet" of understanding: an atomic diagnosable unit of belief. A formidable challenge to applying such pedagogical approaches to new topic areas is the task of discovering and organizing the facets for the new subject area. This paper presents a taxonomy of misconceptions and a methodology for going about the task of preparing a database of facets. Important issues include the generality and diagnosability of facets, granularity of facets, and their placement on a scale of problematicity. Examples are drawn from the subjects of physics and computer science and in the context of two computer systems: the Diagnoser and INFACT.
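The abstract above does not define a facet record at the code level; the dataclass below is a speculative sketch of what one entry in such a facet database might contain, with the field names, codes, and example facets chosen purely for illustration (they are not the Diagnoser or INFACT schema).

```python
# Hedged sketch of a facet record; fields and example entries are illustrative
# assumptions, not the actual Diagnoser/INFACT database format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Facet:
    code: str                 # short identifier used when coding student responses
    description: str          # the diagnosable unit of belief, stated in plain language
    problematicity: int       # 0 = acceptable understanding ... higher = more problematic
    diagnostic_items: List[str] = field(default_factory=list)  # items that can elicit this facet

facet_db = [
    Facet("F-01", "Heavier objects fall noticeably faster than lighter ones",
          3, ["fall-q1", "fall-q4"]),
    Facet("F-02", "Without air resistance, all objects fall with the same acceleration",
          0, ["fall-q1"]),
]

# Looking up the facets a given diagnostic item can distinguish:
print([f.code for f in facet_db if "fall-q1" in f.diagnostic_items])
```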
... These types of assessment lower self-efficacy for lower-achieving students, and cue them to make ability attributions for poor performances. On the positive side, Boston (2002) suggested that because formative assessment occurs during learning, it tends to evoke attributions of effort rather than ability; thus, effort becomes the central focus. Despite research and other literature supporting the efficacy of formative assessment for learning, Hunt and Pellegrino (2002) noted that formative assessment has not been widely adopted in classroom practice. They suggested several reasons for the lack of adoption. ...
The purpose of this article is to define formative assessment, outline what is known about the prevalence of formative assessment implementation in the classroom, establish the importance of formative assessment with regards to student motivation and achievement, and present the results of a content analysis of current educational psychology textbooks. Several key definitions of formative assessment are examined, and various means of conducting formative assessment are outlined. Numerous studies that have examined the effects of formative assessment on motivation and achievement are summarized. The theoretical link between formative assessment and several important motivational constructs is established, and suggestions for future research are delineated.
... learning period. Several authors (e.g., Cole, Ryan & Kirk, 1995; Hunt & Pellegrino, 2002; Svinicki, 2001) have argued that every learning action should focus on the collaborative construction of knowledge between students and teacher and between the students themselves. In this constructive process, educational help is given to students to acquire meaningful knowledge (Cambridge, 2001; Riedinger, 2006). ...
This paper presents an alternative application of e-portfolio in a university student assessment context. A concept based on student collaboration (called netfolio) is developed, that differs from the classical e-portfolio concept. The use of a netfolio, a network of student e-portfolios, in a virtual classroom is explained through an exploratory study. A netfolio is more than a group of e-portfolios because it offers students a better understanding of learning objectives and promotes self-revision through participation in assessment of other students' learning, as indicated through their portfolios. Class student e-portfolios are interconnected in a unique netfolio such that each student assesses their peers' work and at the same time is being assessed. This process creates a chain of co-evaluators, facilitating a mutual and progressive improvement process. Results about teachers' and students' mutual feedback are presented and the benefits of the process in terms of academic achievements are analysed.
... Although personal experiences and understanding are difficult to contemplate because of their abstract and elusive nature, they are powerful guides to future actions (Freppon & MacGillvary, 1996). The focus of formative assessment is a constructivist process of self-assessment and self-development in which learning builds upon learning (Hunt & Pellegrino, 2002). Instructors are both teachers and learners, simultaneously engaging in a very personal activity to enable them to construct and reconstruct knowledge and meaning while teaching. ...
Reflective peer coaching is a formative model for improving teaching and learning by examining intentions prior to teaching, then reflecting upon the experience. The goal of reflective peer coaching is to promote self-assessment and collaboration for better teaching and ultimately better learning. There are obvious benefits to colleagues collaborating and sharing ideas, thoughts, and observations. However, many models of assessing teaching effectiveness focus on summative evaluation in which colleagues observe each other once or twice a year and fill out institutional evaluation forms. Rarely do colleagues engage in formative conversations about teaching that are guided by the instructor's personal goals and objectives. Reflective peer coaching necessitates a ten-minute planning conversation prior to the actual lesson and a ten-minute reflective conversation after the lesson. These conversations happen regularly and frequently to build self-awareness and self-assessment of the personal craft of teaching. The following article outlines the dynamics of the reflective peer coaching process as a formative assessment model that leads to better learning through improved teaching.
This review paper aims to discuss diverse issues faced by ESL learners in secondary education in Sri Lanka. Linguistic, psychological, social, and cultural factors, as well as applicable practices which hinder the development of L2, are discussed within a framework that draws on linguistics, psychology, education, and applied linguistics. Such challenges are understood in the context of educational and sociocultural situations in Sri Lanka. In view of these analyses, it becomes clear that there is a great demand for specific instructional approaches, improved teacher education, and changes in the curriculum that will benefit learners. Drawing on these findings, this paper discusses learner-centered strategies and elaborates on corrective measures for policymakers and educators to enhance ESL effectiveness in Sri Lanka.
Public libraries have embraced the popularity of maker education and makerspaces by integrating maker education in their program offerings, and by developing makerspaces that enable patrons to tinker and create products. But less attention has been paid to supporting librarians and maker educators in assessing the impact of these spaces. To expand assessment scholarship and practices related to public library makerspaces, we offer two contributions. First, we share findings from a qualitative research study in which we analyzed how 17 library staff and maker educators define success and identify evidence of success in their maker programs. The findings from that study, in conjunction with our collective experience as research partners working with public library makerspaces, laid the foundation for a series of analysis tools we developed to help stakeholders identify the assessment needs of such learning environments. The Properties of Success Analysis Tools (PSA Tools) represent our second contribution; these tools invite library staff and maker educators to reflect on and unpack their definitions of success in order to identify what features a relevant assessment tool should have.
Settler colonialism layers the intentional displacement, subjugation, and genocide of Indigenous Peoples by means of physical control over land, sea, rivers, and material resources. While functioning as a profoundly violent and exploitative system, settler colonialism also serves as a theatre of enduring resistance and activism. As schools operate in this socio-political context, they are not without intent or design. This chapter focuses on the use of critical arts-based assessments as tools to strategically grow learners’ criticality and reposition the act of teaching in dominant spaces in Initial Teacher Education (ITE). Utilising a multi-theoretical lens derived from Critical Race Theory and decolonial thought, we carefully unspool our approach to teaching, where pedagogy, content, and assessment are deployed as deliberate acts of resistance and activism. We use this foundation to engage with culturally-responsive pedagogical strategies, returning them to their theoretical roots, as deliberate tactics to counter the limiting factors we raise in this discussion. Our chapter is intended to draw attention to schools’ role in settler-colonial societies like Australia. It offers strategies that enable future teachers to advance their agency as educational leaders in dominant spaces.
The IIEP Learning Portal’s “Improve Learning” model focuses attention on five major components of the education system: learners and support structures, teachers and pedagogy, curriculum and materials, schools and classrooms, and education system management. Within each of these components, we present research briefs on five major issues—giving education planners a basic overview of a total of 25 areas they may need to address in order to improve learning outcomes and attain high-quality education systems.
The content of this research is designed to provide information on the use of a web-based formative assessment program in elementary and secondary school settings. The research was conducted in several schools implementing a formative assessment software program. The studies within the various school settings revealed a possible advantage of using a web-based formative assessment program in comparison to paper-based testing.
The author details a study of the feedback provided in four online courses and how adult students perceived and made use of that feedback. The relation of feedback to formative assessment is briefly considered. Students indicated a preference for feedback embedded in the document rather than summarized at the end of their work. Categories for types of feedback are established. A discussion of instructor presence as it relates to online courses and student written work submissions outside of threaded discussions is central to the findings in this study.
This study compared the efficacy of 3 approaches to professional development in middle school Earth science organized around the principles of Understanding by Design (Wiggins & McTighe, 1998) in a sample of 53 teachers from a large urban district. Teachers were randomly assigned to a control group or to 1 of 3 conditions that varied with respect to the conceptions of ideal curriculum use embedded within. Teachers either designed units of instruction, adopted units developed by expert Earth scientists and Earth science educators, or learned principled ways to adapt expert-designed curricula. Relying on data from surveys and independent ratings of naturally occurring teacher assignments, we used hierarchical linear modeling techniques to analyze the impacts of the professional development on how teachers planned and coordinated instruction. Our results suggest that to realize positive effects on both their planning and coordination of instruction, teachers need access to high-quality curriculum materials and professional development that helps them plan for principled adaptation of those materials.
In this article, the author clarifies formative learning and assessment for persons who are seeking additional ideas for courses, degree programs, or personal career goals. This scholarly review uses published literature to clarify differences between formative and summative assessment and to outline developmental issues that guide constructivist learning principles. The author provides illustrative teaching examples to demonstrate how to integrate formative learning and assessment into classroom and clinic educational practices. She concludes by advocating formative learning and assessment as a societally necessary educational step for the preparation of career-minded professionals.
This study investigated the effects and experiences of a mutual assessment framework (CoPf) in an online graduate course at a mid-west university. CoPf was integrated into the course structure as an innovative application of the standard e-portfolio assessment tool. Using a mixed method, the study first explored the effects of CoPf compared to the standard e-portfolio in relation to the promotion of revisions to students' work, students' final course grades, and interactions both between the students and the instructor and among students. Qualitative analysis was then conducted to explore the students' experiences in the CoPf course and how they perceived these experiences. Findings from the data analysis were presented and the contributions and implications of the study were discussed.
Tracing the progress of individual learners as they interact with computer-based learning environments using exploratory data analysis methods can be very useful in recognizing, understanding, and classifying students' learning behaviors and performance. The detailed activity logs recorded by a learning environment like Betty's Brain can be the basis for developing traces of student behavior, but they may be difficult to interpret without knowledge of the system's inner workings and architecture. Screen captures also provide trace information, but they typically contain distracting details that are not relevant to the process of interest. Visualization and interpretation of the learner's path is much easier in structured problem solving environments, but linking activities to learning behaviors is more complex in systems like Betty's Brain, where students have much more choice in their knowledge construction task. We have developed visualization schemes for Betty's Brain to trace the learner's progress in their knowledge construction tasks. We describe two of the visualization schemes in this paper, and then discuss how they may (1) help classroom teachers track their students' learning progress as they build their causal maps, and (2) inform the development of feedback rules for future versions of Betty's Brain.
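The activity-log format used by Betty's Brain is not given in the abstract; the sketch below assumes a simple list of timestamped activity records and plots one learner's trace with matplotlib, only to illustrate the kind of visualization described.

```python
# Hedged sketch: the log format and activity names are assumptions, not
# Betty's Brain's actual schema; it only illustrates plotting a learner trace.
import matplotlib.pyplot as plt

log = [  # (minute into session, activity type) -- hypothetical
    (1, "read"), (4, "add link"), (6, "add link"), (9, "query Betty"),
    (12, "quiz"), (15, "edit link"), (18, "quiz"),
]

activities = sorted({a for _, a in log})
y = {a: i for i, a in enumerate(activities)}

times = [t for t, _ in log]
levels = [y[a] for _, a in log]

plt.step(times, levels, where="post")
plt.yticks(range(len(activities)), activities)
plt.xlabel("minutes into session")
plt.title("Trace of one learner's knowledge-construction activities (illustrative)")
plt.show()
```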
In the total assessment of any system, there is a need to consider that the human is capable of a wide range of cognitive adaptations in managing system anomalies. Yet, it is virtually impossible to anticipate all of these potential adaptations. One solution is to focus on the cognitive demands of the system and to determine the minimum capacity in each cognitive skill the person must have to meet those demands. This approach demands an integrated cognitive assessment involving definition of the type and level of cognitive skills required by the system, and evaluation of the impact of various levels of human cognitive capacity on system performance. This approach has been instantiated in several efforts supported by the Department of Defense and NASA. It involves a taxonomy of cognitive skills, an armory of cognitive tests, and mathematical treatments to relate a person's capacity to system demands. There is a paradigm shift occurring in the cognitive performance assessment areas of operational test and evaluation. From the vantage point of the twenty-first century, there is an increasing realization that human cognition in complex, technological environments is an extremely plastic entity. The human is capable of interacting with the system by employing a variety of cognitive skills in a variety of combinations. While we may design a system to be operated on by the human in a particular way, and may test it based on that design, a person frequently finds ways to operate within the system that utilize a significantly different mix of cognitive skills than we anticipated. It is becoming clear that we need to move beyond assessment under optimal conditions to anticipate and evaluate the person's ability to function in degraded system environments. The problem is that introducing human flexibility into the testing equation makes it impossibly complex. It is difficult enough to design tests of a system where it is assumed that the human is optimally trained and functioning. If one now adds the complexity of potential human cognitive adaptations to the situation, the problem of designing adequate evaluations becomes formidable. One solution to this problem may be to abandon the attempt to probe every possible approach the human may take in operating a system, and rather to concentrate on the skills necessary for successful performance under any reasonable range of system conditions. In other words, what cognitive (and psychomotor) skills must the person have to successfully operate the system under any expected conditions? To answer that question, it will be necessary to develop a way to match the cognitive capabilities of the human to a variety of system demands. Necessarily, this will demand an innovative type of human cognitive testing as well as a technique or model that allows the measured capabilities to be matched to the various system demands. Ultimately, this means there is a need to introduce more ecologically valid techniques into the testing situation so that the limits of human cognitive abilities are factored into the arguably nonlinear demands of the real-world environment. This article presents a summary of 6 years of efforts devoted to investigating how these demands can be met. Several interrelated steps are involved in the solution, and they are schematically illustrated in Figure 1. Each of these steps is then described briefly below. Finally, a brief discussion of how this approach might be applied in the test and evaluation environment is provided.
Online quizzes are simple, cost-effective methods to provide formative assessment, but their effectiveness in enhancing learning and performance in medical education is unclear.
The purpose of this article is to determine the extent to which online quiz performance and participation enhances students' performance on summative examinations.
A retrospective case study investigating relationships between formative and summative assessment in terms of use and outcomes.
Online quiz scores and the rates of quiz participation were significantly correlated with corresponding performance on summative examinations. However, correlations were not dependent on the specific quiz content, and changes in patterns of quiz use were not reflected in corresponding changes in summative examination performance.
The voluntary use of online quizzes, as well as the score attained, provides a useful general indicator of student performance but is unlikely to be sensitive enough to direct an individual student's learning plan.
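In form (though not in data), the correlational result reported above reduces to a Pearson correlation between quiz use and summative performance; the vectors below are hypothetical stand-ins used only to show the computation.

```python
# Illustrative only: hypothetical quiz participation and exam scores for a cohort.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
quiz_participation = rng.integers(0, 11, size=150)   # number of quizzes attempted
exam_score = 50 + 2.0 * quiz_participation + rng.normal(0, 10, size=150)

r, p = pearsonr(quiz_participation, exam_score)
print(f"r = {r:.2f}, p = {p:.3g}")  # the kind of correlation the study reports
```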
The present review focuses on the development of a performance battery generation system that selects performance tests most applicable to particular jobs. First, a list or taxonomy of cognitive and performance skills that are involved in real-world jobs or missions is developed. Based on this list, an "armory" of performance tests probing those skills is identified. A new technique is then developed to select from that armory the minimum number of tests that optimally probe the demands of a specific job or mission. While specifics of these developments will continue to evolve, it is hoped that the general framework described here will help close the gap between laboratory testing and real-world tasks, and form the foundation of the way performance test batteries will be developed in the future.
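The review does not state how the minimum set of tests is selected from the armory; the greedy set-cover heuristic below is one plausible way to sketch such a selection, with the skill taxonomy and test names invented for illustration.

```python
# Hedged sketch: a greedy set-cover heuristic for picking a small battery of
# tests that covers a job's required cognitive skills. The skill taxonomy,
# test names, and the greedy method itself are illustrative assumptions.
TEST_SKILLS = {
    "digit span": {"working memory"},
    "n-back": {"working memory", "sustained attention"},
    "mental rotation": {"spatial reasoning"},
    "stroop": {"inhibition", "sustained attention"},
}

def select_battery(required_skills, test_skills):
    remaining = set(required_skills)
    battery = []
    while remaining:
        # Pick the test covering the most still-uncovered skills.
        best = max(test_skills, key=lambda t: len(test_skills[t] & remaining))
        if not test_skills[best] & remaining:
            break  # some required skill is not covered by any available test
        battery.append(best)
        remaining -= test_skills[best]
    return battery, remaining

battery, uncovered = select_battery(
    {"working memory", "spatial reasoning", "sustained attention"}, TEST_SKILLS)
print(battery, uncovered)  # e.g. ['n-back', 'mental rotation'] and an empty set
```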
How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.
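The abstract gives enough of the LSA method to sketch it: build a term-passage co-occurrence matrix, reduce it with a truncated SVD, and compare passages (or words) by cosine similarity in the reduced space. The toy corpus and the two dimensions below are illustrative; the paper reports that roughly 300 dimensions over a large representative corpus work best.

```python
# Minimal LSA sketch: term-passage counts -> truncated SVD -> cosine similarity.
# The toy corpus and 2 dimensions are illustrative; a real run would use a
# large body of text and on the order of 300 dimensions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "the doctor treated the patient in the hospital",
    "the physician examined the patient at the clinic",
    "the pilot flew the plane over the ocean",
]

counts = CountVectorizer().fit_transform(passages)   # term-passage matrix
lsa = TruncatedSVD(n_components=2, random_state=0)   # latent semantic space
vectors = lsa.fit_transform(counts)

# Pairwise cosine similarities in the latent space; the two medical passages
# are expected to form the most similar pair.
print(cosine_similarity(vectors))
```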
A timely complement to John Bruer's Schools for Thought, Classroom Lessons documents eight projects that apply cognitive research to improve classroom practice. The chapter authors are all principal investigators in an influential research initiative on cognitive science and education. Classroom Lessons describes their collaborations with classroom teachers aimed at improving teaching and learning for students in grades K-12. The eight projects cover writing, mathematics, history, social science, and physics. Together they illustrate that principles emerging from cognitive science form the basis of a science of instruction that can be applied across the curriculum.
The book is divided into three sections: applications of cognitive research to teaching specific content areas; applications for learning across the curriculum; and applications that challenge traditional concepts of classroom-based learning environments.
Chapters consider explicit models of knowledge with corresponding instruction designed to enable learners to build on that knowledge, acquisition of specified knowledge, and what knowledge is useful in contemporary curricula.
Contributors: Kate McGilly; Sharon A. Griffin, Robbie Case, and Robert S. Siegler; Earl Hunt and Jim Minstrell; Kathryn T. Spoehr; Howard Gardner, Mara Krechevsky, Robert J. Sternberg, and Lynn Okagaki; Irene W. Gaskins; the Cognition and Technology Group at Vanderbilt; Marlene Scardamalia, Carl Bereiter, and Mary Lamon; Ann L. Brown and Joseph C. Campione; John T. Bruer.
Everyone is in favor of "high education standards" and "fair testing" of student achievement, but there is little agreement as to what these terms actually mean. High Stakes looks at how testing affects critical decisions for American students. As more and more tests are introduced into the country's schools, it becomes increasingly important to know how those tests are used--and misused--in assessing children's performance and achievements. High Stakes focuses on how testing is used in schools to make decisions about tracking and placement, promotion and retention, and awarding or withholding high school diplomas. This book sorts out the controversies that emerge when a test score can open or close gates on a student's educational pathway. The expert panel proposes how to judge the appropriateness of a test, explores how to make tests reliable, valid, and fair, puts forward strategies and practices to promote proper test use, and recommends how decisionmakers in education should--and should not--use test results. The book discusses common misuses of testing, their political and social context, what happens when test issues are taken to court, special student populations, social promotion, and more. High Stakes will be of interest to anyone concerned about the long-term implications for individual students of picking up that Number 2 pencil: policymakers, education administrators, test designers, teachers, and parents.
A major hurdle in implementing project-based curricula is that they require simultaneous changes in curriculum, instruction, and assessment practices-changes that are often foreign to the students as well as the teachers. In this article, we share an approach to designing, implementing, and evaluating problem- and project-based curricula that has emerged from a long-term collaboration with teachers. Collectively, we have identified 4 design principles that appear to be especially important: (a) defining learning-appropriate goals that lead to deep understanding; (b) providing scaffolds such as "embedded teaching," "teaching tools," sets of "contrasting cases," and beginning with problem-based learning activities before initiating projects; (c) ensuring multiple opportunities for formative self-assessment and revision; and (d) developing social structures that promote participation and a sense of agency. We first discuss these principles individually and then describe how they have been incorporated into a single project. Finally, we discuss research findings that show positive effects on student learning and that show students' reflections on their year as 5th graders were strongly influenced by their experiences in problem- and project-based activities that followed the design principles.
Classroom Lessons: Integrating Cognitive Theory and Classroom Practice (see record 1994-98346-000) presents current research about applying cognitive theory to classroom instruction in grades kindergarten-12 and addresses the content and nature of teaching. The eight projects described here cover the areas of writing, mathematics, history, social science, and physics and demonstrate how the contributions of cognitive science may improve curricula. Among other topics, contributors describe explicit models of knowledge with corresponding instruction designed to enhance learning; games and activities aimed at helping students acquire specific knowledge and skills; and methods for combining the teaching of specific subjects with teaching learning and communication skills. Organized in three sections, the book focuses on domain-specific applications; across-the-curriculum applications; and classrooms as learning communities. Chapter topics include providing conceptual prerequisites of arithmetic to students at risk for school failure; a cognitive approach to teaching physics; enhancing the acquisition of conceptual structures through hypermedia; enhancing students' practical intelligence for school; teaching poor readers how to learn, think, and problem solve; visual word problems and learning communities; the CSILE project; guided discovery in a community of learners; and classroom problems, school culture, and cognitive research. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.
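The mechanics sketched in this abstract can be illustrated compactly: build a term-by-context co-occurrence matrix, reduce it with a truncated singular value decomposition, and compare words by cosine similarity in the latent space. The Python sketch below is only a minimal illustration under those assumptions; the toy corpus, the choice of two dimensions, and the similarity helper are ours, standing in for the large training corpus, weighting scheme, and roughly 300 dimensions used in the actual LSA work.

# Minimal LSA sketch (illustrative only, not the original implementation):
# build a term-by-document count matrix from a toy corpus, reduce it with a
# truncated SVD, and compare words by cosine similarity in the latent space.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
    "students read books about animals",
]

# Vocabulary and raw term-document counts. Full-scale LSA applies a
# weighting step and trains on a very large corpus before the SVD.
vocab = sorted({w for doc in corpus for w in doc.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(corpus)))
for j, doc in enumerate(corpus):
    for w in doc.split():
        X[index[w], j] += 1

# Truncated SVD: keep k latent dimensions. Around 300 dimensions are
# reported to work well for English; 2 is enough for this toy data.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
word_vecs = U[:, :k] * s[:k]  # word vectors in the reduced space

def similarity(a, b):
    # Cosine similarity between two words in the latent space.
    va, vb = word_vecs[index[a]], word_vecs[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print("cat ~ dog:  ", similarity("cat", "dog"))
print("cat ~ books:", similarity("cat", "books"))

In this toy run, "cat" and "dog" never appear in the same sentence, yet the reduced space places them close together because their contexts overlap, while "cat" and "books" stay far apart: a small-scale version of the indirect induction effect the abstract describes.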
Learning, Remembering and Understanding
Jan 1983
A L Brown
J D Bransford
R A Ferrara
J C Campione
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. "Learning, Remembering and Understanding." In J. H. Flavell and E. M. Markman (eds.), Handbook of Child Psychology, Vol. 3: Cognitive Development. (4th ed.) New York: Wiley, 1983.
Inside the Black Box: Raising Standards Through Classroom Assessment
Jan 1998
P Black
D Wiliam
Black, P., and Wiliam, D. "Inside the Black Box: Raising Standards Through Classroom Assessment." Phi Delta Kappan, 1998, 80, 139.
A Classroom Environment for Learning: Guiding Students' Reconstruction of Understanding and Reasoning
Jan 1996
J Minstrell
V Stimpson
Minstrell, J., and Stimpson, V. "A Classroom Environment for Learning: Guiding Students' Reconstruction of Understanding and Reasoning." In L. Schauble and R. Glaser (eds.), Innovations in Learning: New Environments for Education. Mahwah, N.J.: Erlbaum, 1996.
Curriculum and Evaluation Standards for School Mathematics
Jan 1989
National Council of Teachers of Mathematics. Curriculum and Evaluation Standards for School Mathematics. Reston, Va.: National Council of Teachers of Mathematics, 1989.
Principles and Standards for School Mathematics
Jan 2000
National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va.: National Council of Teachers of Mathematics, 2000.
National Science Education Standards
Jan 1996
National Research Council. National Science Education Standards. Washington, D.C.: National Academy Press, 1996.
Problem-Based Macro Contexts in Science Instruction: Theoretical Basis, Design Issues, and the Development of Applications
Jan 1995
R D Sherwood
Sherwood, R. D., and others. "Problem-Based Macro Contexts in Science Instruction: Theoretical Basis, Design Issues, and the Development of Applications." In D. Lavoie (ed.), Toward a Cognitive-Science Perspective for Scientific Problem Solving. Manhattan, Kan.: National Association for Research in Science Teaching, 1995.
Benchmark Lessons and the World Wide Web: Tools for Teaching Statistics
Jan 1997
A Schaffner
Schaffner, A., and others. "Benchmark Lessons and the World Wide Web: Tools for Teaching Statistics." In Proceedings of the Second International Conference on the Learning Sciences. Mahwah, N.J.: Erlbaum, 1997.
Student-Centered Classroom Assessment
Jan 1994
R Stiggins
Stiggins, R. Student-Centered Classroom Assessment. Columbus, Ohio: Merrill, 1994.
Doing with Understanding: Lessons from Research on Problem and Project-Based Learning
Jan 1998
271-312
B Barron
Barron, B. J., and others. "Doing with Understanding: Lessons from Research on Problem and Project-Based Learning." Journal of Learning Sciences, 1998, 7, 271-312.
Creating Contexts for Community-Based Problem Solving: The Jasper Challenge Series
Jan 1995
B Barron
Barron, B., and others. "Creating Contexts for Community-Based Problem Solving: The Jasper Challenge Series." In C. N. Hedley, P. Antonacci, and M. Rabinowitz (eds.), Thinking and Literacy: The Mind at Work. Mahwah, N.J.: Erlbaum, 1995.
A Collaborative Classroom for Teaching Conceptual Physics
Jan 1994
E Hunt
J Minstrell
Hunt, E., and Minstrell, J. "A Collaborative Classroom for Teaching Conceptual Physics." In K. McGilly (ed.), Classroom Lessons: Integrating Cognitive Theory and Classroom Practice. Cambridge, Mass.: MIT Press, 1994.
From Visual Word Problems to Learning Communities: Changing Conceptions of Cognitive Research
Jan 1994
Cognition and Technology Group at Vanderbilt. "From Visual Word Problems to Learning Communities: Changing Conceptions of Cognitive Research." In K. McGilly (ed.), Classroom Lessons: Integrating Cognitive Theory and Classroom Practice. Cambridge, Mass.: MIT Press, 1994.
The Jasper Project: Lessons in Curriculum, Instruction, Assessment, and Professional Development
Jan 1997
Cognition and Technology Group at Vanderbilt. The Jasper Project: Lessons in Curriculum, Instruction, Assessment, and Professional Development. Mahwah, N.J.: Erlbaum, 1997.
Problem-Based Macro Contexts in Science Instruction: Theoretical Basis, Design Issues, and the Development of Applications