Formative Assessment: Getting the Focus Right

Dylan Wiliam, ETS
Writing in 1967, Michael Scriven suggested two roles that evaluation might play. On the
one hand, “it may have a role in the on-going improvement of the curriculum” (Scriven,
1967, p. 41), while in another role, “the evaluation process may serve to enable
administrators to decide whether the entire finished curriculum, refined by use of the
evaluation process in its first role, represents a sufficiently significant advance on the
available alternatives to justify the expense of adoption by a school system” (pp. 41-42).
He then proposed “to use the terms 'formative' and 'summative' evaluation to qualify
evaluation in these roles” (p. 43).
Two years later, Benjamin Bloom suggested that the same distinction might be applied to
the evaluation of student learning—what we today would tend to call “assessment”. He
acknowledged the traditional role that tests played in judging and classifying students,
but noted that there was another role for evaluation:
Quite in contrast is the use of "formative evaluation" to provide feedback and correctives at
each stage in the teaching-learning process. By formative evaluation we mean evaluation by
brief tests used by teachers and students as aids in the learning process. While such tests may
be graded and used as part of the judging and classificatory function of evaluation, we see
much more effective use of formative evaluation if it is separated from the grading process
and used primarily as an aid to teaching. (Bloom, 1969, p. 48)
Explicit in these early uses is that the term “formative” cannot be a property of an
assessment. As Bloom makes clear, the same tests could be used for formative or
summative uses, although he suggests that the formative use will be less effective if the
tests are part of the grading process. The crucial feature of formative evaluations, for both
Scriven and Bloom, is that the information is used in some way to make changes.
Whether it is a curriculum or student achievement that is being evaluated, the evaluation
is formative if the information generated is used to make changes to what would have
happened in the absence of such information. In the same way that one’s formative
experiences are those experiences that shape us as individuals, formative evaluations are
those that shape whatever is being evaluated. An assessment of a curriculum is formative
if it shapes the development of that curriculum. An assessment of a student is formative if
it shapes that student’s learning. Assessments are formative, therefore, if and only if
something is contingent on their outcome, and the information is actually used to alter
what would have happened in the absence of the information.
Of course, the time-scale for this depends on the decisions that need to be made. For
example, a science supervisor in a school district may need to plan in the spring the
workshops that she will offer to teachers during the summer. She may look at the scores
obtained by students on last year’s state tests, and find that the levels of performance in
some domains are lower, relative to the state average, than in others. She could then
use this information to plan workshops on the areas where performance was weakest,
thus meeting the learning needs of the science teachers in the district in a way that would
not have been possible, or at least would have been unlikely, without that information. In
this example, the information about the performance of the students in the district was
used by the supervisor to adapt her plans for the summer workshops in order to better
meet the teachers’ learning needs. At the other extreme, a language arts teacher might ask
a class of students the following question:
Which of these is a good thesis statement?
A) The typical TV show has 9 violent incidents
B) There is a lot of violence on TV
C) The amount of violence on TV should be reduced
D) Some programs are more violent than others
E) Violence is included in programs to boost ratings
F) Violence on TV is interesting
G) I don’t like the violence on TV
H) The essay I am going to write is about violence on TV
and require each student to respond by holding up one (or more) of a set of cards labeled
A, B, C, D, E, F, G and H. At this point, the teacher has created a “moment of
contingency”—a point in the instructional sequence where the instruction can change
direction in light of evidence about the students’ achievement, thus allowing her to adapt
the instruction to better meet their learning needs. If all students choose option C, the
teacher can move on, reasonably confident that all students in her class understand what
constitutes a good thesis statement. If most of the students have answered incorrectly,
the teacher has created a “teachable moment”: she might choose to review the work on
thesis statements with the class. But
if some have answered correctly while others have not, then she might initiate a class
discussion. Moreover, because she knows which students chose which option, she can
use this information to guide the discussion more effectively.
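The branching here is simple enough to state precisely. The sketch below is a minimal illustration, not a procedure proposed in the literature: it tallies the cards held up and maps the distribution of responses onto the three moves just described. The 80% and 40% thresholds are assumptions made for the example.

```python
# A minimal sketch, not a procedure from the article: tally the cards held
# up and map the distribution of responses onto an instructional move.
# The 80% and 40% thresholds are illustrative assumptions.

from collections import Counter

CORRECT = "C"  # the defensible thesis statement in the example above

def next_move(cards, mastery=0.8, reteach=0.4):
    """Return the teacher's next move, given one card letter per student."""
    tally = Counter(cards)
    share_correct = tally[CORRECT] / len(cards)
    if share_correct >= mastery:
        return "move on"      # nearly everyone chose C
    if share_correct <= reteach:
        return "reteach"      # most chose incorrectly: review thesis statements
    return "discussion"       # mixed responses: have the class argue the options

print(next_move(list("CCBCHCCGCCBC")))  # 8 of 12 correct -> "discussion"
```

The particular thresholds matter less than the contingency itself: each possible distribution of responses maps onto a different instructional move, and because the teacher also knows which students chose which option, the move can be targeted.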
These two examples illustrate the extreme ends of a continuum. The science supervisor is
engaged in what we might term “long-cycle” formative assessment. The cycles here can
be years long. The results of tests that students took in March 2005 might be used to plan
workshops for teachers in the summer of 2006, but these could not affect student scores
on state tests until the tests taken in March 2007—and these results might not be known
until summer 2007. At the other extreme, the language arts teacher is using a cycle that is
minutes, if not seconds, in duration—what we might call “short-cycle” formative
assessment. The focus of many of the studies in this issue has been somewhere between
these two extremes—what we might call “medium-cycle” formative assessment. Table 1
below gives the focus and typical duration of each cycle, based on Wiliam and
Thompson (2006).
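To make the long-cycle case concrete: the supervisor’s analysis amounts to computing, for each tested domain, the gap between district performance and the state average, and choosing the weakest domains as workshop topics. A minimal sketch, with invented domain names and scores:

```python
# A minimal sketch of the long-cycle analysis described above; the domain
# names and scores are invented for illustration.

district  = {"earth science": 61, "life science": 72, "physics": 58, "inquiry": 70}
state_avg = {"earth science": 66, "life science": 70, "physics": 69, "inquiry": 71}

# Negative gaps mark domains where the district trails the state.
gaps = {domain: district[domain] - state_avg[domain] for domain in district}
workshop_topics = sorted(gaps, key=gaps.get)[:2]
print(workshop_topics)  # -> ['physics', 'earth science']
```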
What makes an assessment formative, therefore, is not the length of the feedback loop,
nor where it takes place, nor who carries it out, nor even who responds. The crucial
feature is that evidence is evoked, interpreted in terms of learning needs, and used to
make adjustments to better meet those learning needs.
Type           Focus                         Length
Long-cycle     between instructional units   four weeks to one year or more
Medium-cycle   between lessons               one day to two weeks
Short-cycle    within a single lesson        five seconds to one hour

Table 1: Types of formative assessment
All the papers in this special issue highlight these aspects to a greater or lesser extent.
The analysis by Ruiz-Primo and Furtak shows that the teachers who most consistently
elicit the right kinds of information (conceptual eliciting questions), who have ways of
interpreting the students’ responses in terms of learning needs, and who can use this
information to adapt their instruction, generate higher levels of student achievement.
The paper by Niemi et al. examines the use of representations in mathematics classrooms
in terms of a five-step model of the teaching and assessment process. Their model is
prescriptive rather than descriptive; while there can be little doubt of the utility of proper
content analysis before beginning instruction, there is little evidence that this is what
teachers actually do, nor is there much evidence that teachers take much account of
students’ prior knowledge. Their analysis focuses on the key role of the process of
representation, although here representations are restricted to external (rather than, say,
cognitive) representations, akin to what Bruner (1996) calls oeuvres (“works”). Here
again the importance of eliciting the right information (what representations should we
get students to think about?), and making sense of their responses, comes through clearly.
In this regard, the “archaeology” of the assessment may be relevant. It is notable that in
many of the papers in this volume, and in most of the research reported in recent years,
teachers have tried to adapt assessments originally designed for summative purposes (e.g.
grading) for formative purposes. In the paper by Gearhart et al., while one teacher did use
whiteboards to elicit responses from groups that she could use to adapt instruction in real
time, she and the other two teachers focused mainly on more or less formal assessment
episodes with rubrics that could support summative inferences. As
a result, the evidence collected by the teachers was more useful for describing where
students were than for informing how to move them forward.
Similar issues are raised in the paper by Aschbacher and Alonzo. Science notebooks have
potential as formative assessments—i.e. they can generate evidence of student
achievement that would allow a teacher to adapt her instruction—but this happens only
when the prompt used for eliciting evidence is structured appropriately. For students in
Mrs Cruz’s class, because the task was too highly structured, all we learn is whether the
students can follow directions. In contrast, in those classes where too little guidance was
given, we get useful information on the understanding of some students, but for other
students, we learn very little. In particular, we do not know whether the lack of evidence
is due to lack of understanding of the science, or lack of understanding about what they
were being asked to do. The structure of the notebook entries in Mrs Perez’s class,
however, increases the disclosure of the task (Wiliam, 1992)—the likelihood that “if they
know it, they show it”.
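Read probabilistically—a gloss for this discussion rather than the original 1992 formulation—disclosure is roughly the chance that a student who has the targeted understanding produces visible evidence of it:

$$ \text{disclosure} \approx P(\text{shows it} \mid \text{knows it}) $$

An over-structured prompt, as in Mrs Cruz’s class, can push this probability toward 1 while also producing responses whether or not the understanding is there, so the evidence stops discriminating; an under-structured prompt lowers it for some students, leaving a blank entry ambiguous between not knowing the science and not knowing the task.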
The nature of the domain being studied also has profound implications for the kinds of
assessments that are likely to be useful. In science, we can often itemize the knowledge
we want students to acquire. This makes it relatively straightforward to move from
monitoring (is learning taking place?) to diagnosis (what is not being learned?) to action
(what to do about it?). In a domain such as reading, however, the cause of the problem is
much less clear. Scarborough (2001) shows that skilled reading involves the simultaneous
articulation of a large number of skills, including phonological awareness, decoding,
sight recognition of familiar words, background knowledge of the subject of the text,
vocabulary, knowledge about language structures, inferential skills, and even knowledge
of literacy concepts such as genre. The paper by Bailey and Drummond shows that early-
years teachers can generally identify which students are struggling, but are less skilled at
identifying the causes of the failure to progress. As with the other papers in this issue, the
first challenge is not how to interpret the evidence in terms of student learning needs, but
how to generate the right evidence in the first place.
Taken as a whole, the papers in this issue make important contributions to our
understanding of how difficult it is likely to be to improve teachers’ use of formative
assessment strategies. In particular, they suggest that while the provision of high-quality
tools may be a necessary condition, it is certainly not a sufficient condition for the
improvement of formative assessment practice. Tools for formative assessment will only
improve formative assessment practices if teachers can integrate them into their regular
classroom activities. In other words, the task of improving formative assessment is
substantially, if not mainly, about teacher professional development.
Fifteen years ago, this would have resulted in a gloomy prognosis. There was little if any
evidence that the quality of teachers could be improved through teacher professional
development, and certainly not at scale. Indeed, there was a widespread belief that
teacher professional development had simply failed to “deliver the goods”:
Nothing has promised so much and has been so frustratingly wasteful as the thousands of workshops
and conferences that led to no significant change in practice when teachers returned to their classrooms
(Fullan, 1991, p. 315).
In recent years, however, we have learned that, to be effective, professional development
needs to attend to both process and content elements (Reeves, McCall, and MacGilchrist,
2001; Wilson and Berne, 1999). On the process side, professional development is more
effective when it is related to the local circumstances in which the teachers operate
(Cobb, McClain, Lamberg, and Dean, 2003), takes place over a period of time rather than
being in the form of one-day workshops (Cohen and Hill, 1998), and involves the teacher
in active, collective participation (Garet, Porter, Desimone, Birman, and Yoon, 2001).
On the content side, professional development is more effective when it has a focus on
deepening teachers’ knowledge of the content they are to teach, the possible responses of
students, and strategies that can be utilized to build on these (Supovitz, 2001).
The creation of teacher learning communities (TLCs) focused on formative assessment
appears to show the greatest potential for improving teaching practice and student
achievement (Wiliam and Thompson, 2006), but a note of caution is in order here.
As noted above, most of the studies reported in this issue have focused on medium-cycle
formative assessment, but my own reading of the research (e.g. Black and Wiliam, 1998)
suggests that medium-cycle formative assessments have shown only modest impact on
student learning. Why this is so is not clear. In some cases, it may be because many of the
assessments being used were summative assessments pressed into service for formative
purposes, rather than being designed from the outset to be formative (the too-easy
equation of “formative assessment” with “classroom assessment” may be partly
responsible here). It may be that it is just too hard for teachers to use information at the
end of a sequence of learning to adapt instruction, due to the pressure from curriculum
pacing guides or sequencing charts. Certainly the studies that have shown impact on
student learning (e.g. Wiliam, Lee, Harrison & Black, 2004) have tended to be those
where the introduction impacted teachers’ day-to-day and minute-to-minute classroom
practices, either by an explicit focus on short-cycle assessment (Leahy, Lyon, Thompson,
and Wiliam, 2005) or where a focus on medium- or long-cycle formative assessment was
implemented in such a way as to require teachers to change their regular classroom
practice. Further work is needed to elaborate the “logic model” of formative assessment
to clarify exactly what we believe the interventions are changing, and how much impact
they have on student learning.
My final concern in all this is that many, if not most, research efforts on supporting
teachers in the use of formative assessment represent a “counsel of perfection.” There is a
focus on meeting the needs of all students that is laudable, but simply unlikely to be
possible in most American classrooms. American teachers are among the hardest-working
in the world, with around 1,130 contact hours per year, compared with the OECD
averages of 803 hours for primary and 674 hours for upper secondary (OECD, 2003). If
we are to effect substantial change at scale, we need to focus on the changes that we can
produce most easily. Of course, we must not tolerate interventions that exacerbate
existing inequality, but the pressing need now is to move teachers to action. As Robert
Slavin (1987) remarked in another context, “Do we really know nothing until we know
everything?”
References
Black, P. J., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in
Education: Principles, Policy and Practice, 5(1), 7-73.
Bloom, B. S. (1969). Some theoretical issues relating to educational evaluation. In R. W.
Tyler (Ed.), Educational evaluation: new roles, new means: the 68th yearbook of the
National Society for the Study of Education (part II) (Vol. 68(2), pp. 26-50). Chicago, IL:
University of Chicago Press.
Bruner, J. S. (1996). The culture of education. Cambridge, MA: Harvard University
Press.
Cobb, P., McClain, K., Lamberg, T. d. S., & Dean, C. (2003). Situating teachers'
instructional practices in the institutional setting of the school and district. Educational
Researcher, 32(6), 13-24.
Cohen, D. K., & Hill, H. C. (1998). State policy and classroom performance:
mathematics reform in California. Philadelphia, PA: University of Pennsylvania
Consortium for Policy Research in Education.
Desimone, L., Porter, A. C., Garet, M. S., Yoon, K. S., & Birman, B. F. (2002). Effects of
professional development on teachers’ instruction: results from a three-year longitudinal
study. Educational Evaluation and Policy Analysis, 24(2), 81-112.
Fullan, M. G. (1991). The new meaning of educational change. New York, NY: Teachers
College Press.
Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What
makes professional development effective? Results from a national sample of teachers.
American Educational Research Journal, 38(4), 914-945.
Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment:
minute-by-minute and day-by-day. Educational Leadership, 63(3), 18-24.
Organisation for Economic Cooperation and Development. (2003). Education at a
glance. Paris, France: Organisation for Economic Cooperation and Development.
Reeves, J., McCall, J., & MacGilchrist, B. (2001). Change leadership: planning,
conceptualization and perception. In J. MacBeath & P. Mortimore (Eds.), Improving
school effectiveness (pp. 122-137). Buckingham, UK: Open University Press.
Scarborough, H. (2001). Connecting early language and literacy to later reading
(dis)abilities: evidence, theory and practice. In S. B. Neuman & D. K. Dickinson (Eds.),
Handbook of early literacy research. New York, NY: Guilford Press.
Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagné & M.
Scriven (Eds.), Perspectives of curriculum evaluation (Vol. 1, pp. 39-83). Chicago, IL:
Rand McNally.
Slavin, R. E. (1987). Ability grouping in elementary schools: do we really know nothing
until we know everything? Review of Educational Research, 57(3), 347-350.
Supovitz, J. A. (2001). Translating teaching practice into improved student achievement.
In S. H. Fuhrman (Ed.), From the Capitol to the classroom: standards-based reform in
the states (part 2, pp. 81-98). Chicago, IL: University of Chicago Press.
Wiliam, D. (1992). Some technical issues in assessment: a user’s guide. British Journal
for Curriculum and Assessment, 2(3), 11-20.
Wiliam, D., Lee, C., Harrison, C., & Black, P. J. (2004). Teachers developing assessment
for learning: impact on student achievement. Assessment in Education: Principles, Policy
and Practice, 11(1), 49-65.
Wiliam, D., & Thompson, M. (2006). Integrating assessment with instruction: what will
it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: shaping
teaching and learning. Mahwah, NJ: Lawrence Erlbaum Associates.
Wilson, S. M., & Berne, J. (1999). Teacher learning and the acquisition of professional
knowledge: an examination of research on contemporary professional development. In A.
Iran-Nejad & P. D. Pearson (Eds.), Review of research in education (Vol. 24, pp. 173-
209). Washington, DC: American Educational Research Association.