Abstract
The higher education shift to remote learning due to mobility restrictions imposed during the COVID-19 pandemic highlighted the need to improve the student learning experience using more active learning models. One such model is peer assessment. Despite positively impacting student learning, peer assessment uptake remains low, partly because designing effective peer assessment processes is complex. Frameworks provide good coverage of the necessary design considerations; however, a systematic synthesis of the literature on how to design effective peer assessment processes is needed. We find strong evidence that peer assessment is most effective as formative peer feedback, whereby students can apply feedback to support their performance and learning. Assessor training, multiple peer review iterations, assessment flexibility, collaborative assessment and providing resources to engage students and educators in peer assessment processes can also improve student experience and learning outcomes. Conversely, we find mixed evidence for the effectiveness of anonymity, online versus offline settings and peer marking. Based on these findings, we provide guidance for educators in designing effective peer assessment processes, which, we hope, will drive greater uptake of peer assessment in higher education and support students to benefit from enhanced learning opportunities.
... Formative assessment is most effective when integrated into structured peer review, as research suggests scaffolded feedback improves students' evaluation skills and overall learning (Fleckney et al., 2024). In peer feedback, scaffolding involves giving specific prompts, models, or rubrics to help students assess each other's work effectively (Alemdag & Yildirim, 2022). ...
Foreign language teaching employs diverse methods and approaches that have evolved significantly over time. In response to critiques of earlier methodologies, the communicative approach emerged, emphasizing effective, sustainable, and practical language instruction. The communicative approach, along with its methodological and didactic principles, has reshaped teaching and learning practices, particularly in the assessment and evaluation of language skills. This study aims to explore the perspectives of German teacher candidates regarding the application of formative assessment techniques in communicative grammar instruction. The central research question guiding this inquiry is: "What are the perceptions of German teacher candidates on the integration of formative assessment techniques in communicative grammar lessons?" The study employs a qualitative research design and adopts a descriptive analysis framework. The purposive sample consists of 14 first-year German teacher candidates enrolled in the German Teaching Department at Trakya University. Over the course of four weeks, communicative grammar lessons were conducted, followed by data collection through semi-structured focus group interviews. The qualitative data were transcribed and systematically analyzed using the MAXQDA software, adhering to rigorous content analysis protocols. The findings offer valuable insights into German teacher candidates’ perceptions, emphasizing the role of assessment techniques in practice-oriented teacher training. The results underline the significance of these techniques in further developing communicative teaching methods, particularly in fostering practical and student-centered grammar instruction. Future research could build on these findings by involving larger and more diverse samples or by conducting interdisciplinary comparative studies.
This paper examines the critical challenge facing contemporary educational leaders: fostering individual autonomy while nurturing social solidarity in increasingly diverse and complex educational environments. Drawing from diverse philosophical traditions—including Kantian ethics, Ubuntu philosophy, Confucian thought, Cherokee wisdom, Durkheimian sociology, and Habermasian theory—a pluriversal framework is developed for educational leadership that transcends traditional dichotomies between individual agency and collective responsibility. Through careful analysis of recent empirical research and theoretical scholarship, the argument demonstrates how this tension manifests in pressing challenges such as student disengagement, cultural conflicts, and achievement disparities across both K-12 and post-secondary contexts. The paper advances a comprehensive strategic framework for implementing and evaluating leadership practices that balance individual empowerment with community cohesion. Our analysis reveals that successful educational transformation requires sophisticated approaches to leadership that honor both philosophical complexity and practical efficacy. This framework provides educational leaders with theoretical grounding and practical strategies for creating more inclusive, equitable, and transformative learning environments while maintaining commitment to both individual flourishing and collective well-being in an increasingly interconnected world.
The growing number of peer assessment studies in the last decades has created diverse design options for researchers and teachers implementing peer assessment. However, it is still unknown whether there are more commonly used peer assessment formats and design elements that could be considered when designing peer assessment activities in educational contexts. This systematic review aims to determine the diversity of peer assessment designs and practices in research studies. A literature search was performed in the electronic databases PsycINFO, PsycARTICLES, Web of Science Core Collection, Medline, ERIC, Academic Search Premier, and EconLit. Using data from 449 research studies (derived from 424 peer-reviewed articles), design differences were investigated across subject domains, assessment purposes, objects, outcomes, and moderators/mediators. Arts and humanities was the most frequent subject domain in the reviewed studies, and two-thirds of the studies had a formative assessment purpose. The most common object of assessment was written work, and beliefs and perceptions were the most investigated outcomes. Gender topped the list of investigated moderators/mediators of peer assessment. Latent class analysis of 27 peer assessment design elements revealed a five-class solution reflecting latent patterns that best describe the variability in peer assessment designs (i.e. prototypical peer assessment designs). Only ten design elements significantly contributed to these patterns, with associated effect sizes R2 ranging from .204 to .880, indicating that peer assessment designs in research studies are not as diverse as they theoretically could be.
With the advancement of information and communication technologies, technology-supported peer assessment has been increasingly adopted in education. This study systematically reviewed 134 technology-supported peer assessment studies published between 2006 and 2017 using an analysis framework developed from activity theory. The results showed that most peer assessment activities were implemented in social science and higher education in the past 12 years. Acting assignments, such as performance, oral presentations, or speaking, were the least common type of assignment assessed across the studies reviewed. In addition, most studies conducted peer assessment anonymously and assigned assessors and assessees randomly. However, most studies implemented only one round of peer assessment and did not provide rewards for assessors. Across studies, students more often received unstructured than structured feedback from their peers. Noticeably, collaborative peer assessment did not receive enough attention in the past 12 years. Regarding peer assessment tools, more studies adopted general learning management systems for peer assessment than used dedicated peer assessment tools. However, most tools used within these studies provided only basic functionalities without scaffolding. Furthermore, the results of cross analysis reveal significant relationships between learning domains and anonymity, and between learning domains and assessment durations. Significant relationships also exist between assignment types and learning domains, and between assignment types and assessment durations.
Self-assessment is believed to complement peer assessment in the classroom. However, whether, how and why this happens remains unclear. This study, by investigating the combined use of self and peer assessment for an academic writing task among a group of undergraduate students in Hong Kong, aims to shed light on how self-assessment complements peer assessment. Self-assessment is found to complement peer assessment in five ways: 1) it guides students to revise when peer assessment is lacking; 2) self-assessment effectively supplements peer assessment when students have access to peer assessment; 3) even if a student has access to quality peer assessment, self-assessment complements it because the two processes involve different kinds of reflection; 4) self-assessment can supplement peer assessment with respect to the social-affective burdens associated with the latter; 5) self-assessment also complements peer assessment in that it benefits both high- and low-achieving students. Two problems surfaced. One is the students’ antipathy toward self-assessment despite their overall positive perception of peer assessment; the other is the inadequacy of combining self with peer assessment in fostering learning outcomes, even though the combined use of self and peer assessment helped the students considerably with revision.
The world‐wide pivot to remote learning due to the exogenous shocks of COVID‐19 across educational institutions has presented unique challenges and opportunities. This study documents the lived experiences of instructors and students and recommends emerging pathways for teaching and learning strategies post‐pandemic. Seventy‐one instructors and 122 students completed online surveys containing closed and open‐ended questions. Quantitative and qualitative analyses were conducted, including frequencies, chi‐square tests, Welch Two‐Samples t ‐tests, and thematic analyses. The results demonstrated that with effective online tools, remote learning could replicate key components of content delivery, activities, assessments, and virtual proctored exams. However, instructors and students did not want in‐person learning to disappear and recommended flexibility by combining learning opportunities in in‐person, online, and asynchronous course deliveries according to personal preferences. The paper concludes with future directions and how the findings influenced our planning for Fall 2021 delivery. The video abstract for this article is available at https://www.youtube.com/watch?v=F48KBg_d8AE .
Practitioner notes
What is already known about this topic
Emergency Remote Teaching (ERT) allowed institutions across the world to continue teaching and learning at all levels of education during the COVID‐19 pandemic. However, this form of delivery, created under conditions of uncertainty, was developed out of an urgency to keep education going rather than maintaining it at the same level.
What this paper adds
This study comes after ERT, and is situated between ERT and the return to campus, with some social distancing restrictions still active, in a delivery method widely viewed as “remote delivery”.
This is a case study of an entire Canadian higher education institution that implemented remote learning for over one full academic year, documenting and examining instructors' and students' experiences and challenges of the remote learning course delivery format.
Quantitative and qualitative data were collected to provide a holistic overview of instructors' and students' experiences of delivery method and assessments including the use of face‐tracking proctoring software.
Implications for practice and/or policy
Compared to ERT, remote delivery was a thoughtful and deliberate way to transform in‐person courses into virtual learning experiences.
Instructors and students were able to successfully replicate many features of in‐person learning and assessment experiences in remote delivery of courses by using effective online tools to teach and learn.
As a result, instructors and students called for the use of elements of remote delivery to create more flexible learning opportunities by combining in‐person, live streaming, and asynchronous learning options.
One of the most important aspects of teaching is to evaluate what students should do and learn before or during the lesson, what they have done or learned after the lesson, and to give them feedback about it. Peer assessment helps students develop learner autonomy and increases their success. Feedback on what a student has learned can be provided both by the teacher and by peers. The literature also recognises peer assessment as a learning tool. In this context, peer assessment, understood as a student-centred and collaborative learning approach, is based on students evaluating the work of peers in the same or a similar situation in terms of value, level, quality, or success. This study investigated the effect of one-cycle and two-cycle peer assessment on university students' writing skills. The study was conducted with 160 first-year student teachers (n = 160) using an experimental design without a control group. In the data analysis, the arithmetic means of the pre-tests were first compared to determine the equivalence of the groups, and a t-test was used to determine whether there was a significant difference. Demographic information was presented in a way that does not violate personal privacy; in the analysis of the participants' opinions, utmost attention was paid to scientific and research ethics rules, and participants were coded so that their identities could not be inferred. According to the results, the post-test mean score of the group that received one-cycle traditional peer assessment was 66.3, while that of the group that received two-cycle peer assessment was 72.7. The difference in post-test arithmetic means showed that two-cycle peer assessment contributed more to students' written expression skills.
It was also determined that the two-cycle peer assessment method has a more positive effect on writing achievement than one-cycle traditional peer assessment. Another important result of this study is therefore that two-cycle peer assessment, in which the peer assessment itself is also evaluated by a peer, is more effective in improving writing skills.
Teachers’ feedback literacy is a focus of increasing attention in higher education. It may be framed through intentional design decisions, inter-relational aspects of engagement and pragmatic considerations of enacted curricula. Thus, teachers’ feedback literacy is connected to both the enacted curriculum and students’ relationship to feedback. How these connections take shape in particular approaches to the curriculum, affecting students’ roles in feedback and evaluation, is not as well understood. This paper presents findings from an intervention aimed at developing students’ peer feedback and self-evaluation skills in an undergraduate business course. Peer feedback and self-evaluation are increasingly common modes of engaging students as active participants in feedback and evaluation processes. It is therefore worthwhile to understand the ways in which these processes affect and link teacher and student feedback literacy. Data were analysed from a 14-week course aimed at developing students’ competencies in self-evaluation, peer feedback and teamwork. Results are presented and discussed according to three major areas: how the teacher’s feedback on student engagement with feedback served as affective ‘meta’ scaffolding, the trajectory of students’ growth in feedback-literate self-evaluation, and the relationship of feedback literacy to the trajectory of growth in teamwork competencies. The paper concludes with suggestions for further, cross-disciplinary research.
This study investigated the impact of an online peer-review script on students’ argumentative peer-review quality and argumentative essay writing. A pre- and post-test experimental design was used with 42 undergraduate students in the field of educational science. Students were randomly divided over 21 dyads and assigned to two conditions (unscripted and scripted peer-review). Students were first asked to write an original argumentative essay about the topic at hand. Then, students in the scripted condition had to review their peer’s argumentative essay based on a peer-review script while students in the unscripted condition reviewed their peer’s essay without the script. Finally, all students had to revise their original essay based on the comments of their peers. Students in the scripted peer-review condition outperformed students in the unscripted condition in terms of quality of their argumentative peer-review and argumentative essay writing. These results are discussed and implications are provided.
Purpose
In spite of the potential of peer feedback, research related to the international classroom and the development of intercultural competences remains limited. This paper aims to further explore this combination and associated gaps by presenting students’ perceptions of peer feedback on individual behaviour in group work.
Design/methodology/approach
Several studies have shown that peer feedback can be a powerful instrument in higher education. For this reason, this instrument is increasingly being deployed in the international classroom of a Dutch Business School (DBS), which has a student population of about 60 different nationalities. The present paper adopts an embedded case-study design in studying peer feedback within the international classroom.
Findings
The primary results of this study are twofold. First, they show that before joining DBS, the vast majority of international students have never been exposed to group work peer feedback. And second, they reveal that cultural background (bias) is a critical factor in how students provide and perceive peer feedback. Students from high-context cultures struggle with direct feedback provided by students from low-context cultures. Furthermore, the results show that domestic cultural values “lack consideration” when dealing with the contrasts in cultural values of non-domestic (international) students.
Originality/value
This study indicates that several aspects of the students’ cultural background have a direct impact on how they provide and perceive individual peer feedback on their behaviour in group work. Furthermore, it argues that peer feedback, when used as an instrument, requires specific training and guidance of students with regard to cultural differences, values and perceptions.
Recent growth in research on feedback has focussed on the importance of developing student feedback literacy. That is, the capabilities students need to make good use of feedback processes. To date there have been few investigations of how ideas about student feedback literacy can be translated into course design. This paper therefore examines student feedback capabilities in the context of an undergraduate course intervention based on an empirically based feedback literacy framework. 237 student journals written in response to self and peer feedback information were coded for student feedback literacy features and the effectiveness of pedagogical approaches for building the needed capabilities. Findings highlight the presence, extent and trajectories of feedback capabilities over time within the course. Based on these, pedagogical approaches which incorporate feedback affordances are identified.
With the current advancement of technology and its potential for better teaching and learning outcomes, this paper compares the use of peer review in face-to-face settings and on online platforms. The study recruited 142 students and 20 instructors from an American public mid-southern university. Data were collected over two academic semesters using three instruments: questionnaires, observations, and interviews. Findings indicated that the participants generally hold a positive stance towards peer evaluation. They found face-to-face peer assessment during writing class time to be the most common and effective mode, because they preferred immediate feedback in person. Contrary to laudable prior research findings, the majority of participants considered online review ineffective, finding various forms of technology quite distracting. By analysing the extent to which native English speakers, non-native speakers, and instructors find virtual and face-to-face review worthwhile, the study is valuable for instructors who wish to incorporate peer editing into their teaching.
This study compared the effects of support for peer feedback, peer feedforward and their combination on students’ peer learning processes, argumentative essay quality and domain‐specific learning. Participants were 86 BSc students who were randomly divided over 43 dyads. These dyads, in a two‐factorial experimental design, were assigned to four conditions: peer feedback (n = 22), peer feedforward (n = 22), mixed (n = 20) and control group (n = 22). An online peer feedback environment named EduTech was designed, which allowed us to implement various types of support in the form of question prompts. In this online environment, students were asked to write an argumentative essay on a controversial topic, to engage in peer learning processes and to revise their essay. Overall, the results showed that students in the three experimental conditions (peer feedback, peer feedforward and their combination) benefited more than students in the control group condition (without any support) in terms of peer learning processes, argumentative essay quality and domain‐specific learning. However, there was no significant difference among the three experimental conditions. This implies that peer feedforward can be as important as peer feedback in collaborative learning environments, which is often neglected both in theory and practice.
Practitioner Notes
What is already known about this topic
Writing argumentative essays is a common practice for higher education students in various disciplines which deal with controversial issues.
Writing argumentative essay requires solid argumentation strategies which makes it a challenging task for higher education students.
Additional instructional support is needed to help students write high‐quality argumentative essays.
What this paper adds
Peer learning is a promising instructional strategy for improving students’ argumentative essay writing and learning.
Online support in the form of question prompts to guide students during peer learning can improve their argumentative essay writing and learning.
Alongside peer feedback, peer feedforward is a promising instructional approach for supporting students’ argumentative essay writing and learning.
Implications for practice and/or policy
Given the positive effects of peer learning processes, teachers should give more attention to peer feedback and peer feedforward to support students in writing high‐quality argumentative essays on controversial issues.
Teachers and educational designers should provide opportunities for students to engage not only in peer feedback processes (how am I doing?) but also in peer feedforward processes (where to next?).
Objectives
Formative peer assessment focuses on supporting the student learning process: students take responsibility for assessing the work of their peers by giving and receiving feedback. The aim was to compile research about formative peer assessment in higher healthcare education, focusing on the rationale, the interventions, the experiences of students and teachers, and the outcomes of formative assessment interventions.
Design
A scoping review.
Data sources
Searches were conducted until May 2019 in PubMed, Cumulative Index to Nursing and Allied Health Literature, Education Research Complete and Education Research Centre. Grey literature was searched in Library Search, Google Scholar and Science Direct.
Eligibility criteria
Studies addressing formative peer assessment in higher education, focusing on medicine, nursing, midwifery, dentistry, physical or occupational therapy and radiology published in peer-reviewed articles or in grey literature.
Data extractions and synthesis
Out of 1452 studies, 37 met the inclusion criteria and were critically appraised using the relevant Critical Appraisal Skills Programme, Joanna Briggs Institute and Mixed Methods Appraisal Tool instruments. The pertinent data were analysed using thematic analysis.
Results
The critical appraisal resulted in 18 included studies with high and moderate quality. The rationale for using formative peer assessment relates to giving and receiving constructive feedback as a means to promote learning. The experience and outcome of formative peer assessment interventions from the perspective of students and teachers are presented within three themes: (1) organisation and structure of the formative peer assessment activities, (2) personal attributes and consequences for oneself and relationships and (3) experience and outcome of feedback and learning.
Conclusion
Healthcare education must consider preparing and introducing students to collaborative learning, and thus develop well-designed learning activities aligned with the learning outcomes. Since peer collaboration seems to affect students’ and teachers’ experiences of formative peer assessment, empirical investigations exploring collaboration between students are of utmost importance.
Peer feedback benefits students’ learning at the university level. However, how it affects students’ learning, and which design factors wield the greatest influence, are aspects that continue to require further analysis. This study analyses precisely how different feedback conditions affect students’ perception of their learning. Through a questionnaire administered to a sample of 410 university students, we inquire how the different conditions under which feedback is designed – such as privacy (anonymous or not), contact (in person or virtual), delivery channel (oral, written or mixed) and consensus (individual or in a group) – affect the improvement of learning tasks and the development of the students’ inter- and intrapersonal skills. The results reveal that students perceive that they learn more when they give feedback than when they receive it, and that certain conditions are better suited than others for consolidating what has been learnt. The study reveals that, in order to maximise its effects, the instructional design of peer feedback must offer spaces to carry it out face-to-face – anonymously – with a mixed delivery channel (complementing written comments with oral feedback), and that the feedback be agreed upon in a group, both when it is given and when it is received.
This paper presents the findings of a four-year mainly qualitative study of peer and self-assessment in university teaching. Peer and self-assessment activities were introduced with the intention of supporting students’ learning, but they also formed part of the formal grading of the course assignment. Thus, the research aimed to explore the students’ experiences with these activities in this specific learning situation – which benefits they perceived from these activities, what challenges they faced, and what supported their learning. The students completed the survey after participating in the activities, but before receiving their grades and peer feedback so as to capture their authentic experiences with the activities. A total of 103 students completed the survey. The data were analysed using descriptive statistics and thematic analysis. The results indicate that despite being stressful and uncomfortable for many, peer assessment was more beneficial for the students’ learning than self-assessment. The students expressed concerns related to their competence to grade and responsibility for their peers’ grades. However, by addressing students’ needs for autonomy, competence and relatedness, these initial worries can be transformed into drivers for learning. It may be concluded that peer assessment can play a role in supporting students to self-assess.
Feedback has a powerful influence on learning. However, feedback practices in higher education often fail to produce the expected impact on learning. This is mainly because of its implementation as a one-way transmission of diagnostic information where students play a passive role as the information receivers. Dialogue around feedback can enhance students’ sense making from feedback and capacities to act on it. Yet, dialogic feedback has been mostly implemented as an instructor-led activity, which is hardly affordable in large classrooms. Dialogic peer feedback can offer a scalable solution; however, current practices lack a systematic design, resulting in low learning gains. Attending to this gap, this paper presents a theoretical framework that structures dialogic feedback as a three-phase collaborative activity, involving different levels of regulation: first, planning and coordination of feedback activities (involving socially shared regulation), second, feedback discussion to support its uptake (involving co-regulation), and last, translation of feedback into task engagement and progress (involving self-regulation). Based on the framework, design guidelines are provided to help practitioners shape their feedback practices. The application of the principles is illustrated through an example scenario. The framework holds great potential to promote student-centred approaches to feedback practices in higher education.
Feedback processes are difficult to manage, and the accumulated frustrations of teachers and students inhibit the learning potential of feedback. In this conceptual paper, challenges to the development of effective feedback processes are reviewed and a new framework for teacher feedback literacy is proposed. The framework comprises three dimensions: a design dimension focuses on designing feedback processes for student uptake and enabling student evaluative judgment; a relational dimension represents the interpersonal side of feedback exchanges; and a pragmatic dimension addresses how teachers manage the compromises inherent in disciplinary and institutional feedback practices. Implications discuss the need for partnership approaches to feedback predicated on shared responsibilities between teachers and students, and the interplay between teacher and student feedback literacy. Key recommendations for practice are suggested within the design, relational and pragmatic dimensions. Avenues for further research are proposed, including how teacher and student feedback literacy might be developed in tandem.
How can students' competence be developed through peer assessment? This paper focuses on how relevant variables such as participation, evaluative judgement and the quality of the assessment interact and influence peer assessment. From an analysis of 4 years of data from undergraduate classes in project management, it develops a model of causal relationships validated using the PLS-SEM method. It demonstrates relationships between these variables and considers the influence of students' competence and the mediating nature of feedback and self-regulation on the process. It points to how peer assessment practices can be improved, whilst highlighting how evaluative judgement and feedback are two key elements that can be addressed to deliver the effective development of students' competence.
Peer assessment has been the subject of considerable research interest over the last three decades, with numerous educational researchers advocating for the integration of peer assessment into schools and instructional practice. Research synthesis in this area has, however, largely relied on narrative reviews to evaluate the efficacy of peer assessment. Here, we present a meta-analysis (54 studies, k = 141) of experimental and quasi-experimental studies that evaluated the effect of peer assessment on academic performance in primary, secondary, or tertiary students across subjects and domains. An overall small to medium effect of peer assessment on academic performance was found (g = 0.31, p < .001). The results suggest that peer assessment improves academic performance compared with no assessment (g = 0.31, p = .004) and teacher assessment (g = 0.28, p = .007), but was not significantly different in its effect from self-assessment (g = 0.23, p = .209). Additionally, meta-regressions examined the moderating effects of several feedback and educational characteristics (e.g., online vs offline, frequency, education level). Results suggested that the effectiveness of peer assessment was remarkably robust across a wide range of contexts. These findings provide support for peer assessment as a formative practice and suggest several implications for the implementation of peer assessment into the classroom.
Background:
Peer evaluation can provide valuable feedback to medical students, and increase student confidence and quality of work. The objective of this systematic review was to examine the utilization, effectiveness, and quality of peer feedback during collaborative learning in medical education.
Methods:
The PRISMA statement for reporting in systematic reviews and meta-analysis was used to guide the process of conducting the systematic review. Evaluation of level of evidence (Colthart) and types of outcomes (Kirkpatrick) were used. Two main authors reviewed articles with a third deciding on conflicting results.
Results:
The final review included 31 studies. Problem-based learning and team-based learning were the most common collaborative learning settings. Eleven studies reported that students received instruction on how to provide appropriate peer feedback. No studies described whether the quality of feedback was evaluated by faculty. Seventeen studies evaluated the effect of peer feedback on professionalism; 12 of those studies evaluated its effectiveness for assessing professionalism and eight evaluated the use of peer feedback for professional behavior development. Ten studies examined the effect of peer feedback on student learning. Six studies examined the role of peer feedback on team dynamics.
Conclusions:
This systematic review indicates that peer feedback in a collaborative learning environment may be a reliable assessment for professionalism and may aid in the development of professional behavior. The review suggests implications for further research on the impact of peer feedback, including the effectiveness of providing instruction on how to provide appropriate peer feedback.
Students' dissatisfaction with peer assessment has been widely documented. While most relevant literature places focus on the cognitive (content and uptake of feedback) or structural (feedback design) dimensions, students' emotions in peer assessment have received scant attention. This study investigates the social-affective impacts of peer assessment by analysing students' appeal letters addressed to their tutors, reflective posts in the online discussion forum and responses to a survey. A thematic analysis of data indicated three main aspects of students' (dis)satisfaction: content, scores and process of peer assessment. The most negative emotion that students expressed was related to 'disrespectful' behaviour and attitudes of peer reviewers, whereas the feeling of appreciation was triggered by the helpful feedback attributes which were perceived as reflecting reviewers' respect to others' works. Students generally held mixed feelings toward peer assessment, valuing learning in the process of providing and receiving feedback but showing resistance to using peer assessment for summative purposes. The findings highlight the significance of respect in peer assessment and argue that the perceived lack of mutual respect seems to underpin the nature of students' dissatisfaction. This study carries implications for nurturing students' respectful attitudes and behaviour in and through peer assessment.
In recent years, there has been an increasing use of peer assessment in classrooms and other learning settings. Despite the prevailing view that peer assessment has a positive effect on learning across empirical studies, the results reported are mixed. In this meta-analysis, we synthesised findings based on 134 effect sizes from 58 studies. Compared to students who do not participate in peer assessment, those who participate in peer assessment show a .291 standard deviation unit increase in their performance. Further, we performed a meta-regression analysis to examine the factors that are likely to influence the peer assessment effect. The most critical factor is rater training. When students receive rater training, the effect size of peer assessment is substantially larger than when students do not receive such training. Computer-mediated peer assessment is also associated with greater learning gains than paper-based peer assessment. A few other variables (such as rating format, rating criteria and frequency of peer assessment) also show noticeable, although not statistically significant, effects. The results of the meta-analysis can be considered by researchers and teachers as a basis for determining how to make effective use of peer assessment as a learning tool.
Peer assessment has proven to have positive learning outcomes. Importantly, peer assessment is a social process and some claim that the use of anonymity might have advantages. However, the findings have not always been in the same direction. Our aims were: (a) to review the effects of using anonymity in peer assessment on performance, peer feedback content, peer grading accuracy, social effects and students’ perspective on peer assessment; and (b) to investigate the effects of four moderating variables (educational level, peer grading, assessment aids, direction of anonymity) in relation to anonymity. A literature search was conducted including five different terms related to peer assessment (e.g., peer feedback) and anonymity. Fourteen studies that used a control group or a within group design were found. The narrative review revealed that anonymous peer assessment seems to provide advantages for students’ perceptions about the learning value of peer assessment, delivering more critical peer feedback, increased self-perceived social effects, and a slight tendency toward improved performance, especially in higher education and with fewer peer assessment aids. Some conclusions are that: (a) when implementing anonymity in peer assessment the instructional context and goals need to be considered, (b) existent empirical research is still limited, and (c) future research should employ stronger and more complex research designs.
Peer feedback is frequently implemented with academic writing tasks in higher education. However, a quantitative synthesis is still lacking for the impact that peer feedback has on students’ writing performance. The current study yielded two types of observations. First, regarding the impact of peer feedback on writing performance, this study synthesized the results of 24 quantitative studies reporting on higher education students’ academic writing performance after peer feedback. Engagement in peer feedback resulted in larger writing improvements compared to (no-feedback) controls (g = 0.91 [0.41, 1.42]) and compared to self-assessment (g = 0.33 [0.01, 0.64]). Peer feedback and teacher feedback resulted in similar writing improvements (g = 0.46 [-0.44, 1.36]). The nature of the peer feedback significantly moderated the impact that peer feedback had on students’ writing improvement, whereas only a theoretically plausible, though non-significant moderating pattern was found for the number of peers that students engaged with. Second, this study shows that the number of well-controlled studies into the effects of peer feedback on writing is still low, indicating the need for more quantitative, methodologically sound research in this field. Findings and implications are discussed both for higher education teaching practice and future research approaches and directions.
Whilst the importance of online peer feedback and writing argumentative essays for students in higher education is unquestionable, there is a need for further research into whether and the extent to which female and male students differ with regard to their argumentative feedback, essay writing, and content learning in online settings. The current study used a pre-test, post-test design to explore the extent to which female and male students differ regarding their argumentative feedback quality, essay writing and content learning in an online environment. Participants were 201 BSc biotechnology students who wrote an argumentative essay, engaged in argumentative peer feedback with learning partners in the form of triads and finally revised their original argumentative essay. The findings revealed differences between females and males in terms of the quality of their argumentative feedback. Female students provided higher-quality argumentative feedback than male students. Although all students improved their argumentative essay quality and also knowledge content from pre-test to post-test, these improvements were not significantly different between females and males. Explanations for these findings and recommendations are provided.
Student feedback literacy denotes the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies. In this conceptual paper, student responses to feedback are reviewed and a number of barriers to student uptake of feedback are discussed. Four inter-related features are proposed as a framework underpinning students’ feedback literacy: appreciating feedback; making judgments; managing affect; and taking action. Two well-established learning activities, peer feedback and analysing exemplars, are discussed to illustrate how this framework can be operationalized. Some ways in which these two enabling activities can be re-focused more explicitly towards developing students’ feedback literacy are elaborated. Teachers are identified as playing important facilitating roles in promoting student feedback literacy through curriculum design, guidance and coaching. The implications and conclusion summarise recommendations for teaching and set out an agenda for further research.
Within the higher education context, peer feedback is frequently applied as an instructional method. Research on the learning mechanisms involved in the peer feedback process has covered aspects of both providing and receiving feedback. However, a direct comparison of the impact that providing and receiving peer feedback has on students’ writing performance is still lacking. The current study compared the writing performance of undergraduate students (N = 83) who either provided or received anonymous written peer feedback in the context of an authentic academic writing task. In addition, we investigated whether students’ peer feedback perceptions were related to the nature of the peer feedback they received and to writing performance. Results showed that both providing and receiving feedback led to similar improvements of writing performance. The presence of explanatory comments positively related both to how adequate students perceived the peer feedback to be, as well as to students’ willingness to improve based upon it. However, no direct relation was found between these peer feedback perceptions and students’ writing performance increase.
The anonymized data and analyses (syntaxes) are accessible via the following link: https://osf.io/awkd9
Although previous research has indicated that providing anonymity is an effective way to create a safe peer assessment setting, continuously ensuring anonymity prevents students from experiencing genuine two-way interactive feedback dialogues. The present study investigated how installing a transitional approach from an anonymous to a non-anonymous peer assessment setting can overcome this problem. A total of 46 bachelor’s degree students in Educational Studies participated in multiple peer assessment cycles in which groups of students assessed each other’s work. Both students’ evolution in peer feedback quality as well as their perceptions were measured. The content analysis of the peer feedback messages revealed that the quality of peer feedback increased in the anonymous phase, and that over time, the feedback in the consecutive non-anonymous sessions was of similar quality. The results also indicate that the transitional approach does not hinder the perceived growth in peer feedback skills, nor does it have a negative impact on their general conceptions towards peer assessment. Furthermore, students clearly differentiated between their attributed importance of anonymity and their view on the usefulness of a transitional approach. The findings suggest that anonymity can be a valuable scaffold that, over time, reduces the importance students attach to anonymity and their associated need for practice.
There does not appear to be consensus on how to optimally match students during the peer feedback process: with same-ability peers (homogeneously) or different-ability peers (heterogeneously). In fact, there appears to be no empirical evidence that either homogeneous or heterogeneous student matching has any direct effect on writing performance. The current study addressed this issue in the context of an academic writing task. Adopting a quasi-experimental design, 94 undergraduate students were matched in 47 homogeneous or heterogeneous reciprocal dyads, and provided anonymous, formative peer feedback on each other’s draft essays. The relations between students’ individual ability or dyad composition, feedback quality and writing performance were investigated. Neither individual ability nor dyad composition directly related to writing performance. Also, feedback quality did not depend on students’ individual ability or dyad composition, although trends in the data suggest that high-ability reviewers provided more content-related feedback. Finally, peer feedback quality was not related to writing performance, and authors of varying ability levels benefited to a similar extent from peer feedback on different aspects of the text. The results are discussed in relation to their implications for the instructional design of academic writing assignments that incorporate peer feedback.
See OSF link for supplementary online materials: osf.io/3b48u
In recent years, the technical possibilities of educational technologies regarding online peer feedback have developed rapidly. However, the impact of online peer feedback activities compared to traditional offline variants has not specifically been meta-analyzed. Therefore, the aim of the current meta-analysis is to do an in-depth comparison between online versus offline peer feedback approaches. An earlier and broader meta-analysis focusing on technology-facilitated peer feedback in general, was used as a starting point. We synthesized 12 comparisons between online and offline peer feedback in higher education, from 10 different studies. Moreover, we reviewed student perceptions of online peer feedback when these were included in the studies. The results show that online peer feedback is more effective than offline peer feedback, with an effect size of 0.33. Moreover, online peer feedback is more effective when the outcome measure is competence rather than self-efficacy for skills. In addition, students are mostly positive towards online peer feedback but also list several downsides. Finally, implications for online peer feedback in teaching practice are discussed and leads are identified for further research on this topic.
While the peer feedback process has an important role to play in student learning and has many benefits, it is not without its challenges. One of these is the effect that emotions may have on the way that students engage with the feedback. Yet, the specific emotions experienced during peer feedback are relatively under-explored. Therefore, this exploratory qualitative study unpacks the range of emotions experienced by students during peer feedback. Using Plutchik’s Wheel of Emotions to analyse students’ questionnaire responses, the study found that students largely exhibited positive emotions, which may be due to their perceptions of themselves in relation to the process, as well as the various scaffolds put in place. Knowing which emotions students experienced during peer feedback may enable a greater understanding of the role of emotions in peer feedback, as well as enabling student feedback literacy development.
Few studies have explored the difficulties that undergraduates encounter in peer assessment from the dual perspectives of feedback givers and receivers and framed them in terms of specific needs for student feedback literacy development. To address this research gap, this study explored the obstacles that Hong Kong university students experienced during peer assessment in a General Education course based on 51 retrospective journal entries and 21 individual post-journal interviews. The findings reveal that the participants appeared to face more cognitive difficulties when giving feedback to peers and more socio-affective difficulties when receiving peer feedback, which indicates their specific needs for feedback capacities and dispositions. To prepare students cognitively for peer assessment, teachers must build up students’ evaluative judgment and self-regulation capacities. To prepare them socio-affectively for peer assessment, teachers should help students to realise the benefits of giving feedback and their active roles and enhance their resilience and volition.
Maximising the accuracy and learning of self and peer assessment activities in higher education requires instructors to make several design decisions, including whether the assessment process should be individual or collaborative, and, if collaborative, determining the number of members of each peer assessment team. In order to support this decision, a quasi-experiment was carried out in which 82 first-year students used three peer assessment modalities. A total of 1574 assessments were obtained. The accuracy of both the students’ self-assessment and their peer assessment was measured. Results show that students’ self-assessment significantly improved when groups of three were used, provided that those with the 20% poorest performances were excluded from the analysis. This suggests that collaborative peer assessment improves learning. Peer assessment scores were more accurate than self-assessment, regardless of the modality, and the accuracy improved with the number of assessments received. Instructors need to consider the trade-off between students’ improved understanding, which favours peer assessment using groups of three, and a higher number of assessments, which, under time constraints, favours individual peer assessment.
We contribute to the growing evidence of the positive effect of use of online peer feedback tools on students’ teamwork skills development. We do so by exploring individual and contextual factors underlying satisfaction with using a peer feedback system alongside team projects. Employing a path-analytic framework and bootstrap methods, we analysed data from an international sample of 100 project teams in management studies. Drawing on procedural justice theory, we theorised and found support for the prediction that students’ uncertainty avoidance orientation and virtuality in collaboration were positively related to their satisfaction with use of a peer feedback system. Such satisfaction in turn allowed them to be more effective team members. Our findings provide evidence for higher education institutions and instructors considering the adoption of online peer feedback systems alongside teamwork in their curricula. Specifically, peer feedback appears to be effective in the development of teamwork skills and students appreciate the opportunity to provide feedback to their peers in a structured and dedicated environment. Our findings are timely and of important practical significance as educational institutions increasingly rely on the use of computer-mediated technology during the COVID-19 pandemic.
This study contributes to a better understanding of the potential of student peer review in higher education by examining how repeated practice influences student learning. The study reports on the experiences of undergraduate science students who were systematically trained in peer review over three years. Twelve were interviewed in both their second and third year. It was found that multiple experiences had a positive influence in shaping and embedding a culture of peer review in the programme. The reviews used both formal and informal dialogic processes, and through these, students developed an advanced skill set that enabled them to provide and utilise quality feedback. Students saw peer review as a type of research inquiry that led to a deeper understanding of (a) disciplinary knowledge, (b) being a peer reviewer, (c) knowledge about self and (d) knowledge of others. These results demonstrate the impact of long-term training in peer review on students’ learning experiences in higher education.
In contemporary higher education systems, the processes of assessment and feedback are often seen as coexisting activities. As a result, they have become entangled in both policy and practice, resulting in a conceptual and practical blurring of their unique purposes. In this paper, we present a critical examination of the issues created by the entanglement of assessment and feedback, arguing that it is important to ensure that the legitimate purposes of both feedback and assessment are not compromised by inappropriate conflation of the two. We situate our argument in the shifting conceptual landscape of feedback, where there is increasing emphasis on students being active players in feedback processes working with and applying information from others to future learning tasks, rather than regarding feedback as a mechanism of transmission of information by teachers. We surface and critically discuss six problems created by the entanglement of assessment and feedback: students’ focus on grades; comments justifying grades rather than supporting learning; feedback too late to be useful; feedback subordinated to all other processes in course design; overemphasis on documentation of feedback; and the downgrading of feedback created by requirements for anonymous marking. We then propose a series of strategies for preserving the learning function of feedback, through models that give primacy to feedback within learning cycles. We conclude by offering suggestions for research and practice that seek to engage with the challenges created by the entanglement of assessment and feedback, and that maintain the unique purposes of assessment and feedback.
Peer feedback is a strategy that allows students to be involved in the assessment process, making them more conscious about the teaching and learning activities. However, different instructional designs can influence learning in different ways. Our paper aims to identify whether peer feedback instructional designs influence students’ learning perceptions. We performed a comparative study at a Faculty of Education, tracking students during their first two years of a teacher education program. Students participated in two consecutive peer feedback experiences using different instructional designs. Results show that students perceive that long-term interventions with prior training and double-loop feedback processes are more useful for their performance than a short-term experience without face-to-face training and single-loop feedback processes. They perceive more benefits when they provide feedback than when they receive it. Lecturers should take these variables into account when designing peer feedback activities in order to maximise the impact on students’ learning.
Purpose
The purpose of this paper is to examine the influence of peer-assessment training as a catalyst to enhance student assessment knowledge and the ability to effectively evaluate reflective journal writing assignments when using the online peer assessment (PA) tool Expertiza.
Design/methodology/approach
Over a two-year period, end-of-unit assessment test scores and reflective writing samples from a peer-assessment participation group were compared to a no peer-assessment control group. Analysis of covariance was used to control for existing writing skill and ongoing feedback on writing samples.
Findings
No significant increases were observed in student assessment knowledge when participating in peer-assessment training. Comparison of matched participant samples revealed that after controlling for existing writing skill, students participating in PA graded reflective writing assignments significantly lower than instructor-graded assessments from students not afforded peer-assessment participation.
Research limitations/implications
No distinction was made regarding the relative influence of giving versus receiving PA on the outcome measures. Second, students making multiple revisions based on feedback were not analyzed. Third, the Expertiza system does not control for the number of reviews performed, thus differential weighting of assessment outcomes may be realized unless all students submit and perform the same number of assessments. Finally, in absence of any qualitative analysis as to what factors students consider when grading writing samples, it is unknown as to how individual difference factors or adherence to scoring rubrics may have influenced the obtained results.
Practical implications
Students may be reticent to evaluate peers or may apply grading criteria beyond the mandatory evaluation rubrics. Clear distinctions should be provided to students indicating how instructional content aligns with skills needed to conduct assessment. Training that addresses the theoretical and transactional components of PA is important, but teachers should recognize that when developing assessment skills learners undergo a developmental catharsis related to building trust and establishing a secure and comfortable identity as an assessor. Peer review systems should quantify the relative contribution of each reviewer through the measurement of frequency, timeliness and accuracy of the feedback, compared with instructor standards/evaluations.
Originality/value
This paper reduces the gap in the literature concerning how PA evolves over time and identifies factors related to the etiology of the peer-review process. In addition, the paper reveals new information regarding the calibration between instructor and peer evaluations.
As the service industry moves toward self-service, peer feedback serves a critical role in this shift for educational services. Peer feedback is a process by which students provide feedback to each other. One of its major benefits is that it enables students to become actively involved in the learning and assessment process and play an integral role in the delivery and quality of their education. However, a primary concern is that students do not consistently provide each other with quality feedback, especially in science, technology, engineering, and mathematics (STEM) disciplines in which gender stereotypes may hinder the ability of women to provide critical peer feedback. A potential way to improve peer feedback is to create anonymous review settings. This study examines how anonymity alters the nature of peer feedback in a large introductory undergraduate statistics class for computer science and engineering majors. In this class, peers review a series of team video projects as either anonymous or nonanonymous reviewers. Our results show that female peer reviewers were more affected by the anonymity setting than the male peer reviewers. We discuss the implications of these findings for promoting greater participation and retention of women in underrepresented STEM disciplines and the design of effective peer-review processes for improved student achievement and satisfaction.
Increasing classroom sizes and decreasing financial and human resources have encouraged educators to seek innovative strategies to manage large classrooms. Several instructors have begun using web-based peer reviews as a way to increase open-ended feedback. Recent work in design-based classes has revealed that students struggle to provide meaningful peer feedback. Furthermore, it remains unclear how best to increase student motivation and engagement with the process. In a sophomore mechanical engineering class, we investigated the effect of a collaborative team of reviewers (a team of reviewers generating a single review) on the quality of feedback generated and on student perception of the process. Feedback generated by 117 students on their peers’ design projects over two assignments was analyzed using a mixed-methods approach. We found that a collaborative team of reviewers produced higher quality feedback than did individual reviewers. Students spent more time on reviews in teams but found the process engaging and more fun than with individual reviews. Furthermore, students perceived individual and team review tasks as requiring similar levels of effort. Our findings indicate that the team review approach could help reviewers provide better feedback in engineering design reviews. Additionally, collaboration improved student engagement in the process. Over the past two decades, peer reviews have remained a solitary endeavor—this study is the first group-process implementation of peer review and provides a basis for future exploration of the topic.
Peer feedback often has positive effects on student learning processes and outcomes. However, students may not always be honest when giving and receiving peer feedback as they are likely to be biased due to peer relations, peer characteristics and personal preferences. To alleviate these biases, anonymous peer feedback was investigated in the current research. Research suggests that the expertise of the reviewer influences the perceived usefulness of the feedback. Therefore, this research investigated the relationship between expertise and the perceptions of peer feedback in a writing assignment of 41 students in higher education with a multilevel analysis. The results show that students perceive peer feedback as more adequate when they know the reviewer perceives him/herself to have a high level of expertise. Furthermore, the results suggest that students who received feedback from a peer whose perceived expertise was closer to their own were more willing to improve their own assignment.
Articles discussing and analysing student peer-review activities proliferate in the educational literature, typically describing one or more class exercises where students provide feedback on each other’s work. These papers usually focus on a peer-review activity designed as a scholarly study, and make conclusions about its success or otherwise. There is not one standard model for ‘peer-review’, and information on the many different assessment designs used is distributed over an increasing number of publications and websites. This paper provides a meta-review of peer-review activities as they are implemented in practice, using configuration data from over a thousand assignments conducted using an online peer-review system during an eight-year period. We present data on the wide variety of assignment designs and the parameters that comprise them, their rubrics, and comparisons between subject areas. Information on the norms and range of all decisions to be made will encourage instructors (both new to and experienced in conducting peer-review activities) to reflect on and justify the choices they make.
Self and peer-assessment are becoming central aspects of student-centred assessment processes in higher education. Despite increasing evidence that both forms of assessment are helpful for developing key capabilities in students, such as taking more responsibility for their own learning, developing a better understanding of the subject matter, assessment criteria and their own values and judgements, and developing critical reflection skills, both forms of assessment are still not the norm at universities. This paper provides the findings of a two-year study of formative self and peer-assessment at an Australian university. The study supports other research showing that students tend to regard formative self and peer-assessment as beneficial for gaining more insights about the assessment process and for improving their own work. We argue that self and peer-assessment requires careful design and implementation for it to be an effective tool for formative assessment processes; and that the development of students’ capacities for giving feedback, and the continuous and timely involvement of the teacher, are central aspects for successful self and peer-assessment. The move to self and peer-assessment is not simple for teachers and students but is worthwhile and necessary for twenty-first century higher education.
The term ‘peer assessment’ may apply to a range of student activities. This imprecision may impact on the uptake of peer assessment pedagogies. To better describe peer assessment approaches, typologies of peer assessment diversity were previously derived from the education literature. However, these typologies have not yet been tested with ‘real-life’ peer assessment examples, nor do they consider broader contextual matters. We present an augmented peer assessment framework, refined through analysing faculty accounts of their peer assessment practices. Our framework subsumes previous attempts to classify peer assessment, and extends them to include technology use, resources and policy, which were new features of our data not present in previous frameworks. In the current higher education climate, these considerations may be crucial for the scalability and success of peer assessment. The framework proposed in this paper provides both precision and concision for researchers and educators in studying and implementing peer assessment.
Despite compelling evidence of its potential effectiveness, uptake of self and peer assessment in higher education has been slower than expected. As with other assessment practices, self and peer assessment is ultimately enabled, or inhibited, by the actions of individual academics. This paper explores what academics see as the benefits and challenges of implementing self and peer assessment, through the analysis of interviews with 13 Australian academics. Thematic analysis of our qualitative data identified seven themes of benefits and five challenges. Our academics showed strong belief in the power of self and peer assessment as formative assessment, contrary to past literature which has focussed on the accuracy of students’ marking. This paper therefore brings insights as to not only what academics value about self and peer assessment but also identifies potential inhibitors in practice. Recommendations are made about improving the design and implementation of self and peer assessment in higher education.
Despite increasing demands by stakeholders to instil teamwork skills in accounting graduates, the assessment practices associated with teamwork in the accounting curricula are not yet well developed. This study examined the association between students’ perceptions of peer assessment attributes (i.e. anonymity, question relevance and mark allocation) and the perceived effectiveness of peer assessment in preventing free riding, reducing conflict, improving communication and enhancing the quality of contributions to teamwork. A peer assessment approach was trialled in a master's accounting course at an Australian university, and data were collected via a survey of students’ perceptions of their experiences. Quantitative and qualitative data were gathered to address the research objectives. The results suggested that students who found the mark allocation appropriate also found that the peer evaluation system had a positive effect in reducing free riding, improving communication within the team and enhancing the quality of contribution of the team members. Students who highly valued the anonymity in peer assessment also found that peer assessment reduced free riding among team members. Students’ qualitative comments suggested that additional mechanisms are needed in peer assessment, including formative feedback from peers and having teaching staff moderate the marks for summative assessment purposes.
Teachers often complain about the quality of students' written essays in higher education. This study explores the relations between scripted online peer feedback processes and the quality of written argumentative essays as they occur in an authentic learning situation with direct practical relevance. Furthermore, the effects of the online argumentative peer feedback script on students' written argumentative essays are studied. A pre-test, post-test design was used with 189 undergraduate students who were assigned to groups of three. They were asked to explore various perspectives, and the ‘pros and cons’, on the topic of ‘Genetically Modified Organisms (GMOs)’ in order to write an argumentative essay in the field of biotechnology. The findings reveal that successful students and groups differ from less-successful students and groups in terms of their feedback quality. This implies that when students engage in high-quality, elaborated and justified peer feedback processes, they write high-quality argumentative essays. Furthermore, the results show that the online argumentative peer feedback script enhances the quality of students' written argumentative essays. Explanations for these results, limitations, and recommendations for further research are provided.