
From plans to actions: A process model for why feedback features influence feedback implementation


Abstract and Figures

Implementing peer feedback in revisions is a complex process that involves first planning to fix problems and then actually implementing feedback through revisions. Both phases are influenced by features of the peer feedback itself, but potentially in different ways, yet prior research has examined neither the separate roles of feedback features in planning nor the mediating role of planning in the relationship between feedback features and implementation. We build on a process model to investigate whether feedback features had differing relationships to plans to ignore or act on feedback versus actual implementation of feedback in the revision, and whether planning mediated the relationship between feedback features and actual implementation. Source data consisted of peer feedback comments received, revision plans made, and revisions implemented by 125 US high school students given a shared writing assignment. Comments were coded for feedback features and implementation in the revision. Multiple regression analyses revealed that having a comment containing a specific solution or a general suggestion predicted revision plans, whereas having a comment containing an explanation predicted actual implementation. Planning mediated the relationship to actual implementation for the two feedback features that predicted plans: suggestion and solution. Implications for practice are discussed.
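The mediation finding in this abstract can be illustrated with a minimal product-of-coefficients sketch: regress the mediator (planning) on the feedback feature to obtain path a, regress implementation on both the mediator and the feature to obtain path b and the direct effect c', and take a·b as the indirect effect. Everything below is a synthetic stand-in for illustration only, not the study's data or analysis; a real analysis would also bootstrap a confidence interval for the indirect effect.

```python
import random

# Hypothetical variables: x = comment contains a solution (0/1),
# m = revision plan made (mediator), y = implementation in the revision.
rng = random.Random(0)
n = 200
x = [i % 2 for i in range(n)]
m = [0.5 * xi + rng.gauss(0, 0.1) for xi in x]                       # planning driven by the feature
y = [0.8 * mi + 0.1 * xi + rng.gauss(0, 0.1) for mi, xi in zip(m, x)]

def centered(v):
    mu = sum(v) / len(v)
    return [vi - mu for vi in v]

xc, mc, yc = centered(x), centered(m), centered(y)

# Path a: simple OLS slope of mediator on feature.
a = sum(mi * xi for mi, xi in zip(mc, xc)) / sum(xi * xi for xi in xc)

# Paths b and c': regress outcome on mediator and feature jointly
# (2x2 normal equations on centered data).
Smm = sum(mi * mi for mi in mc)
Sxx = sum(xi * xi for xi in xc)
Smx = sum(mi * xi for mi, xi in zip(mc, xc))
Smy = sum(mi * yi for mi, yi in zip(mc, yc))
Sxy = sum(xi * yi for xi, yi in zip(xc, yc))
det = Smm * Sxx - Smx ** 2
b = (Smy * Sxx - Sxy * Smx) / det        # mediator -> outcome, feature held fixed
c_prime = (Sxy * Smm - Smy * Smx) / det  # direct effect of the feature

indirect = a * b                          # effect flowing through planning
print(f"a={a:.2f}  b={b:.2f}  c'={c_prime:.2f}  indirect={indirect:.2f}")
```

With the generating values above, the recovered indirect effect a·b sits near 0.4, illustrating how a feature can influence implementation chiefly through its effect on planning.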
Instructional Science (2021) 49:365–394
https://doi.org/10.1007/s11251-021-09546-5
ORIGINAL RESEARCH
From plans toactions: Aprocess model forwhy feedback
features influence feedback implementation
YongWu1,2 · ChristianD.Schunn2
Received: 3 March 2019 / Accepted: 24 May 2021 / Published online: 25 June 2021
© The Author(s), under exclusive licence to Springer Nature B.V. 2021
Keywords Feedback features · Implementation · Peer review · Planning · Revision
Introduction
Peer feedback involves students exchanging information about their performance with
the aim of narrowing the gap between their current performance and the desired performance
(Panadero et al., 2018; Shute, 2008). Peer feedback is increasingly included in a variety
of educational settings for different purposes (e.g., summative and/or formative purposes,
collaborative learning) (Kluger & DeNisi, 1996; Topping, 1998; van Gennip et al., 2010).
* Yong Wu
yongwu@pitt.edu
1 Center forResearch onTechnology‑Enhanced Language Education, School ofHumanities, Beijing
University ofPosts andTelecommunications, Beijing100876, China
2 Learning Research andDevelopment Center, University ofPittsburgh, 3939 O’Hara Street,
Pittsburgh, PA15217, USA
... Previous research showed that three of the four (weaknesses identification, suggestion, and example) were indeed common elements of peer feedback. Weaknesses were identified in between 24 and 45% of comments (Nelson & Schunn, 2009; Patchan et al., 2016; Wu & Schunn, 2020); general suggestions or specific solutions were identified in between 26 and 55% of comments (Jin et al., 2022; Nelson & Schunn, 2009; Patchan et al., 2018; Wu & Schunn, 2021b); and examples were identified in between 25 and 42% of comments (Nelson & Schunn, 2009; Patchan et al., 2016, 2018). However, it is especially interesting that suggestions and examples actually were associated with decreases in comment length. ...
Article
Full-text available
Peer feedback can be highly effective for learning, but only when students give detailed and helpful feedback. Peer feedback systems often support student reviewers through instructor-generated comment prompts that include various scaffolding features. However, there is little research in the context of higher education on which features tend to be used in practice or on the extent to which typical uses impact comment length and comment helpfulness. This study explored the relative frequencies of twelve specific features (divided into metacognitive, motivational, strategic, and conceptual scaffolds) that could be included in scaffolding comment prompts and their relationship to comment length and helpfulness. A large dataset from one online peer review system was used, which involved naturalistic course data from 281 courses at 61 institutions. The degree of presence of each feature was coded in the N = 2883 comment prompts in these courses. Since a given comment prompt often contained multiple features, statistical models were used to tease apart the unique relationship of each comment prompt feature with comment length and helpfulness. The metacognitive scaffolds of prompts for elaboration and setting expectations, and the motivational scaffold of binary questions, were positively associated with mean comment length. The strategic scaffolds of requests for strength identification and example were positively associated with mean comment helpfulness. Only the conceptual scaffold of subdimension descriptions was positively associated with both. Interestingly, instructors rarely included the most useful features in comment prompts. The effects of comment prompt features were larger for comment length than for comment helpfulness. Practical implications for designing more effective comment prompts are discussed.
... Students' perceptions of AWE feedback affect their engagement with AWE feedback (Zhang, 2020). When students perceive the feedback as detailed and instructive, they are more inclined to incorporate it into their writing (Wu & Schunn, 2021). Students may choose not to respond to AWE feedback if they perceive that the feedback counters their beliefs (Swain, 2006). ...
Article
Full-text available
Automated writing evaluation (AWE) provides an instant and cost-effective alternative to human feedback in assessing student writing, and is therefore widely used as a pedagogical support tool in writing instruction. However, studies on how students perceive the use of AWE as a surrogate writing tutor in out-of-class autonomous learning are rare. This study employs a convergent parallel mixed methods approach to explore how students perceive the effects of an AWE program, iWrite, in an autonomous learning context. The subjects of the current study are 146 non-English-major undergraduates at a public university in China. The findings indicate that students are overall satisfied with using iWrite as a surrogate writing tutor with minimal human facilitation in autonomous learning. They are willing to make repeated revisions to their writing based on the feedback from the automated writing tutor. The results also suggest that the accessibility of the AWE tool for out-of-class use could enhance learner autonomy, as students exhibited increased engagement and improved self-regulation following the 16-week intervention. Students perceive iWrite's language-based feedback very positively, but their perceptions of content-based feedback from iWrite are comparatively negative. Findings have implications for the implementation of AWE in autonomous learning as well as for the design of AWE systems in educational settings.
... With regard to the contribution to peer feedback research, these LLMs can be integrated with existing peer feedback implementation/learning improvement models (e.g., Wu & Schunn, 2020b; Wu & Schunn, 2021; Fong & Schallert, 2023). This integration will make it possible to evaluate peer feedback in real time, providing immediate insights and predictions about learning improvements and knowledge gains (Lin et al., 2024), which is more timely compared to traditional post-class assessments. ...
Article
Peer feedback is a pedagogical strategy for peer learning. Despite recent indications of Large Language Models' (LLMs') potential for content analysis, there is limited empirical exploration of their application in supporting the peer feedback process. This study enhances the analytical approach to peer feedback activities by employing state-of-the-art LLMs for automated peer feedback feature detection. The research critically compares three models (GPT-3.5 Turbo, Gemini 1.0 Pro, and Claude 3 Sonnet) to evaluate their effectiveness in automated peer feedback feature detection. The study involved 69 engineering students from a Singapore university participating in peer feedback activities on the online platform Miro. A total of 535 peer feedback instances were collected and human-coded for eleven features, resulting in a dataset of 5,885 labeled samples. These features included various cognitive and affective dimensions, elaboration, and specificity. The results indicate that GPT-3.5 Turbo is the most effective model, offering the best combination of performance and cost-effectiveness. Gemini 1.0 Pro also presents a viable option with its higher throughput and larger context window, making it particularly suitable for educational contexts with smaller sample sizes. Conversely, Claude 3 Sonnet, despite its larger context window, is less competitive due to higher costs and lower performance, and its lack of support for training and fine-tuning with researchers' data weakens its learning capabilities. This research contributes to the fields of AI in education and peer feedback by exploring the use of LLMs for automated analysis. It highlights the feasibility of employing and fine-tuning existing LLMs to support pedagogical design and evaluations from a process-oriented perspective.
... For example, students may refuse to accept peer feedback because they do not perceive its accuracy or find it hard to comprehend (Zhan, 2021). Therefore, recent research emphasizes ways of motivating students to incorporate the feedback they receive (Wu & Schunn, 2021; Wichmann, Funk, & Rummel, 2018). ...
Article
Full-text available
The study examined the influence of feedback features on revision uptake in dialogic peer feedback activities, and the moderating effect of self-efficacy and prior knowledge on this relationship. Data were collected over a 10-week course at a comprehensive university in China, involving 29 students and resulting in 242 revision-oriented comments. To understand peer feedback features, we analyzed the feedback received by students in terms of cognition (identification, explanation, suggestion, or solution) and affect (positive, negative, positive-and-negative, or neutral). Binary logistic regression analysis revealed that: (1) explanation, suggestion and positive-and-negative evaluation negatively predicted revision uptake; (2) self-efficacy had a significant positive effect on revision uptake, and also played a role in moderating the relationship between explanation and uptake; (3) although prior knowledge could not directly predict revision uptake, it moderated the relationship between positive-and-negative evaluation and feedback uptake. These findings have instructional implications for designing and organizing peer feedback activities.
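The analysis described above combines binary logistic regression with moderation (interaction) terms. As a rough illustration of that model family only, the sketch below fits a logistic regression with an explanation × self-efficacy interaction to synthetic data by gradient descent. All coefficients, the sample size, and the variable names are invented for illustration; they are not the study's data or estimates.

```python
import math
import random

# Simulate hypothetical data: does a comment's explanation feature predict
# revision uptake, moderated by the receiver's self-efficacy?
rng = random.Random(1)
n = 300
X, y = [], []
for _ in range(n):
    expl = 1.0 if rng.random() < 0.5 else 0.0   # comment contains an explanation
    se = rng.gauss(0, 1)                        # standardized self-efficacy
    logit = -0.2 - 1.0 * expl + 1.0 * se + 1.2 * expl * se
    p = 1.0 / (1.0 + math.exp(-logit))
    X.append([1.0, expl, se, expl * se])        # intercept, main effects, interaction
    y.append(1.0 if rng.random() < p else 0.0)  # simulated uptake (0/1)

# Fit by plain gradient descent on the mean negative log-likelihood.
w = [0.0, 0.0, 0.0, 0.0]
for _ in range(1500):
    grad = [0.0] * 4
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(4):
            grad[j] += (p - yi) * xi[j]
    w = [wj - 1.0 * g / n for wj, g in zip(w, grad)]

labels = ["intercept", "explanation", "self-efficacy", "explanation x self-efficacy"]
for name, wj in zip(labels, w):
    print(f"{name}: {wj:+.2f}")
```

A negative main-effect coefficient alongside a positive interaction coefficient is the signature of the moderation pattern the abstract reports: the feature's effect on uptake depends on the receiver's self-efficacy.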
... The effectiveness of peer feedback depends on its quality. High-quality feedback, characterized by praise, problem identification, solutions, and actionable advice, is more likely to be implemented by students (Wu & Schunn, 2020, 2021; Banihashem et al., 2022). However, challenges include distrust in peers' competence to provide reliable feedback. ...
Article
Full-text available
Feedback plays a pivotal role in language acquisition and writing skill development. Despite its effectiveness in some contexts, traditional teacher-to-student assessment faces considerable limitations, particularly in large higher education classrooms where personalized feedback is scarce. In such settings, peer feedback has emerged as a viable and promising alternative. However, student scepticism towards its effectiveness presents a significant obstacle to its broader adoption. Negative attitudes, often rooted in doubts about peers’ competence or the value of their comments, can undermine the potential benefits of peer review. This paper revisits the findings of a quasi-experimental study conducted among 60 first-year students at Ibn Zohr University in Agadir, Morocco, which examined the impact of peer reviewing on writing development. A follow-up survey was employed to assess participants’ levels of trust in the feedback provided by peers. Based on the findings of this survey and their implications within the context of the study, this paper aims to offer pedagogical recommendations for improving the adoption and success of peer feedback through initial peer review training. Effective trust-building strategies are discussed, focusing on two fundamental types: communication trust and competence trust.
... Moreover, existing digital tools developed to facilitate peer feedback generation were not based on a comprehensive definition of effective feedback features in writing. According to the literature (Kerman et al., 2022; Patchan et al., 2016; Wu & Schunn, 2021), writing feedback features include (1) affective (inclusion of positive emotions such as praise or compliments and negative emotions such as anger or disappointment), (2) description (summary statement of the essay), (3) identification (identification and localisation of the problem in the essay), (4) justification (elaborations and justifications of the identified problem) and (5) constructive (inclusion of recommendations and action plans for further improvements). However, in designing and developing digital tools for supporting peer feedback generation, many studies have overlooked essential features, such as neglecting the affective aspect, despite its known importance. ...
Article
Full-text available
As a vital learning activity in second language (L2) writing classrooms, peer feedback plays a crucial role in improving students' writing skills. However, student reviewers face challenges in providing impactful feedback on peers' essays. Low‐quality peer reviews emerge as a persistent problem, adversely affecting the learning effect of peer feedback. To enhance students' peer feedback provision, this study introduces EvaluMate, an AI‐supported peer review system, which incorporates a chatbot named Eva, designed to evaluate and provide feedback on student reviewers' comments on peers' essays. Forty‐four Chinese undergraduate students engaged with EvaluMate, utilising its features to generate feedback on peers' English argumentative essays. Chat log data capturing the students' interactions with the chatbot were collected, including the comments they wrote on peer essays and the feedback offered by the chatbot on their comments. The results indicate that the integration of AI supervision improved the quality of students' peer reviews. Students employed various strategies during their comment revision in response to AI feedback, such as introducing new points, adding details, and providing illustrative examples, which helped improve their comment quality. These findings shed light on the benefits of AI‐supported peer review systems in empowering students to provide more valuable feedback on peers' written work. Practitioner notes What is already known about this topic Scholars have extensively investigated diverse pedagogical strategies to enhance students' peer feedback provision skills in second language (L2) writing classrooms. Artificial intelligence (AI) technologies have been utilised to monitor and evaluate the peer feedback generated by student reviewers. AI‐enabled peer feedback evaluation tools have demonstrated the ability to provide valid assessments of student reviewers' peer feedback. 
What this paper adds In the context of L2 writing, there is a lack of bespoke AI‐enabled peer feedback evaluation tools. To address this gap, we have developed an AI‐supported peer review system, EvaluMate, which incorporates a large language model‐based chatbot named Eva. Eva is designed to provide feedback on L2 students' comments on their peers' writing. While previous studies have primarily focused on assessing the validity of AI‐enabled peer feedback evaluation tools, little is known about how students incorporate AI support into improving their peer review comments. To bridge this gap, our study examines not only whether using the system (EvaluMate) can enhance the quality of L2 students' peer review comments but also how students respond to Eva's feedback when revising their comments. Implications for practice and/or policy The development of the AI‐supported peer review system (EvaluMate) introduces an innovative pedagogical approach for L2 writing teachers to train and enhance their students' peer feedback provision skills. Integrating AI supervision into L2 students' peer feedback generation improves the quality of comments provided by student reviewers on their peers' writing. Students employ various strategies when revising their comments in response to Eva's feedback, and these strategies result in varying degrees of improvement in comment quality. L2 writing teachers can teach effective revision strategies to their students.
... Peer feedback is closely related to peer review. Peer review, in a broader perspective, consists of ratings and scores (i.e., peer assessment) and written or oral comments (i.e., peer feedback) (Wu & Schunn, 2021). Although studies (Boud et al., 1999; Boud & Bearman, 2024) have pointed out that scoring in peer assessment inhibits cooperation, the provision of comments in peer feedback enhances cooperation and collaboration toward achieving the goal. ...
... Critically, the quality of the feedback produced is key to the learning of both provider and recipient (K. Cho & MacArthur, 2011; van Popta et al., 2017; Wu & Schunn, 2021a, 2023a; Yu & Schunn, 2023). Nonetheless, investigations into the quality of peer feedback have shown that students often provide short, vague, and nonconstructive comments during peer feedback activities (Hovardas et al., 2014). ...
Article
The learning benefits of peer feedback depend upon students actively participating and providing high-quality comments. Theoretically, students should come to see this relationship and thus generally provide more constructive feedback with experience. However, even with experience, students often provide short and relatively unhelpful comments. Given that providing longer feedback—which tends to include more useful content—is consistently associated with learning, the relationship between peer feedback experiences and feedback length requires exploration. We analyzed online peer feedback data from 418 assignments across 197 courses at 57 different institutions. We first validate that there is a consistent linear increase in probability of including useful comment features when comment length increases and that most peer comments tend to be so short that useful features are rare. Then, utilizing negative binomial regression and meta-analysis, we sought to examine the effect size and consistency of the relationship between specific peer feedback experiences and changes in peer feedback length, conceptualized as other-regulation. Our findings revealed several other-regulation patterns that were highly consistent across courses and assignments, conceptualized in terms of modeling via receiving longer feedback, potential changes in self-efficacy due to the perceived helpfulness of peers’ feedback, and positive reinforcement from recognition for their own review quality.
Article
Recent research highlights the concept of feedback literacy, focusing on students’ active engagement in feedback processes, including giving and receiving peer feedback. However, most studies rely on self-assessments through surveys and interviews, with few examining actual feedback behaviors. This study explores the behavioral aspects of peer feedback literacy among 844 high school students using an online system for peer feedback and revisions. Multiple measures related to provided ratings and comments as well as use in revisions were extracted using systematic coding. Factor analysis and structural equation modeling identified two distinct elements of high-quality feedback provision—providing features and rating validity—and confirmed a moderate correlation between the ability to provide quality feedback and the use of feedback. These skills were weakly correlated with writing ability. This research contributes to the emerging literature of peer feedback literacy and supports the development of more effective peer feedback training approaches.
Article
Full-text available
Background Medicine and public health are shifting away from a purely “personal responsibility” model of cardiovascular disease (CVD) prevention towards a societal view targeting social and environmental conditions and how these result in disease. Given the strong association between social conditions and CVD outcomes, we hypothesize that accelerated aging, measuring earlier health decline associated with chronological aging through a combination of biomarkers, may be a marker for the association between social conditions and CVD. Methods We used data from the Coronary Artery Risk Development in Young Adults study (CARDIA). Accelerated aging was defined as the difference between biological and chronological age. Biological age was derived as a combination of 7 biomarkers (total cholesterol, HDL, glucose, BMI, CRP, FEV1/h², MAP), representing the physiological effect of “wear and tear” usually associated with chronological aging. We studied accelerated aging measured in 2005-06 as a mediator of the association between social factors measured in 2000-01 and 1) any incident CVD event; 2) stroke; and 3) all-cause mortality occurring from 2007 through 2018. Results Among 2978 middle-aged participants, mean (SD) accelerated aging was 3.6 (11.6) years, i.e., the CARDIA cohort appeared to be, on average, 3 years older than its chronological age. Accelerated aging partially mediated the association between social factors and CVD (N=219), stroke (N=36), and mortality (N=59). Accelerated aging mediated 41% of the total effects of racial discrimination on stroke after adjustment for covariates. Accelerated aging also mediated other relationships but to lesser degrees. Conclusion We provide new evidence that accelerated aging based on easily measurable biomarkers may be a viable marker to partially explain how social factors can lead to cardiovascular outcomes and death.
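The preview above does not specify how the seven biomarkers are combined into a biological age, so the sketch below uses one simple, commonly taught construction as an assumption: average the biomarkers' z-scores (treating higher values as more physiological wear) and rescale the composite to the cohort's age distribution. The cohort, biomarker subset, and values are made up for illustration.

```python
from statistics import mean, stdev

# Hypothetical cohort: chronological age plus three of the study's seven
# biomarkers (marker names real, values illustrative).
cohort = [
    {"age": 45, "bmi": 24, "glucose": 90,  "crp": 1.0},
    {"age": 45, "bmi": 31, "glucose": 115, "crp": 4.0},   # uniformly worse profile
    {"age": 50, "bmi": 26, "glucose": 95,  "crp": 1.5},
    {"age": 40, "bmi": 22, "glucose": 85,  "crp": 0.8},
]
markers = ["bmi", "glucose", "crp"]  # here, higher value = more "wear and tear"

def z(value, values):
    """Standard score of value within the cohort's distribution."""
    return (value - mean(values)) / stdev(values)

ages = [p["age"] for p in cohort]
for p in cohort:
    # Composite biomarker burden, then rescaled onto the age scale.
    composite = mean(z(p[m], [q[m] for q in cohort]) for m in markers)
    p["bio_age"] = mean(ages) + composite * stdev(ages)
    p["accel"] = p["bio_age"] - p["age"]   # accelerated aging, in years

for p in cohort:
    print(f"age {p['age']}: biological age {p['bio_age']:.1f}, "
          f"accelerated aging {p['accel']:+.1f} years")
```

Under this construction, the participant with the uniformly worse biomarker profile gets a positive accelerated-aging value (biologically "older" than their chronological age), which is the mediator quantity the study analyzes.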
Article
Full-text available
Peer feedback has become a common practice in MOOCs for its capacity to scale formative assessment and feedback on higher-order abilities. Though many practices for improving peer assessment have been examined, there is a lack of knowledge of how instructional design and platform features affect the quality of peer assessment and the relative frequency of different types of peer feedback comments. This study aimed to improve understanding of the relationship between feedback quality and peer feedback's pedagogical design. Peer feedback instructional design and peer feedback comment data were examined from two MOOCs in a similar domain of personal relevance but with substantially different designs. Country of origin of the feedback provider was also examined to control for cultural/linguistic effects. Differences between the two courses were observed in both the pedagogical designs and in the focus of peer comments, suggesting that peer feedback design is an important guide for the focus of peer feedback comments. Furthermore, the results support the idea that instructional design features, mainly the guide's structure and focus, determine the type of comments that participants will produce and hence receive.
Preprint
Full-text available
"We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005."
Article
Full-text available
Within the higher education context, peer feedback is frequently applied as an instructional method. Research on the learning mechanisms involved in the peer feedback process has covered aspects of both providing and receiving feedback. However, a direct comparison of the impact that providing and receiving peer feedback has on students’ writing performance is still lacking. The current study compared the writing performance of undergraduate students (N = 83) who either provided or received anonymous written peer feedback in the context of an authentic academic writing task. In addition, we investigated whether students’ peer feedback perceptions were related to the nature of the peer feedback they received and to writing performance. Results showed that both providing and receiving feedback led to similar improvements of writing performance. The presence of explanatory comments positively related both to how adequate students perceived the peer feedback to be, as well as to students’ willingness to improve based upon it. However, no direct relation was found between these peer feedback perceptions and students’ writing performance increase. The anonymized data and analyses (syntaxes) are accessible via the following link: https://osf.io/awkd9
Article
Research has shown that engaging students in peer feedback can help students revise documents and improve their writing skills. But the mechanistic pathways by which skills develop have remained untested: Does receiving and providing feedback lead to learning because it produces more extensive revision behavior or is such immediate implementation of feedback unnecessary? These pathways were tested through analyses of the relationships between feedback provided and received, feedback implemented and overall revisions, and improved writing quality in a new article. Overall, the number of revisions predicted growth in writing ability, and both amount of received and provided feedback were associated with being more likely to make revisions. However, providing feedback was also directly related to growth in writing ability.
Article
Technology-facilitated peer assessment is gaining increasing attention. However, evidence for the contribution of technology-facilitated peer assessment to learning achievements has not been investigated. The present meta-analysis integrated findings on the effects of technology-facilitated peer assessment based on two main elements: (1) technology-facilitated peer assessment, (2) the use of extra supporting strategies in technology-facilitated peer assessment. A total of 37 empirical studies published from 1999 to 2018 were selected and analysed. Results indicated that technology-facilitated peer assessment had a significant and medium effect on learning achievements with an overall mean effect size of 0.576. The use of extra supporting strategies in technology-facilitated peer assessment also produced a positive and medium effect on students’ learning achievements with an overall mean effect size of 0.543. Different moderator variables, such as task types, assessment modes, training for assessors, durations, grouping types and assessment methods were related to different effect sizes. The results together with the implications for both practice and research are discussed.
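An overall mean effect size such as the 0.576 reported above is typically a weighted average of per-study effect sizes. The sketch below shows a minimal fixed-effect (inverse-variance) pooling step; the per-study effect sizes and standard errors are invented for illustration, not taken from the meta-analysis.

```python
# Hypothetical per-study standardized effect sizes (d) and standard errors (se).
studies = [
    {"d": 0.40, "se": 0.15},
    {"d": 0.70, "se": 0.20},
    {"d": 0.55, "se": 0.10},
]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled d = {pooled:.3f} (95% CI ±{1.96 * pooled_se:.3f})")
```

A full meta-analysis would add a heterogeneity test and, when between-study variance is non-negligible, a random-effects model, which widens the weights' denominators; the weighted-average core stays the same.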
Article
Peer assessment has proven to have positive learning outcomes. Importantly, peer assessment is a social process and some claim that the use of anonymity might have advantages. However, the findings have not always been in the same direction. Our aims were: (a) to review the effects of using anonymity in peer assessment on performance, peer feedback content, peer grading accuracy, social effects and students' perspective on peer assessment; and (b) to investigate the effects of four moderating variables (educational level, peer grading, assessment aids, direction of anonymity) in relation to anonymity. A literature search was conducted including five different terms related to peer assessment (e.g., peer feedback) and anonymity. Fourteen studies that used a control group or a within-group design were found. The narrative review revealed that anonymous peer assessment seems to provide advantages for students' perceptions about the learning value of peer assessment, delivering more critical peer feedback, increased self-perceived social effects, and a slight tendency toward better performance, especially in higher education and with fewer peer assessment aids. Some conclusions are that: (a) when implementing anonymity in peer assessment the instructional context and goals need to be considered, (b) existent empirical research is still limited, and (c) future research should employ stronger and more complex research designs.
Article
The act of revising is an important aspect of academic writing. Although revision is crucial for eliminating writing errors and producing high-quality texts, research on writing expertise shows that novices rarely engage in revision activities. Providing information on writing errors by means of peer feedback has become a popular method in writing instruction. However, despite its popularity, students have difficulties in leveraging the potential of peer feedback: feedback uptake is low and students engage in little revision. Instructional support might help learners to make sense of peer feedback and to reflect on the provided information more deeply. The present study investigated the effect of sense-making support on feedback uptake as well as on revision skills, in particular problem detection and problem correction. In an experimental study, 73 university students were randomly assigned to conditions with or without sense-making support. The results indicate that feedback uptake improved concerning two out of three variables: students in the condition with sense-making support made fewer new errors and rejected more incorrect feedback comments. Students’ revision skills only improved with regard to problem detection. Overall, we were able to show that peer feedback alone might not be sufficient to make successful changes in the text and improve revision skills. Sense-making support proved to be effective to some extent and partially helped to maximize the benefits of peer feedback.