Exploring tertiary EFL writing tutors' perceptions of the
appropriateness of peer assessment for writing
Dr. Huahui Zhao
Address: School of Education, University of Leeds, Leeds LS2 9JT, United Kingdom
Email: H.Zhao1@leeds.ac.uk
Telephone numbers: (00)44 0113 343 5593
Abstract
Despite the increasing volume of research in peer assessment for writing, few
studies have been conducted to explore teachers’ perceptions of its
appropriateness for local writing instruction. It is essential to understand
teachers’ perceptions of peer assessment as teachers play an important role in
whether and how peer assessment would be implemented in their instruction. The
current study investigated tertiary English writing tutors’ perceptions of the
appropriateness of peer assessment for EFL writing in China, where peer
assessment has been increasingly discussed and researched but only occasionally
used in teaching. The current study scrutinised the reasons behind its limited use
via in-depth exploratory interviews with 25 writing tutors with different teaching
backgrounds.
The interview data showed tutors' limited knowledge of peer assessment and unanimous hesitation in using it. The former was explained by insufficient instruction and training in peer assessment. The latter was attributed to the incompatibility of peer assessment with the examination-oriented education system, learners' low English language proficiency and learning motivation, and the conflict of peer assessment with the entrenched teacher-driven learning culture. Based on the findings, suggestions are made about training and engaging teachers to use peer assessment effectively in instruction.
Key words: Teachers' perceptions of the appropriateness of peer assessment; resistance to using peer assessment; constraints on using peer assessment; culture of learning in China
Introduction
Reviews of existing research on peer assessment have shown its predominant focus on the roles of peer assessment in learning and learners' preference for peer assessment (Yu and Lee 2016). Few studies have been conducted to explore teachers' perceptions of the appropriateness of peer assessment for instruction (Ngar-Fun Liu and Carless 2006; Adachi, Tai and Dawson 2017). However, teachers' perceptions of peer assessment need to be explored to understand what enables or prevents them from using it (Adachi et al. 2017). As substantiated in Panadero and Brown (2017), teachers' beliefs about peer assessment exert significant effects on their use of peer assessment.
Literature review
Existing studies primarily investigate peer assessment from students' perspectives, leaving teachers' perceptions of peer assessment largely unexplored.
Studies on the roles of peer assessment, students’ perceptions of peer assessment, and
training in peer assessment
Existing studies have predominantly employed a comparative method to examine the roles of peer assessment in writing, using teacher assessment as the comparison baseline. Three research lines have been pursued: the roles of peer assessment in revisions, students' perceptions of peer and teacher assessment, and training in peer assessment.
Learners have been observed to use peer feedback to revise writing drafts, albeit less frequently than teacher feedback (Paulus 1999; Miao Yang, Badger and Yu 2006; Cho and MacArthur 2010; Gielen et al. 2010; Hu and Lam 2010). However, they have also been found to understand peer feedback better than teacher feedback, mainly because feedback is discussed more interactively with peers than with tutors (Zhao 2010).
Learners have expressed their willingness to have peer assessment alongside their preferred teacher assessment (Nelson and Carson 1998; Zhang 1999; Hu and Lam 2010; Lee 2015; Lei 2017). However, they have also cast doubt on the reliability of peer feedback in view of learners' developing language proficiency (Ngar-Fun Liu and Carless 2006; Kaufman and Schunn 2011; Wang 2014).
To relieve learners' concerns over peer assessment, training in conducting effective peer assessment has been suggested. Training has been observed to reduce the discrepancies between teacher and peer feedback (Xiongyi Liu and Li 2014) and to improve both the quantity and quality of peer feedback (Hu 2005; Rahimi 2013; Y.-F. Yang and Meng 2013). Dynamic and ongoing teacher support for peer assessment has also been observed to encourage and enable learners to provide focused and constructive peer feedback (Zhao 2014).
Limited studies on teachers’ perceptions of peer assessment
In the large volume of literature on peer assessment, studies on teachers' perspectives on peer assessment have been few and far between, despite their vital role in the effective use of peer assessment (Adachi et al. 2017). Beach and Bridwell (1984, p. 312), for instance, suggest that:
The attitudes that teachers have toward writing strongly influence their own
teaching practices, particularly their evaluation of student writing. Their beliefs
serve as filters that train their attention to qualities (or lack thereof) in student
writing.
As far as peer assessment is concerned, Falchikov (1998, p. 18) argues that:
Teacher factors seem to involve traditional conceptions of student and teacher
roles, in which teachers ‘run the show’ and students receive the benefits of teacher
experience rather than of their own. Involving students in an important process
such as assessment requires a change in the traditional teacher (and student) role.
Changes to teacher and student roles required by peer assessment have concerned tutors and researchers for decades across different contexts. Freedman's survey of 560 writing tutors suggested that most tutors expressed a substantial level of doubt about the helpfulness of peer assessment for English writing (Freedman 1985). The five ESL writing tutors in Mangelsdorf's (1992) study were worried about peer feedback being too vague and about learners incorporating incorrect peer feedback into their revisions. Rollinson (2005) suggested that the time-consuming preparation of peer assessment could drive teachers away from using it in view of course or examination constraints. Similarly, Liu and Carless (2006) ascertained through interviews with eight teachers in Hong Kong that time constraints, unreliable peer grading, and students' still-developing knowledge inhibited teachers from using peer assessment there. Similar challenges of using peer assessment were reported in the Australian context by Adachi and her colleagues (2017). They identified five challenges of implementing peer assessment: time constraints, learners' and teachers' low motivation for getting involved in peer assessment, students' superficial engagement in peer assessment, insufficient feedback skills, and the technical challenges posed by online assessment. In Spain, Panadero and Brown (2017), based on surveys of 751 teachers across education sectors and subjects, reported that unreliable peer feedback, the negative learning climate generated by peer assessment, and students' distrust of peer feedback led to the infrequent use of peer assessment there.
The studies above have generated valuable information about why peer assessment is frequently excluded from local instruction contexts, calling for more similar studies in other contexts as "there has not been factor invariance for this instrument in every context" (Baird 2014, p. 362). This could be especially vital for Confucian-heritage-culture contexts, including China. The Confucian discourse focuses on the study of classic texts, prioritises outcomes over processes, and regards teachers as role models and students as bystanders or listeners (Scollon 2003). In contrast, peer assessment utilises peer writing as a learning resource, emphasises process-oriented learning, and encourages learners to get actively involved in learning.
Research questions
The current study explored teachers' perceptions of the appropriateness of peer assessment for tertiary EFL writing instruction in China, where resistance to peer assessment has been found to be stronger than in other parts of the world (Carson and Nelson 1994; Connor and Asenavage 1994; Carson and Nelson 1996; Chang 2016; Yu and Lee 2016). The following research question was asked:
What were English writing tutors' perceptions of the appropriateness of peer assessment for tertiary EFL writing instruction in China?
Through answering this question, the current study attempted to reveal the underlying reasons for the underuse of peer assessment in China, where large class sizes, staffing shortages and the urgent need to develop learner autonomy call for learner-centred teaching methods, including peer assessment.
Research context
The Chinese educational context is well known for its prolonged examination-driven and teacher-centred pedagogy (Berry 2011). Classroom observations of the participating teachers' instruction reflected this entrenched learning culture. The following features were observed:
(1) Instruction was strictly aligned with the assessment criteria for examination essays, focusing on grammatical accuracy and the variety of vocabulary and sentence structures;
(2) Students were asked to memorise and use the words and sentence structures extracted from 'model' articles from previous exams;
(3) Final examinations were carefully explained in class, often alongside examination-coping strategies;
(4) Few interactions occurred in class, with teachers referring to textbooks and lecturing throughout the whole class while students copied notes from the whiteboard; and
(5) None of the tutors employed peer assessment in their writing classes.
The aforementioned characteristics indicate possible obstacles to using peer assessment in that instructional context, because the existing examination-oriented and teacher-driven instruction seemed to be at odds with peer assessment, which emphasises process orientation and learner-centredness. Therefore, understanding the writing tutors' perceptions of peer assessment is vital to introducing and using peer assessment in that local context.
Participants
Twenty-five Chinese English writing tutors (10 males and 15 females) from five Chinese colleges and universities in two big cities in southern China were invited to attend the interviews. Convenience and snowball sampling strategies were employed. The first batch of participants comprised the writing tutors in the host research institution (the large-scale university), who introduced their teacher friends for additional interviews. The backgrounds of the participants are summarised in Table 1.
Table 1 Background of interviewees
Two vocational colleges (4 interviewees)
- 2 with ten years' teaching experience: 1 teaching first-year English majors; 1 teaching second-year non-English majors
- 2 with one and a half years' teaching experience: 1 teaching second-year English majors; 1 teaching first-year non-English majors
Two small-scale universities (11 interviewees)
- 4 with seven years' teaching experience: 1 teaching second-year non-English majors; 1 teaching second-year English majors; 2 teaching first-year English majors
- 6 with five years' teaching experience: 2 teaching second-year English majors; 2 teaching first-year non-English majors; 2 teaching second-year non-English majors
- 1 with three years' teaching experience: teaching first-year English majors
One large-scale university (10 interviewees)
- Teaching experience: 1 with 15 years; 7 with four years; 2 with one year
- Target students: 2 teaching second-year English majors; 3 teaching third-year English majors; 5 teaching first-year English majors
In total (25 interviewees)
- Institutions: 4 from the vocational colleges; 11 from the small-scale universities; 10 from the large-scale university
- Teaching experience: 3 with ten years or above; 10 with five to seven years; 12 with less than five years
- Target students: 3 teaching third-year English majors; 3 teaching first-year non-English majors; 4 teaching second-year non-English majors; 6 teaching second-year English majors; 9 teaching first-year English majors
Table 1 shows the varied teaching experience and the different student groups taught by the 25 interviewees. The institutions of different scales had distinct entry requirements: the large-scale university required the highest score on the entrance examination to colleges and universities, followed by the small-scale universities and then the vocational colleges. The different entry scores could indicate different levels of students' English language proficiency and English learning motivation. The variety of the interviewees' backgrounds helped to generate a relatively full picture of English writing tutors' perceptions of peer assessment in the researched region.
Research methods
Given that the objective of this study was to collect original and exploratory data on EFL writing tutors' perceptions of peer assessment, semi-structured interviews were employed for three reasons. First, although questionnaires are appropriate for investigating opinions, attitudes, views, and beliefs (Denscombe 1998), questionnaire data are necessarily thin and do not help to understand or explore answers, whereas "the overpowering feature of the interview is the richness and vividness of the material it turns up" (Gillham 2000, p. 10). In other words, interview data provide more detail and depth than questionnaire data (Ritchie 2003). Second, interview data have relatively higher validity than questionnaire data because they are collected through direct contact with participants, enabling the researcher to check for accuracy and relevance by probing and observing non-verbal communication during interviews (Denscombe 1998). Finally, and most importantly, semi-structured interviews allowed dialogic discussions about peer assessment between the interviewees and the researcher. This was particularly essential for the current study considering the underuse of peer assessment in the local educational context. Semi-structured interviews helped to reach a shared understanding of peer assessment between the researcher and the interviewees.
It was important to establish a shared understanding of peer assessment. Firstly, the ultimate purpose of the current project was to introduce peer assessment to the local context for formative purposes, following Hu's (2005) definition of peer assessment as involving learners in reading, critiquing, and providing feedback on each other's writing to improve immediate textual and writing competence over time (pp. 321-22). Therefore, it was vital to understand what hurdles, from the teachers' perspectives, would have to be overcome if peer assessment were introduced for those objectives. Secondly, the alignment of the interviewees' and the researcher's understanding of peer assessment was critical for valid interpretation of the interview data. Finally, the teacher interviewees were keen to know about peer assessment because, at the time the research project was carried out, the Ministry of Education in China was promoting the use of peer assessment for English language teaching in higher education, but little instruction had been provided.
Three broad questions were asked to guide the semi-structured interviews.
(1) What is your understanding of peer assessment?
(2) Would you consider using peer assessment in your writing classes?
(3) Do you think peer assessment is an appropriate writing pedagogy for your
students? Why do you think so?
The first question was to elicit interviewees' understanding of peer assessment, which led to discussions about different forms of peer assessment and potential steps in using peer assessment. Based on these discussions, interviewees were asked about their possible (un)use of peer assessment in their writing classrooms (Q2). Their reasons for using or not using peer assessment were elicited via Q3 to elucidate the appropriateness or inappropriateness of peer assessment for local instruction contexts.
The interviews were conducted in the participants' L1 (i.e. Chinese), as requested by the interviewees. The use of the mother tongue helped to enhance the depth of the interview data. More importantly, as a shared language it reduced the distance between the interviewer and the interviewees and encouraged the interviewees to openly discuss the pitfalls of the Chinese education system and the obstacles to their use of peer assessment.
Each interview lasted approximately one and a half hours, allowing the generation of thick and rich data. The interviews were audio-recorded and transcribed verbatim. The data were then thematically analysed via NVivo 10 until no new themes emerged.
Results
The results are reported in the order of the interviewees' understanding of peer assessment (Q1), their potential (un)use of peer assessment (Q2), and their perceived appropriateness of peer assessment for their EFL writing instruction (Q3).
Teachers’ narrow understanding of peer assessment
The interview data revealed teachers' limited understanding of peer assessment. Most of the teacher interviewees (18 of the 25) viewed peer assessment as students grading each other's writing. One of the tutors in the large-scale university equated peer assessment with peer grading by stating:
The meaning of assessment, I am accustomed to meaning evaluation, or assigning a grade or grading a performance. (Interviewee 1)
Their understanding of assessment as grading is not surprising in view of the existing teacher assessment practice. For all teacher interviewees, teacher assessment solely involved assigning a mark to student writing with few written comments, mainly due to the large number of students each tutor had (at least 80), as explained by the interviewees. The remaining seven tutors believed that, apart from giving a mark, peer assessment could also be used for students to spot problematic areas in each other's writing, such as spelling and grammatical errors. None of the teacher interviewees mentioned benefits arising from the process of peer assessment, such as learning from each other's writing via reading and commenting (Lundstrom and Baker 2009) and developing higher-order thinking and critical skills (van Zundert, Sluijsmans and van Merriënboer 2010).
Their narrow understanding of peer assessment was expected considering the limited instruction and training in peer assessment they had received. All the interviewees admitted that they knew little about peer assessment, which had not been discussed as an alternative teaching method in their institutions. Most of them viewed peer assessment as a 'westernised' teaching method suited to autonomous learners in small classes. Limited instruction in peer assessment leading to a narrow understanding of it echoed Harris and Brown's three case studies in New Zealand: participants in their study showed a limited understanding of the various roles of peer (and self) assessment for learning due to insufficient instruction in understanding and using peer assessment (Harris and Brown 2013).
To expand their understanding of peer assessment beyond peer grading or error spotting, the researcher introduced a four-step peer assessment procedure (i.e. reading, commenting, discussing and revising) during the interviews and invited the interviewees to consider the potential benefits of each step and its possible use in their writing classes.
Hesitation in using peer assessment
Despite the various benefits of peer assessment perceived by the interviewees (e.g. learning via reading and commenting, practising spoken English, and improving writing quality if feedback was correct), unanimous hesitation in using peer assessment was expressed by all interviewees. Only five writing tutors said they might employ peer assessment, and only if requested by senior members of their institutions. All the remaining interviewees indicated a low likelihood of using peer assessment on account of their limited knowledge of how to use it and its potential negative impact on their current teaching practice. One writing tutor from one of the vocational colleges explained:
Introducing something new to the classroom asks for lots of preparation, let alone the change of my and students' roles in the classroom. It's safe and easy to just follow what I am doing at the moment. After all, my writing tutors taught me in this way when I was a student and I prefer to stay in my comfort zone. (Interviewee 22)
Similar viewpoints were reiterated during the interviews, suggesting teachers' low motivation to engage with peer assessment, as reported in Adachi et al. (2017). Their reluctance to use peer assessment was further explained by their perceived inappropriateness of peer assessment for local writing instruction.
Inappropriateness of peer assessment for English writing instruction
Four salient reasons were presented by the interviewees to explain the inappropriateness of peer assessment for instruction: (a) the incompatibility of peer assessment with the examination-oriented education system, (b) learners' developing language proficiency, (c) learners' low English learning motivation, and (d) the conflict of peer assessment with the teacher-driven learning culture.
Incompatibility of peer assessment with the examination-oriented education
system
All the teacher interviewees contended that the examination-oriented education system made it unlikely that they would use peer assessment in their instruction. During the interviews, the writing tutors restated the importance of preparing students for diverse types of examinations (e.g. mid-term and final-term examinations, English proficiency certificates and other nationwide high-stakes examinations). Since exams were heavily syllabus-based, they emphasised the necessity of completing the syllabi before exams took place. However, they were worried that the time-consuming conduct of peer assessment would take up class time and stop them from finishing the syllabus before exams. Ms Cheng, an English writing tutor of over ten years in the large-scale university, explained:
The curriculum makes it impossible to use peer assessment. We don't have enough time to involve students in it because we must finish the teaching tasks in the syllabus within the 90-minute class time so that students could be ready for their exams. (Interviewee 2)
Likewise, Miss Yan, a second-year writing tutor in one of the vocational colleges, stated that:
The curriculum designed by the department must be completed within the term time. Peer assessment as a time-consuming activity will lead to the failure of finishing the tasks covered in the heavy curriculum and later on tested in exams. In this sense, it is not surprising that peer assessment is not popular among Chinese English writing tutors. (Interviewee 18)
In addition, all interviewees elucidated that peer assessment was more time-consuming yet less effective than teacher assessment for preparing students for exams. The existing teacher-fronted pedagogy allowed them to cover more content within the limited class time than peer assessment would. They further stated that spending limited class time on peer assessment could result in students' low achievement in their exams, which would consequently damage their reputation among students and risk their positions in their institutions. Miss Zheng in the small-scale university made this point saliently by arguing that:
The institution and students measure our teaching quality based on students' performance in exams. The higher the marks they obtain, the higher our reputation as teachers builds up. I think none of us could afford to spend time on peer assessment and risk students' exam performance. After all, no one would judge my teaching based on whether or not I use innovative teaching methods such as peer assessment. They judge my teaching based on how well my students perform in their exams. (Interviewee 11)
In addition to the examination-based evaluation of teaching performance, learners' examination-driven learning style was listed as another aspect of the incompatibility of peer assessment with the examination-oriented education system. Ms Cheng observed a dramatic change in her students' motivation to learn English writing before and after the Test for English Majors (Band 4):
I don't understand how this could happen. Before they sat the Test for English Majors Band 4, they were so diligent to learn how to write a good essay. However, after the Test for English Majors Band 4, they seemed to lose their motivation and started to be absent from classes. Their learning styles are so pragmatic. I mean they seem to learn for passing examinations rather than learn for knowledge. Peer assessment would not contribute to the examination that much, so I think students might not take it seriously. (Interviewee 8)
It is worth pointing out that the Test for English Majors (Band 4) determined whether English majors could obtain their bachelor's degree. Cheng's views revealed that examination-driven learning could demotivate learners from getting involved in peer assessment because of its limited role in exams.
Similarly, Ms Lu, a writing tutor for freshmen in the large-scale university, complained about the "tedious and unrewarding chore" (Hyland 1990, p. 279) of commenting on students' writing:
None of us are willing to take the writing module. I taught the module because I was on maternity leave last semester and was left with no choice but the writing module this semester. Teaching writing is a tedious and unrewarding job because you work hard but students don't feel that way. We spend a lot of time commenting on students' work, but they pay little attention to it and don't make much progress in their writing. They keep making similar mistakes and they seem not to be interested in writing at all. Writing is not a one-day job. They are more willing to spend time on other aspects such as reciting vocabulary to achieve high marks in examinations. Peer assessment is not valuable for examinations, so I doubt its popularity among students. (Interviewee 4)
Lu's statement shows that the examination-oriented learning style shifted students' attention from learning to exams and that the limited value of peer assessment for exams could make it unpopular among students.
The discussions above imply that the incompatibility of peer assessment with examination-oriented teaching and learning styles could bring about its limited use. The finding corroborated Panadero and Brown's (2017) assertion about the constraints that systemic realities place on the implementation of peer assessment. In the current study, the dominant role of exams in education seemed to constitute an essential part of the systemic realities that impeded teachers from using peer assessment.
Constraint of students’ developing English language proficiency
Students' developing English language proficiency was presented as the second most frequently cited reason for the underuse of peer assessment. All but two interviewees claimed that their students were not sufficiently proficient to provide correct peer feedback, despite students at the large-scale university having obtained an average of 120 out of a total score of 150 in the English entrance exams. This was made particularly salient by interviewees from the vocational colleges and the small-scale universities, where students had a lower level of English language proficiency than those in the large-scale university. They highlighted that peer assessment was useful only if the students had sufficient English language knowledge to make correct judgements on peers' writing; unfortunately, their students did not have that level of language capability. For instance, Miss Li, who taught writing to tourism students in one of the vocational colleges, explained:
My students' English proficiency is too low. This makes it impossible to use peer assessment with them because it is hard for them to find mistakes for their peers; instead, they might provide wrong advice. (Interviewee 25)
Teachers' worries about students' limited English knowledge for providing correct feedback aligned with students' concerns about invalid peer feedback reported in other studies (Ngar-Fun Liu and Carless 2006; Kaufman and Schunn 2011; Wang 2014).
Developing English language proficiency was also believed to result in learners disregarding peer feedback. Ms Cai, a writing tutor who had taught English for ten years in one of the vocational colleges, elucidated:
To use peer assessment, the pre-condition is to improve students' English level. But it is nearly impossible to improve their English proficiency within a brief period. Because their English is not good, they might mislead their peers by providing incorrect feedback. Their classmates may also not trust the feedback they've received from peers. In this case, they would regard peer assessment as a waste of time. (Interviewee 23)
Two messages can be derived from her assertion: students' low level of English proficiency could make (a) students incapable of providing correct peer feedback and (b) peers doubt the reliability of peer feedback and thus refuse to use it in revisions. The latter has also been asserted by Nelson and Murphy (1993, p. 136), who argued:
English is not the native language of L2 students. Because L2 students are still in the process of learning English, they may mistrust other learners' responses to their writings and, therefore, may not incorporate their suggestions while revising.
To avoid misleading students with invalid peer feedback, seven writing tutors suggested that teachers check peer feedback before students used it in revisions. However, this would result in extra assessment time and a heavier workload, and thereby make peer assessment even more time-consuming. Ms Gao explained:
If we have to check on the correctness of peer feedback, why don't we spend that time providing teacher feedback which would be more helpful than peer feedback? Plus, it is embarrassing and discouraging for students whose feedback was marked as wrong comments. Their peers wouldn't trust their feedback in subsequent writing tasks. (Interviewee 9)
Gao's assertion substantiated teachers' unawareness of the complementary role of peer feedback to teacher feedback (Villamil and De Guerrero 1998; Hyland and Hyland 2006; Miao Yang et al. 2006; Zhao 2010). Her argument also indicated the complex social and cultural phenomena underlying the use of peer assessment: the face-threatening issue related to incorrect feedback and students' low tolerance of mistakes in the learning process. As Harris and Brown (2013) asserted, unless mistakes are considered learning opportunities, neither the implementation nor the effectiveness of peer assessment is viable.
Constraint of students’ low English learning motivation
Interviewees reported students' low English learning motivation as another main aspect of the inappropriateness of peer assessment. They believed that low motivation would lead to a lack of commitment to peer assessment. This was especially highlighted by tutors who taught non-English majors and senior English majors. For example, Miss Zhang, a writing tutor in one of the vocational colleges, suggested that students' high motivation for learning English as a key exam subject in their secondary schools "disappeared" after they entered college, where English played a less decisive role in their academic performance; students' low motivation led to their lack of engagement with time-consuming learning tasks, and peer assessment was one of those tasks. Similarly, Miss Wang, another writing tutor in the small-scale university, argued that:
Some students don't submit their assignments on time even though they were told the mark assigned to each assignment would be counted as a part of the final score. Some students even did not come to the class. If you let them take responsibility for their own learning, I believe they would put their responsibility aside and do something entirely unrelated to study. With such a learning attitude, it was not possible to ask these students to do peer assessment. It might work with students who are enthusiastic about learning English such as those in a high-ranked university, but it would not work with most of my students here. (Interviewee 17)
The negative impact of students' low learning motivation on teachers' use of peer assessment resonated with Jacobs and Ratmanida's (1996) finding: the Asian EFL teachers in their study believed that learners' lack of motivation to learn English made group work less appropriate in their contexts. This is theoretically supported by Okada, Oxford and Abo (1996), who indicated that low learning motivation could decrease language learners' impetus to get involved in, and their effort to learn, the language. These viewpoints have also been substantiated by Cheng and Warren's (2005) observation of peer assessment among 51 college ESL learners in Hong Kong: more highly motivated students made more realistic peer assessments.
Conflict with the existing teacher-driven culture of learning
The teacher interviewees postulated a conflict between learner-centred peer assessment and the existing teacher-dominated learning culture. All the interviewees articulated that their students had been exposed to teacher-led learning since kindergarten. The prolonged teacher-driven learning culture would make students lack the confidence and skills to provide peer feedback. Mr Zheng from the large-scale university illustrated this vividly:
If I asked a student to write a paragraph and project it on the wall and said "Ok, he wrote this. What do you see is wrong with it?", nobody would say anything, because students have not yet learned how to be thoughtfully critical of their peers and how to say "well, this is not the best sentence you've ever written, let's work on it and fix it", because they are afraid of hurting their peer's feelings. (Interviewee 9)
Zheng's statement corroborated previous studies which asserted that students from a collectivist culture, such as Chinese students, might refrain from giving critical comments to avoid tension and disagreement with peers and to maintain group harmony (Carson and Nelson 1994; 1996; Nelson and Carson 1998). However, similar results have been reported in other cultural contexts. For instance, Harris and Brown (2013) reported peer feedback given out of sympathy and affection, which worried learners and teachers in New Zealand.
On the other hand, Miss Wu, a teacher in the large-scale university, indicated that both students and teachers were not ready to trust students to assess each other's writing. She said:
They were told to learn by themselves since primary school, because the large class size made it impossible for teachers to cater to each student's needs. They had to find a way that suited them to become efficient learners. To make them work in groups, they need time to adjust to each other. Teachers also have been used to evaluating students' writing. They would worry how students would react to each other's writing and whether they would provide incorrect feedback for their peers. (Interviewee 6)
The teachers' perceived incompatibility of teacher-driven learning with peer assessment seemed to corroborate students' own explanations of their reluctance to participate in peer assessment owing to their long exposure to a teacher-driven culture of learning (Nelson and Carson 2006).
Jin and Cortazzi (1998, p. 749) defined culture of learning as follows:
A culture of learning might be defined as socially transmitted expectations, beliefs,
and values about what good learning is. The concept draws attention to the usually
taken-for-granted cultural ideas about the roles and relations of teachers and
learners, and about appropriate teaching and learning styles and methods, about the
use of textbooks and materials, and about what constitutes good work in
classrooms.
A vital aspect of the culture of learning in the researched context was that teachers were regarded as the legitimate agents with the expertise and social status to judge the quality of student work, while learners were regarded as lacking the knowledge and power to do so.
Nelson and Murphy (1993, p. 136) explain:
In China, for example, the teacher is traditionally viewed as an authority figure, the possessor of knowledge, and the one who is responsible for responding to students' work (Hudson-Ross & Dong, 1990). L2 students who view the teacher as 'the one who knows' may ignore the responses of other students, not merely because English is the respondents' second language, but because of the perception that fellow students are not knowledgeable enough to make worthwhile comments about their work. If learners do not value other students' comments, they may not take them into consideration when revising.
Their viewpoints were substantiated in a later study (Nelson and Carson 2006), which suggested that previous teacher-driven writing assessment made students hesitant to provide peer feedback or to negotiate with writers over its use in their revisions.
Furthermore, Mr Li, a writing tutor in the large-scale university, argued that the use of peer assessment could tarnish teachers' image:
The Chinese students have been used to viewing their writing tutor as the one who should be responsible for assessing students' writing. If you asked them to assess each other's work, they might think the teacher was shirking his responsibility as a teacher. (Interviewee 7)
Likewise, four interviewees from the vocational college and the small-scale university worried that the use of peer assessment could make their students consider them 'lazy' in marking their work. The results echoed Liu and Carless's statement about the disruption of power relations between teachers and students caused by the use of peer assessment (Ngar-Fun Liu and Carless 2006). In other words, empowering students to judge peer writing could challenge the traditional view of the teacher as the sole legitimate assessment agent for student work. As students in Harris and Brown (2013) claimed, peer assessment transformed classroom social relationships and shifted control and responsibility between teachers and students.
Discussion and implications
The current study interviewed 25 writing tutors across five universities and colleges in a region of southern China to elicit their perceptions of the appropriateness of peer assessment for local EFL writing instruction. The results showed their narrow understanding of peer assessment as peer grading and their hesitation in using peer assessment in their writing instruction. The former could be explained by their inexperience and lack of instruction in understanding and using peer assessment. The latter was largely elucidated by the perceived incompatibility of peer assessment with the examination-oriented education system, learners' low English language proficiency and English learning motivation, and the conflict of learner-centred peer assessment with the entrenched teacher-driven culture of learning in China.
It is worth noting that these factors are intertwined. The examination-driven education system has generated an examination-oriented learning and teaching style; therefore, the use of peer assessment will largely depend on its effectiveness in preparing students for their examinations. Peer assessment has been viewed by teachers as more time-consuming yet less effective than teacher assessment in preparing students for exams. Consequently, its use in instruction has been restricted. The entrenched teacher-driven and examination-oriented learning culture also leads to a high expectation of accuracy and thus a low tolerance of mistakes among Chinese students and teachers. As a result, worries about incorrect peer feedback deter learners and teachers from accepting and conducting peer assessment.
The findings have exemplified and expanded Liu and Carless's (2006) four barriers to teachers' use of peer assessment in Hong Kong, namely: the reliability of students' judgements on peer writing, teachers' expertise, the disruption of power relations between teachers and students, and time and resource constraints. The current study added four further obstacles to introducing and using peer assessment: (a) teachers' reluctance to change their current practice, (b) the perceived lower effectiveness of peer assessment compared with teacher assessment in preparing students for exams, (c) learners' previous teacher-driven learning experience, and (d) the systemic reality of judging learners' and teachers' performance by exam marks. The findings have provided qualitative evidence for Panadero and Brown's (2017) survey result that teachers' positive attitudes increase the possible use of peer assessment; the current study ascertained through interviews that teachers' negative attitudes towards peer assessment led to its infrequent use. The reasons for the infrequent use of peer assessment in the current study provide useful implications for teacher training in using peer assessment.
Implications for training in peer assessment
Firstly, writing tutors need to develop a full understanding of peer assessment, including its diverse roles in facilitating learning. As Liu and Carless (2006) stipulate, understanding peer assessment as peer grading can severely undermine the potential of peer feedback for improving student writing. Instead of equating peer assessment with assigning a mark, writing tutors need to understand that marking can only be a starting point of peer assessment: it must be accompanied by thinking about the reasons for giving that mark and communicating those reasons to the writer (Ngar-Fun Liu and Carless 2006). In other words, writing tutors need to regard peer assessment as a process of engaging learners in discussing writing and feedback on it.
Secondly, writing tutors should not overstate the inimical impact of examinations and the teacher-driven culture of learning on peer assessment. Learners' mistrust of peer feedback has been reported across different educational contexts that are not necessarily teacher-driven and examination-dominated (e.g. Villamil and Guerrero 1996 in the Puerto Rican context; Falchikov 1998 in the British context; Kaufman and Schunn 2011 in the United States context; Planas Lladó et al. 2014 in the Spanish context). In particular, in New Zealand, where assessment has relatively lower stakes than in other educational systems, students still prefer teacher feedback and cast suspicion on the usefulness and accuracy of peer feedback (Harris and Brown 2013). Teachers need to be advised that far more students have regarded peer assessment as a pedagogy highly congruent with Chinese learning culture than have questioned its appropriateness for Chinese learners (Hu and Lam 2010). Students' resistance to peer assessment has been found to drop significantly following their participation in peer assessment (Kaufman and Schunn 2011). Moreover, their resistance to peer assessment does not stop them from using peer feedback in their revisions (Hu and Lam 2010; Kaufman and Schunn 2011).
Thirdly, teachers should be instructed in accommodating peer assessment to their local learning and teaching culture. Peer assessment should be designed creatively and flexibly to cater to different pedagogical purposes. For instance, in an examination-driven context, peer assessment can be used to help learners internalise the exam assessment criteria; its effectiveness in helping learners understand assessment criteria has been noted by teachers in Harris and Brown (2013). Considering overloaded timetables, peer assessment could be carried out outside the classroom or via computer-mediated communication channels. As far as learners' different levels of proficiency and motivation are concerned, the composition of peer assessment groups could be varied across tasks and self-selected by learners.
Finally, and very importantly, tutors' concerns over peer assessment should be elicited and tackled in training sessions. Opportunities should be created for teachers to try out peer assessment and to express their worries about it based on their experience, and solutions should be invited from fellow teacher trainees to expand their understanding of peer assessment. For instance, the constraint of learners' developing language proficiency can be addressed by providing students with accessible assessment criteria, exemplar peer feedback, and ongoing support from teachers and other resources. Students' motivation for conducting peer assessment can be increased by employing new technologies (e.g. online forums). Teachers' confidence and competence in using peer assessment will increase with ongoing training in, discussion about, and reflection on peer assessment.
Conclusions
The current study has revealed that teachers' perceived appropriateness of peer assessment is substantially influenced by their understanding of peer assessment, the role of examinations and teachers in the existing culture of learning, and teachers' and learners' readiness to accept and adopt peer assessment. The study has demonstrated the value of investigating writing tutors' perceptions of the appropriateness of peer assessment for their local instruction contexts. It sheds light on the underlying reasons for the low uptake of peer assessment and generates evidence for enacting appropriate strategies to support writing tutors to embark on peer assessment. More similar research should be carried out, as the use of peer assessment is context-dependent and distinct strategies need to be discussed within local and national systemic realities.
References
Adachi, C., J.H.-M. Tai and P. Dawson. 2017. Academics' perceptions of the benefits and challenges of self and peer assessment in higher education. Assessment & Evaluation in Higher Education: 1-13.
Baird, J.-A. 2014. Teachers' views on assessment practices. Assessment in Education: Principles, Policy & Practice 21, no 4: 361-64.
Beach, R. and L.S. Bridwell. 1984. The instructional context. In New directions in composition research, eds Beach, R. and Bridwell, L.S., 309-14. New York: Guilford Press.
Berry, R. 2011. Assessment trends in Hong Kong: seeking to establish formative
assessment in an examination culture. Assessment in Education: Principles,
Policy & Practice 18, no 2: 199-211.
Carson, J.G. and G.L. Nelson. 1994. Writing groups: Cross-cultural issues. Journal of
Second Language Writing 3, no 1: 17-30.
Carson, J.G. and G.L. Nelson. 1996. Chinese students' perceptions of ESL peer response
group interaction. Journal of Second Language Writing 5, no 1: 1-19.
Chang, C.Y.-H. 2016. Two decades of research in L2 peer review. Journal of Writing
Research 8, no 1: 81-117.
Cheng, W. and M. Warren. 2005. Peer assessment of language proficiency. Language Testing 22, no 1: 93-121.
Cho, K. and C. MacArthur. 2010. Student revision with peer and expert reviewing. Learning and Instruction 20, no 4: 328-38.
Connor, U. and K. Asenavage. 1994. Peer response groups in ESL writing classes: how much impact on revision? Journal of Second Language Writing 3, no 3: 256-76.
Denscombe, M. 1998. The good research guide: for small-scale research projects.
Buckingham: Open University Press.
Falchikov, N. 1998. Involving students in feedback and assessment. In Peer assessment in practice, ed. Brown, S., 9-23. Birmingham: SEDA.
Freedman, S. 1985. Response to and evaluation of writing: A review. Resources in
Education., ed. 605, EDRSNE.
Gielen, S., E. Peeters, F. Dochy, P. Onghena and K. Struyven. 2010. Improving the
effectiveness of peer feedback for learning. Learning and Instruction 20, no 4:
304-15.
Gillham, B. 2000. The research interview. London: Continuum.
Harris, L.R. and G.T.L. Brown. 2013. Opportunities and obstacles to consider when using peer- and self-assessment to improve student learning: Case studies into teachers' implementation. Teaching and Teacher Education 36: 101-11.
Hu, G. 2005. Using peer review with Chinese ESL student writers. Language Teaching
Research 9, no 3: 321-42.
Hu, G. and S.T.E. Lam. 2010. Issues of cultural appropriateness and pedagogical
efficacy: exploring peer review in a second language writing class. Instructional
Science 38: 371-94.
Hyland, K. 1990. Providing productive feedback. ELT Journal 44, no 4: 279-85.
Hyland, K. and F. Hyland, eds. 2006. Feedback in second language writing: Contexts and issues. Cambridge Applied Linguistics, series eds Long, M.H. and Richards, J.C. Cambridge: Cambridge University Press.
Jacobs, G.M. and Ratmanida. 1996. The appropriacy of group activities: views from some southeast Asian second language educators. RELC Journal 27, no 1: 103-20.
Jin, L. and M. Cortazzi. 1998. Dimensions of dialogue: large classes in China. International Journal of Educational Research 29: 739-61.
Kaufman, J.H. and C.D. Schunn. 2011. Students' perceptions about peer assessment for writing: their origin and impact on revision work. Instructional Science 39, no 3: 387-406.
Lee, M.-K. 2015. Peer feedback in second language writing: Investigating junior secondary students' perspectives on inter-feedback and intra-feedback. System 55: 1-10.
Lei, Z. 2017. Salience of student written feedback by peer-revision in EFL writing class. English Language Teaching 10, no 12.
Liu, N.-F. and D. Carless. 2006. Peer feedback: the learning element of peer assessment.
Teaching in Higher Education 11, no 3: 279-90.
Liu, X. and L. Li. 2014. Assessment training effects on student assessment skills and
task performance in a technology-facilitated peer assessment. Assessment &
Evaluation in Higher Education 39, no 3: 275-92.
Lundstrom, K. and W. Baker. 2009. To give is better than to receive: The benefits of
peer review to the reviewer's own writing. Journal of Second Language Writing
18, no 1: 30-43.
Mangelsdorf, K. 1992. Peer reviews in the ESL composition classroom: what do the
students think? ELT Journal 46, no 3: 274-85.
Nelson, G.L. and J.G. Carson. 1998. ESL students' perceptions of effectiveness in peer
response groups. Journal of Second Language Writing 7, no 2: 113-31.
Nelson, G.L. and J.G. Carson. 2006. Cultural issues in peer response: Revisiting "culture". In Feedback in second language writing: Contexts and issues, eds Hyland, K. and F. Hyland, 42-59. New York: Cambridge University Press.
Nelson, G.L. and J. Murphy. 1993. Peer response groups: Do L2 writers use peer
comments in revising their drafts? TESOL Quarterly 27: 135-42.
Okada, M., R. Oxford and S. Abo. 1996. Not all alike: motivation and learning
strategies among students of Japanese and Spanish in an exploratory study. In
Language learning motivation: pathways to the new century, ed. Oxford, R,
105-19. Honolulu: University of Hawaii.
Panadero, E. and G.T.L. Brown. 2017. Teachers' reasons for using peer assessment: positive experience predicts use. European Journal of Psychology of Education 32, no 1: 133-56.
Paulus, T.M. 1999. The effect of peer and teacher feedback on student writing. Journal
of Second Language Writing 8, no 3: 265-89.
Planas Lladó, A., L.F. Soley, R.M. Fraguell Sansbelló, G.A. Pujolras, J.P. Planella, N.
Roura-Pascual, J.J. Suñol Martínez and L.M. Moreno. 2014. Student perceptions
of peer assessment: an interdisciplinary study. Assessment & Evaluation in
Higher Education 39, no 5: 592-610.
Rahimi, M. 2013. Is training student reviewers worth its while? A study of how training influences the quality of students' feedback and writing. Language Teaching Research 17, no 1: 67-89.
Ritchie, J. 2003. The applications of qualitative methods to social research. In
Qualitative research practice : a guide for social science students and
researchers, eds Ritchie, J and Lewis, J, 24-46. London: SAGE.
Rollinson, P. 2005. Using peer feedback in the ESL writing class. ELT Journal 59, no 1:
23-30.
Scollon, S. 2003. Not to waste words or students: Confucian and Socratic discourse in
the tertiary classroom. In Culture in second language teaching and learning, ed.
Hinkel, E, 13-27. Cambridge: Cambridge University Press.
Van Zundert, M., D. Sluijsmans and J. Van Merriënboer. 2010. Effective peer
assessment processes: Research findings and future directions. Learning and
Instruction 20, no 4: 270-79.
Villamil, O.S. and M.C.M. De Guerrero. 1998. Assessing the impact of peer revision on L2 writing. Applied Linguistics 19, no 4: 491-514.
Villamil, O.S. and M.C.M. De Guerrero. 1996. Peer revision in the L2 classroom: social-cognitive activities, mediating strategies, and aspects of social behaviour. Journal of Second Language Writing 5, no 1: 51-75.
Wang, W. 2014. Students' perceptions of rubric-referenced peer feedback on EFL writing: A longitudinal inquiry. Assessing Writing 19: 80-96.
Yang, M., R. Badger and Z. Yu. 2006. A comparative study of peer and teacher feedback
in a Chinese EFL writing class. Journal of Second Language Writing 15, no 3:
179-200.
Yang, Y.-F. and W.-T. Meng. 2013. The effects of online feedback on students' text
revision. Language Learning & Technology 17, no 2: 220-38.
Yu, S. and I. Lee. 2016. Peer feedback in second language writing (2005–2014).
Language Teaching 49, no 4: 461-93.
Zhang, S. 1999. Thoughts on some recent evidence concerning the affective advantage
of peer feedback. Journal of Second Language Writing 8, no 3: 321-26.
Zhao, H. 2010. Investigating learners' use and understanding of peer and teacher feedback on writing: A comparative study in a Chinese English writing classroom. Assessing Writing 15, no 1: 3-17.
Zhao, H. 2014. Investigating teacher-supported peer assessment for EFL writing. ELT
Journal 68, no 2: 105-19.
... Peer feedback activities are favored by many writing teachers and even though this teaching approach is now widely used mainly in writing teaching around the world, its application rate in China is still relatively low. This is because in China, on the one hand, many teachers lack knowledge of peer feedback implementation processes and learner training methods (Zhao, 2018). On the other hand, Chinese EFL learners lack confidence in their own writing feedback ability due to their long-term reliance on written feedback from teachers (Han, 2017;Zhao, 2010), they also do not trust the ability of their peers to give feedback, which results in low acceptance of peer feedback (Hu, 2012;Zhao, 2018). ...
... This is because in China, on the one hand, many teachers lack knowledge of peer feedback implementation processes and learner training methods (Zhao, 2018). On the other hand, Chinese EFL learners lack confidence in their own writing feedback ability due to their long-term reliance on written feedback from teachers (Han, 2017;Zhao, 2010), they also do not trust the ability of their peers to give feedback, which results in low acceptance of peer feedback (Hu, 2012;Zhao, 2018). The teaching practice of using peer feedback in teaching English writing in Chinese high school is affected by to the actual situation of students where high school students tend to focus on the form of peer feedback and ignore the actual content, resulting in teachers' failure to achieve good results in the actual peer feedback (Tsivitanidou et al., 2017). ...
... However, student-centered feedback approaches once encountered resistance from Chinese English writing teachers. For instance, a prior qualitative study involving 25 Chinese university English writing teachers revealed that teachers were unanimously hesitant to use peer feedback, with the incompatibility of peer feedback and the exam-based educational system being the primary reason (Zhao, 2018). Fortunately, growth mindsets teachers "continue to learn along with students" and tend to be receptive to learning and accepting new ideas (Dweck, 2006, p. 205). ...