Assessment and Academic Writing:
A Look at the Use of Rubrics
in the Second Language Writing Classroom
One of the most important aspects of the job of an English
teacher is giving students the feedback and corrections they need to
improve as second language learners. This is especially true for written
English. In writing classes, the process of providing feedback to students
on their writing takes up significant amounts of time and effort both inside
and outside of the classroom. In order to streamline the feedback process,
teachers often make use of tools, such as rubrics, to help them provide
their students with feedback. Traditionally, rubrics have been seen as tools
that have the potential of “increased consistency of scoring, the possibility
to facilitate valid judgment of complex competencies, and promotion of
learning” (Jonsson & Svingby, 2007, p. 130). However, recent studies in
the L1 writing classroom have shown that there are some significant
problems with the last of these items, using rubrics as a means of
promoting learning. This paper looks at some of the current research on
the use of rubrics in the classroom and attempts to construct a clearer
picture of both the benefits and drawbacks of the use of rubrics for both
grading and as a teaching tool in the L2 writing classroom. It is hoped
that in doing so this research will provide insight into the tools teachers in
Japan are using to respond to their students’ written work and act as a
starting point for further research into how to improve these tools.
Kwansei Gakuin University
Humanities Review
Vol. 17, 2012
Nishinomiya, Japan
Article Contents
I. Introduction
II. Historical Overview of the Rubric
1. Rubrics in the L1 Classroom
2. Rubrics in the ESL/EFL Classroom
III. The Effectiveness of Rubrics in the Language Classroom
1. A case for using rubrics in the L2 writing classroom
2. A case against using rubrics in the L2 writing classroom
IV. Conclusion
1. Discussion
2. Final reflections
Appendix #1
I. Introduction
Written English may be one of the most important skills that students will learn
at university in terms of both their future academic and professional lives. While
many of the students studying English at universities in Japan will never be required
to use English outside of the classroom on a day-to-day basis, they are often asked
to write compositions as part of the language tests that act as gatekeepers to their
future jobs or further studies. The feedback that students receive from their teachers
plays an important role in the students’ development as English language writers.
Because of this, one of the most important questions that we need to ask ourselves
as teachers of written English is: “What is the best way to respond to and guide
students’ writing in a way that enables them to improve in both their current and
future written assignments?” This question is essential, as a large part of the
language teacher’s job in the writing classroom is taken up with grading and
providing feedback to students. In fact, the job of evaluating students’ writing and
giving them feedback is so central to the English language writing class that many
teachers do not even question why this process is necessary or how it should
be done: “Many of the decisions that both L1 and L2 writing teachers make in their
classes revolve around assessment of students’ writing . . . (and) because a culture
of assessment is built into the schooling enterprise, teachers rarely ask whether they
need to assess their students” (Casanave, 2007, p. 113). However, a closer
examination of how and why they are providing feedback to their students is an
essential part of professional development and a vital experience for teachers
interested in improving themselves as educators.
II. Historical Overview of Rubrics
1. Rubrics in First Language Education
Rubrics were first introduced into the L1 writing classroom as a means of
assessing student writing. In the traditional view of how writing should be assessed
there was an assumption that it was possible to come up with some type of
objective score that could be assigned to a student’s composition and that the
validity of this “‘true’ measure of student ability . . .can only be established through
technical and statistical rigor” (Huot, 1996, p.550). (See Figure #1) Because of this
there was a push for researchers and teachers to come up with a set of tools that
allow the reader to assign a valid score to the student’s writing. One of the most
commonly used of these tools is the rubric. A rubric is defined as “a scoring tool
for qualitative rating of authentic or complex student work. It includes criteria for
rating important dimensions of performance, as well as standards of attainment for
those criteria” (Jonsson & Svingby, 2007, p.131). Rubrics have long been a part of
the writing classroom in first language classrooms around the world.
Rubrics were first proposed as a tool to analyze writing in 1912 when Noyes
suggested the use of a rubric as a means of standardizing the evaluation of student
compositions: “Our present methods of measuring compositions are controlled too
much by personal opinion, which varies with the individual. What is wanted is a
clear-cut, concrete standard of measurement which will mean the same thing to all
people in all places and is not dependent upon the opinion of any individual”
(Noyes, 1912, as cited in Turley & Gallagher, 2008, p. 88). Several such scales were
soon developed; the most famous is the Hillegas scale, developed in 1912, which “gave English
Traditional Writing Assessment: Procedures, Purposes and Assumptions

Scoring Guideline
  Purpose: Recognize features of writing quality
  Assumption: Writing quality can be defined and determined

Rater Training
  Purpose: Foster agreement among independent raters
  Assumption: There is one set of features of student writing on which raters should agree

Scores on Papers
  Purpose: Fix degree of writing quality for comparing writing ability and making decisions on that ability
  Assumption: Student ability to write can be coded and communicated numerically

Interrater Reliability
  Purpose: Calculate the degree of agreement between independent raters
  Assumption: Consistency and standardization can be maintained across time and location

Validity
  Purpose: Determine that the assessment measures what it purports to measure
  Assumption: An assessment’s value is limited to distinct goals and properties in the instrument itself

Figure #1: The traditional view of writing assessment (Huot, 1996, p. 551)
teachers the first reliable means of estimating objectively the quality of their pupils’
written production” (Hudelson, 1923, p. 164). In 1915, Thorndike improved upon the
Hillegas scale for grading student compositions by “substituting new specimens for
certain of the original samples and by including several examples in the steps at or
near the middle of the scale” (Hudelson, 1923, p.164).
One of the key benefits of these, and other rubrics, is that they are an attempt
to provide some type of inter-rater reliability. This is done as an attempt to get
around “one of the most vexing dilemmas in writing assessment . . . the
inconsistency with which different readers tend to evaluate the same piece of
writing” (Casanave, 2007, p.124). However, it is important to note that these rubrics
“were never designed to (improve student writing) directly, and any who attempt to
employ them for such a purpose are certain to be disappointed” (Hudelson, 1923,
p. 163). In fact, the early rubrics developed by Noyes and his contemporaries were
designed not for the student but for the administrator, as a means “to provide a
standardized form of measurement that would allow administrators and investigators
to ‘measure and express the efficiency of a school system’1) so that comparisons and
rankings could be made between schools across the nation” (Turley & Gallagher,
2008, p. 88).
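A minimal sketch of how this kind of inter-rater reliability can be quantified (the raters, essays, and 1-5 scoring band here are hypothetical, invented purely for illustration; neither the data nor the code comes from any study cited in this paper):

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of essays on which the two raters assign the same band."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement if each rater assigned bands independently,
    # following their own marginal distribution of scores.
    p_exp = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 1-5 band scores from two raters on the same ten essays.
rater_a = [3, 4, 2, 5, 3, 3, 4, 2, 5, 4]
rater_b = [3, 4, 3, 5, 3, 2, 4, 2, 4, 4]

print(round(percent_agreement(rater_a, rater_b), 2))  # 0.7
print(round(cohens_kappa(rater_a, rater_b), 2))       # 0.59
```

Raw agreement looks respectable here, but the chance-corrected figure is noticeably lower, which is one reason assessment research tends to report statistics such as kappa rather than simple agreement.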
However, in the early 70’s, as the “process approach” method for teaching
composition became popular in classrooms around the United States, rubrics had to
evolve from assessment tools into something that could be used to provide
students with feedback on how well their essays met a certain set of criteria and
some insight into what they could do to improve themselves as writers (Ferris, 2009).
In the field of first language composition whether or not the rubric is an effective
tool in providing students with the feedback that they need to improve as writers is
a topic of debate in a variety of academic journals. Researchers have come out both
in support of (H. G. Andrade, 2000; H. L. Andrade, Wang, Du, & Akawi, 2009) and
against (Broad, 2000; Kohn, 2006; Wilson, 2007) the use of rubrics as a means of
providing students with feedback about their written work. One only needs to look at the
sub-title of the 2008 paper by Turley and Gallagher, “Reframing the Great Rubric
Debate”, to see that there are strong feelings on both sides of this issue. One of the
reasons for this debate is that how writing is taught in L1 classrooms has
changed significantly over the years: teachers are no longer being asked simply to
evaluate their students’ writing but to engage in a dialogue with their students in an
attempt to help them improve as writers:
1) Turley and Gallagher take this quotation from Hillegas, Milo B., (1913), A Scale for the
Measurement of Quality in English Composition by Young People. Teachers College, 13(4), 28
a relatively recent shift in writing pedagogy has not translated into a shift in
writing assessment. Teachers are given much more sophisticated and
progressive guidance nowadays about how to teach writing but are still told to
pigeonhole the results, to quantify what can’t really be quantified. Thus, the
dilemma: Either our instruction and our assessment remain “out of synch” or
the instruction gets worse in order that students’ writing can be easily judged
with the help of rubrics. (Wilson as cited in Kohn, 2006, p.14)
2. Rubrics in the ESL/EFL Classroom
Many techniques have entered the second language writing classroom by way
of the first language composition classes because, as “Silva, Leki and Carson (1997:
399) point out, ‘second language writing is situated at the intersection of second
language studies and composition studies’ . . . (and) work that has focused
exclusively on L1 writing assessment contributes greatly to our understanding of
both the process and the product of L2 writing assessment” (Kroll, 1998, p.222).
The rubric is one of the tools that L2 teachers have borrowed from their
colleagues teaching in the L1 writing classroom. Similar to how they were used in
first language classrooms, rubrics began in second language writing programs as a
means of providing teachers with a standardized way to evaluate their students’
writing. They were also used as a tool to facilitate the placement of students at the
appropriate level. In fact, today “most, if not all writing programs have entry and
exit criteria or grading rubrics to guide teachers at various levels of the program”
(Ferris, 2009, p. 121).
In the field of second language writing the ECP, or ESL Composition Profile
(see Appendix #1), is probably one of the most recognizable rubrics and “(i)t, or its
offspring, will be familiar from workshop handouts or Xeroxes left behind in faculty
coffee rooms” (Haswell, 2005, p.107). This rubric was developed in 1981 using
research taken from the compositions of first language students. Three researchers
from Educational Testing Services (ETS) took research done in 1953 on the grades
and comments on the written assignments of first-year students studying at
Middlebury College, Cornell and the University of Pennsylvania to come up with a
rubric that was composed of five main traits. Each of these traits was then broken
up into a number of sub-traits that the researchers believed could then be used to
objectively grade English compositions written by second language speakers
(Haswell, 2005). One benefit of the ESL Composition Profile (Jacobs et al., 1981) is
that it has been established to have a high degree of both internal and external
validity, with scores given on the rubric shown to be both consistent between
raters and “highly correlated with (students’ scores on) the TOEFL and
Michigan Test Battery” (Bacha, 2001, p. 374). It is no accident that this rubric was
designed by researchers working for a testing organization as the ECP provides
these testing services with an invaluable tool that allows them to grade a large
number of student essays using multiple raters while still maintaining a high level of
inter-rater reliability. Because of this, other ESL/EFL testing companies are now
also using rubrics as a means of grading the writing component of their tests. While
the traits may vary from test to test (see Figure 2) the underlying rationale and
principles remain the same.
In most writing classrooms the teacher has no need for this type of inter-rater
reliability and is more likely to be interested in the pedagogical value of the rubric
and the benefits of students accessing the rubrics to improve the quality of their
writing. However, the debate about the pedagogical effectiveness of rubrics that is
being played out in first language classrooms and research journals is only just
Main Traits of Scoring Rubrics for Six Tests of ESL Writing

Test in English for Education Purposes (Associated Examining Board): [traits not legible in source]
Certificate in Communicative Skill in English (Royal Society of Arts / University of Cambridge Local Examinations Syndicate): Accuracy [of mechanics]; Range [of expression]; Complexity [organization and cohesion]
Test of Written English (Educational Testing Service): [traits not legible in source]
Michigan English Language Battery: Topic development; Organization/coherence
Canadian Test of English for Scholars and Trainees: Content; Language use
International English Language Testing System: Register; Rhetorical organization

Figure 2: Traits measured by various rubrics used in standardized ESL/EFL tests (Haswell, 2007, p. 8)
reaching the field of second language writing. While rubrics are mentioned in both
texts and journals devoted to the study of second language writing (Bitchener,
Young, & Cameron, 2005; Ferris, 1995; Hyland, 2010) they are usually mentioned
in passing as one of a number of possible assessment tools with very little time
given to the analysis of their effectiveness as tools for rating and improving student
writing. For example, a look at the issues of the Journal of Second Language
Writing over a 4 year period, from 2008 to 2011, reveals only 9 original research
articles that even mention rubrics and, in all of these articles, rubrics are used
unquestioningly as a tool for evaluating students’ written work. In fact, a further
search of this journal reveals only 2 articles from 1992 to 2011 that actually
question the effectiveness of rubrics (Paulus, 1999; Weigle, 2007). One of these,
Weigle, simply mentions the current controversy that exists in first language writing
about the use of rubrics before dismissing the issue without providing any sources
or evidence for her position: “while holistic scales are faster and more efficient,
analytic scales tend to be somewhat more reliable than holistic scales, and certainly
provide more useful feedback to students, as scores on different aspects of writing
can tell students where their respective strengths and weaknesses are” (2007).
This view is changing and second language researchers such as Haswell (1998,
2005) are beginning to ask if rubrics are the best tools for language teachers to use
as a means of improving their students’ ability to write. However, as with the use of
rubrics in first language composition, this is not the type of question that allows
researchers to come down either in favor or against the use of rubrics in the
classroom. In their article “On the Uses of Rubrics”, Turley and Gallagher (2008)
point out that, “instead of declaring all rubrics ‘good’ or ‘bad,’ we need to examine
what they do, why, and in whose interest” (p. 92). They propose a four-point heuristic
to analyze the value of rubrics (or any pedagogical tool):
1. What is the tool for?
2. In what context is it used?
3. Who decides?
4. What ideological agenda drives those decisions? (Turley & Gallagher, 2008)
It is these four questions that provide a starting point for the evaluation of the
effectiveness of rubrics in English language writing classes at Japanese universities.
III. The Effectiveness of Rubrics in the Language Classroom
1. A case for using rubrics in the L2 writing classroom
While they might have their drawbacks, rubrics can be useful tools in the
language classroom. Along with setting relevant tasks, providing a clear topic and
prompts, helping students to choose appropriate rhetorical modes, and giving
students adequate time to complete the writing task, setting appropriate scoring
criteria and attaining valid and reliable scores are essential elements of a
successful English writing program (Jacobs et al., 1981). Rubrics can help teachers
to achieve this goal as they set clear criteria for both the students and the teacher
when it comes to grading written work.
Furthermore, rubrics also make it possible to evaluate components within a
written assignment, such as rhetorical structures, grammatical accuracy and the
ability to stay on topic. This is especially important for second language writers as
the level of the various skills essential to writing can vary significantly from student
to student. In the L2 writing classroom we are much more likely to see “varying
levels of proficiency/skill in different aspects of the product (and) products can vary
widely across genres” (Kroll, 1998, p.224). Because second language writers are
more likely to show varying levels of performance on the different traits “if we do
not score for these traits and report the scores, much information is lost” (Hamp-
Lyons, 1995, p.760). Rubrics make it easier for the teacher to record a score for
each of these traits, or sub-traits, so that students are able to receive feedback on,
and improve, the areas that require attention.
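To make the arithmetic of trait-level scoring concrete, here is a brief sketch. The trait names echo the ESL Composition Profile, but the per-trait maximum points and the sample essay scores are illustrative assumptions of mine, not the published instrument:

```python
# Illustrative analytic scoring in the style of the ESL Composition Profile.
# The per-trait maximums below are assumptions for illustration only.
MAX_POINTS = {
    "content": 30,
    "organization": 20,
    "vocabulary": 20,
    "language use": 25,
    "mechanics": 5,
}

def total_score(trait_scores):
    """Validate each trait score against its maximum and sum to a composite."""
    for trait, score in trait_scores.items():
        if trait not in MAX_POINTS:
            raise ValueError(f"unknown trait: {trait}")
        if not 0 <= score <= MAX_POINTS[trait]:
            raise ValueError(f"{trait} score {score} out of range")
    return sum(trait_scores.values())

# A hypothetical student essay, scored trait by trait.
essay = {"content": 24, "organization": 15, "vocabulary": 16,
         "language use": 18, "mechanics": 4}
print(total_score(essay))  # 77
```

The point of the paragraph above is visible in the dictionary itself: reporting only the composite score of 77 hides that this hypothetical essay is near ceiling on mechanics but weaker on organization, which is exactly the trait-level information Hamp-Lyons warns is lost when scores are not reported separately.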
Another advantage of rubrics in the second language classroom is that they
help the teacher or evaluator focus on more than just the sentence level structures
found in the written assignment. In a 1993 study that involved six graders
evaluating six samples of student writing Sweedler-Brown found that when a
holistic scale was used to grade two sets of essays, one set that had the grammatical
and spelling errors already corrected by the researcher and one set that had not been
corrected, those essays that had poor mechanics consistently got lower scores,
regardless of the proficiency of the rhetorical structures used in the essay. However,
she found that graders who used a rubric to evaluate the essays were shown to focus
more on “the high quality of the essays’ organization and paragraph development
(and) were not distorted by the different qualities of the sentence-level features in
the original and corrected essays” (Sweedler-Brown, 1993, p.11). The ability to
focus on and encourage students to improve their discourse and rhetorical skills is
essential if we want our students to become better writers of English as a second
language.
2. A case against using rubrics in the L2 writing classroom
While there are many positive things to be said for using rubrics in the
classroom the use of rubrics in the second language writing class is not without
problems. Recently a number of researchers have begun to raise “significant
concerns about the consequences of writing assessment and the ways in which
assessment practices sometimes seem to be antithetical to teaching practices” (Kroll,
1998, p. 222). For example, Haswell (2005) notes that one of the big issues with
rubrics is that they do not solve the problems involved in holistically grading an
essay, the very problems they were designed to address. Rather, a rubric with five
traits would simply be asking “the rater to perform the holistic (rating) five times” (Haswell,
2005, p. 107). Also, while the rubric may succeed in grading how well the writer has
met the criteria set out by the rubric, it does not do a good job of taking
individuality into account and rubrics will often penalize the use of creativity,
humor, or clever writing. These are often things that second language writing
teachers are trying to encourage in their students. With second language learners the
problem of tailoring their writing to meet the criteria laid out in the rubric is often
compounded by the fact that second language students are often not aware of the
different genres that the rubrics may have been developed to evaluate, genres that
first language speakers are exposed to from an early age. As such, second language
writers will often answer an essay question in a different way than a native speaker
would, and subsequently receive a lower score on a rubric that is designed for, or
by, L1 writers. However, “(t)here is no single written standard that can be said to
represent the ‘ideal’ written product in English . . . we cannot easily establish
procedures for evaluating ESL writing in terms of adherence to some model of
native-speaker writing” (Kroll, 1990, p. 141).
Another problem, for both L1 and L2 students, is that they may not know how
to use rubrics to improve their writing. Because rubrics are usually designed as a
way of rating students, those students are often not provided with adequate training
on how to use the rubric to improve their writing skills. This problem is further
compounded in the field of second language writing as many of the rubrics being
used are based on rubrics that were designed for first language speakers and are
often incomprehensible to L2 learners as they may contain information about traits
or metaskills that the L2 writer is unable to understand.
IV. Conclusion
1. Discussion
So what can language teachers do? Should we be incorporating rubrics into our
writing classes? Well, as I have already stated, there is no simple yes or no answer.
Rubrics can provide both teachers and students with a valuable tool for improving
students’ second language writing. However, there are some steps we should be
taking to make sure that the rubrics we use are providing our students with the
support they need to develop as English language writers. To begin with, it is
important that the rubrics we use in the classroom are developed for the type of
assignment we are asking the students to perform. Many teachers take a
one-size-fits-all approach to grading rubrics, often using a modified version of the same rubric to
grade a wide variety of assignments or making use of one of the standardized
writing rubrics that can be found in language teaching books. This approach can
often lead to confusion on the part of both the rater and the students as the rubric
may not be designed to evaluate the traits that the teacher is hoping to see in his or
her students’ compositions. Teachers need to take into account “the purpose of the
essay task, whether for diagnosis, development or promotion . . . in deciding which
scale is chosen. Revisiting the value of these scales is necessary for teachers to
continue to be aware of their relevance” (Bacha, 2001, p.371). In fact, the most
effective rubrics are those that are “developed on-site for a specific purpose with a
specific group of writers and with the involvement of the readers who will make
judgments in [that] context” (Hamp-Lyons, 1991c, p. 248, as cited in Kroll, 1998, p. 228).
However, “the downside of this sort of procedure is that for a thorough analytic
judgment, each writing assignment would need to be scored on a specifically created
assessment instrument” (Kroll, 1998, p. 228), the creation of which can be a
time-consuming process.
Another issue with the use of rubrics in the second language classroom is their
accessibility to the students. If the students are unable to comprehend the categories
and sub-categories contained in the rubric, they will not be able to use it in any
meaningful way to improve their writing abilities. Even when students are able to
understand the rubric they may not understand how it relates to their composition,
or be unaware of how to use the information provided by the rubric to improve their
writing. The solution to this problem goes beyond just teaching students how to
read the rubric. As they are stakeholders in the writing and assessment process the
most effective rubric is one that has been “created with (the) students and reflects
their values, goals, and language” (Turley & Gallagher, 2008, p. 90). While this can
also take up time, it is an essential part of the process of using rubrics in the
classroom, as it is the only way to ensure that both the students and the teacher
understand the nature of the assessment, and it ensures a solution that links together
the concerns of the various “stakeholders” in the assessment process.
2. Final Reflections
While the analysis of the use of rubrics in the second language classroom is still
in its infancy I believe that it will become more important in the future. Similarly to
what is happening now in the field of first language writing, both teachers and
researchers working in the field of L2 composition will, in the near future, be forced
to look at the tools they are using to assess their students and decide if these tools
are doing the job for which they were designed. This should not be viewed as a
negative trend as “rating process research can help us learn more about and improve
writing teachers’ everyday feedback practices” (Connor-Linton, 1995, p. 765), which
in turn will enable us to better help our students reach their full potential as second
language writers.
Appendix #1: Example of ESL Composition Rubric
(Haswell, 2005, p.108)
References
Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-19.
Andrade, H. L., Wang, X., Du, Y., & Akawi, R. L. (2009). Rubric-referenced self-assessment and self-efficacy for writing. The Journal of Educational Research, 102(4), 287-302.
Bacha, N. (2001). Writing evaluation: What can analytic versus holistic essay scoring tell us? System, 29, 371-383.
Bitchener, J., Young, S., & Cameron, D. (2005). The effect of different types of corrective feedback on ESL student writing. Journal of Second Language Writing, 14(3), 191-205.
Broad, B. (2000). Pulling your hair out: “Crises of standardization in communal writing assessment.” Research in the Teaching of English, 35(2), 213-260.
Broad, B., & Boyd, M. (2005). Rhetorical writing assessment: The practice and theory of complementarity. Journal of Writing Assessment, 2(1), 7-20.
Casanave, C. P. (2007). Controversies in Second Language Writing: Dilemmas and Decisions in Research and Instruction. Ann Arbor, MI: University of Michigan Press.
Clary, R. M., Brzuszek, R. F., & Fulford, C. T. (2011). Measuring creativity: A case study probing rubric effectiveness for evaluation of project-based learning solutions. Creative Education, 2(4), 333-340.
Connor-Linton, J. (1995). Looking behind the curtain: What do L2 composition ratings really mean? TESOL Quarterly, 29(4), 762-765.
Ferris, D. (1995). Student reactions to teacher response in multiple-draft composition classrooms. TESOL Quarterly, 29(1), 33-53.
Ferris, D. (2009). Response to Student Writing: Implications for Second Language Students. New York, NY: Routledge.
Hamp-Lyons, L. (1995). Rating nonnative writing: The trouble with holistic scoring. TESOL Quarterly, 29(4), 759-762.
Haswell, R. (1998). Rubrics, prototypes, and exemplars: Categorization theory and systems of writing placement. Assessing Writing, 5(2), 231-268.
Haswell, R. (2005). Researching teacher evaluation of second language writing. In T. Silva & P. K. Matsuda (Eds.), Second Language Writing Research: Perspectives on the Process of Knowledge Construction (pp. 105-120). London: Lawrence Erlbaum Associates.
Hudelson, E. (1923). The development and comparative values of composition scales. The English Journal, 12(3), 163-168.
Huot, B. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47(4), 549-566.
Hyland, K. (2010). Teaching and Researching Writing (2nd ed.). Harlow: Pearson ESL.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2, 130-144.
Kohn, A. (2006). The trouble with rubrics. English Journal, 95(4), 12-15.
Kroll, B. (1990). What does time buy? ESL student performance on home versus class compositions. In B. Kroll (Ed.), Second Language Writing: Research Insights for the Classroom (pp. 140-154). Cambridge: Cambridge University Press.
Kroll, B. (1998). Assessing writing abilities. Annual Review of Applied Linguistics, 18, 219.
Paulus, T. (1999). The effect of peer and teacher feedback on student writing. Journal of Second Language Writing, 8(3), 265-289.
Sweedler-Brown, C. O. (1993). ESL essay evaluation: The influence of sentence-level and rhetorical features. Journal of Second Language Writing, 2(1), 3-17.
Turley, E., & Gallagher, C. (2008). On the uses of rubrics: Reframing the great rubric debate. English Journal, 97(4), 87-92.
Weigle, S. (2007). Teaching writing teachers about assessment. Journal of Second Language Writing, 16(3), 194-209.
Wilson, M. (2007). Why I won’t be using rubrics to respond to students’ writing. The English Journal, 96(4), 62-66.
... Using rubrics in assessing students work have become a common practice in the ESL field due to its many benefits confirmed by previous studies. Using rubrics generates many benefits for teachers as a tool in assessment and learning but also for students as a tool for reflection and a map of getting better at the intended learning areas (Brooks, 2013;Jonsson & Svingby, 2007;Stevens & Levi, 2006). The use of rubrics has been intended to tackle the complexity of scoring performance-based tests. ...
... Rubrics specifically designed to assess writing can be beneficial for both teachers and students. Rubrics serves as both a reminder and tool for teachers to analyse beyond the level structures of students' written work (Brooks, 2013) which often become the whole focus of the assessment with disregard to other aspects of a good composition such as coherence and cohesiveness. Many have tried to classify the components to be assessed as to simplify the process of assessment without disregarding the main purposes of tasks assigned. ...
... Additionally, effective and regular trainings on the use of rubrics for teachers can reduce both interrater inconsistency and teacher's inaccuracy in interpreting the components of the rubrics (Lovorn & Rezaei, 2011;Wang, 2010). Regardless of the debate, rubrics are still widely used as assessment tools for producing valid evaluation of complex competencies (Brooks, 2013). ...
Conference Paper
The benefits of rubrics as teaching learning tools have been identified specifically for performance-based assessment in language. In Indonesia, the ability to produce quality written work has become a necessity to complete higher education but it remained unclear how learning and assessment on this area were conducted. This paper focused on exploring the use of rubrics by four non-native teachers’ working for a private ESL school in Indonesia for assessing students’ writing tasks. The study investigated how the teachers’ current practice and how they approached rubrics for assessing writing by means of both closed and open-ended surveys. Additionally, an analysis of the assessed essay against the rubrics was conducted to identify interrater reliability. The results showed that the teachers had positive attitude towards rubrics, used rubrics regularly and approached rubrics in a similar fashion which was to use them as an assessment tool but not learning tool. There was an identified interrater inconsistency in the scoring results. Additionally, the teachers put a lot more focus on Grammar, Spelling and Punctuation category than on the other two categories (Function & Content, and Cohesion & Coherence). The implication of the study calls for more effective use of rubrics as teaching and learning tools by the teachers as well as the provision of teacher training which enable the teachers to do so and consequently resulting in improvement of interrater reliability. Keywords: rubrics, interpretation of rubrics, non-native teachers, English writing, writing assessment
... Although writing is an indispensable part of the instructional process, it is the most complex skill, challenging both to learn and to teach (Bukhari, 2016). Taking these aspects into account, a great deal of importance is given to enhancing the writing skills of EFL learners in the North Cyprus context, as writing is considered a fundamental skill that deserves more attention than the other skills, with a view to a better academic and professional life (Brooks, 2012; Bostanci and Çavuşoglu, 2018). In North Cyprus, EFL learners study a year of compulsory English at a preparatory school, where they take specific classes in English and continuously write in English, with the objective of boosting their academic literacy skills and meeting future personal and public expectations, depending on their language levels, as well as in line with the major subject they study (Turgut and Kayaoglu, 2015). ...
Full-text available
Flipped learning models are considered important elements of English as a foreign language (EFL) writing courses for advancing EFL learners' writing skills. Notably, studies examining the efficacy of in-class and out-of-class writing models in flipped classroom settings for online EFL writing courses remain a focus of research in the Turkish Cypriot context. This investigation aimed to determine the more efficient flipped learning model, in-class vs. out-of-class writing, for the purpose of helping instructors to advance their EFL learners' writing achievement in an online writing setting. In addition, this study sought to reveal the EFL learners' perceptions toward learning writing through in-class and out-of-class flipped learning writing models. A mixed methods research design was applied to achieve the aforementioned aims. Twenty-eight EFL learners studying at a private university's English Language Teaching department constituted the participants of this study. As the findings pointed out, the EFL learners in group A, who wrote their essays in class, outperformed those in group B, who wrote their essays out of class. Moreover, it was found that the majority of the participants had more positive perceptions toward the in-class flipped classroom writing model. This study highlights that better learner performance is achieved when learners write during the online class session with the support of the instructor when implementing a flipped classroom model to teach EFL writing.
... The students' written essays were assessed for quality according to the Jacobs et al. (1981) scoring rubric. This scoring scale was chosen for its reliability, as scholars and researchers have widely used it in similar studies (e.g., Al-Zankawi, 2018; Brooks, 2012). The scoring rubric, which assessed essay writing ability in EFL academic contexts, targeted five major writing aspects: content, organization, vocabulary, language, and mechanics. ...
Full-text available
This article seeks to restore the notion of the classroom in light of emerging pedagogies. It provides a brief introduction to the theory of multiple intelligences and its potential gains in ELT. Despite the variety of teaching methods and approaches implemented in classrooms, students’ results in some topics have remained below the par of teachers’ expectations. This article attempts to provide a solution by teaching students according to their differences and learning preferences. It investigates the impact of accommodating students’ intelligence profiles in writing classes. To fulfill this objective, 114 male and female participants majoring in English at the University of Kairouan were asked to write a five-paragraph essay and respond to a writing strategies questionnaire and a multiple intelligences inventory. The collected data were statistically analyzed using Pearson correlation and ANOVA techniques to probe possible correlations and predictability levels between MI profiles and writing strategies. The findings indicated a significant degree of correlation between learners’ multiple intelligence profiles and the writing strategies they use when writing. As for the impact of multiple intelligences on writing quality, the results revealed weak or no significant correlations. It is then suggested that the same study be further elaborated within the same L2 context with a larger population to acknowledge learners’ differences and learning preferences and benefit from new dimensions in teaching paradigms. Several concerns motivate this work: the chronic deterioration of learners’ results in different language topics, the current upheaval caused by the pandemic, and the lack of homogeneity in the syllabi provided to university students. For these reasons, implementing the Multiple Intelligences Theory (Gardner, 1983/1999) and Computer Assisted Language Learning in the university-level classroom could boost teaching and learning, reduce apprehension, and foster learning achievement.
So, knowing learners’ individual differences and preferences and integrating CALL in the classroom would allow teachers to establish a “broad range of teaching strategies with their students” (Armstrong, 2009). Gardner (2006) also concludes that “people have very different kinds of minds… then education which treats everybody the same way, is actually the most unfair education”. In brief, incorporating intelligences, whether cognitive or artificial, and embracing blended learning would revolutionize the notion of the classroom and reduce the mismatch between teaching input and learning output. The new CLASSROOM would therefore be effective, inspiring, and forward-looking.
... According to Applebee & Langer (2011) and Graham, Cappizi, Harris, Hebert, & Morphy (2014), the reason for minimal or partial mastery of writing lies in the fact that writing activities across the different educational disciplines involve writing without composing (note-taking, filling in blanks on a worksheet, one-sentence responses, etc.), which does not develop essay writing. On the contrary, Brooks (2013) stated that instructors should focus not only on the sentence level but also on the use of rhetorical and discourse skills if they wish to enhance their students' writing skills. ...
Full-text available
Background: Academic writing is a complex and demanding activity in which students have to regulate their (meta)cognitive, motivational, and linguistic processes, and self-regulatory writing strategies might serve as a tool to accomplish writing tasks. The research was done as part of a verification of Zimmerman & Risemberg’s (1997) model of self-regulation in writing. Previous research on the relationships between students’ self-regulated learning (SRL) and writing performance has suggested a positive impact. Purpose: This paper provides insights into Croatian university students’ first/second language (L1/L2) writing performance in relation to SRL strategy use. Method: Students’ written performance in both L1 (Croatian) and L2 (English) was assessed, and the contributions of SRL and sociodemographic factors were explored. A total of 104 students from the initial and final years of a teacher education program were included in the research. A quantitative research method was used, including the following instruments: the learning orientation scale, the perceived academic control scale, the Croatian version of the values subscale, and a writing strategies questionnaire. Results: Descriptive analyses revealed that students’ L1/L2 writing proficiency was average, with no difference between L1 and L2 writing proficiency. Furthermore, the study showed that students mostly adopted a learning goal orientation, found writing tasks valuable, and perceived academic control over these tasks. Participants mostly used the most effective writing strategy: checking and correcting the text. The final-year students had better L1 writing proficiency than the initial-year students. Such results were expected, since students were exposed to extensive L1 academic experience, which was not the case with their exposure to learning English as a foreign language (EFL learning), resulting in a lower level of L2 essay writing proficiency.
Success in L1 writing proficiency was explained more by cognitive and less by sociodemographic and motivational factors. The greater the perceived academic control over writing assignments and the lower the goal orientation toward avoiding effort, the greater the success achieved. Success in L2 writing proficiency was mostly explained by cognitive factors, but also significantly by some sociodemographic and motivational factors. The higher the L2 GPA, and the less the asking-for-help and writing-by-model strategies were employed, the greater the success achieved in writing assignments. The study indicated the importance of mastering SRL, especially its cognitive factors, in both L1 and L2 learning. Implication: The implications of the study were discussed, which may help L1/L2 teachers teach their students SRL writing strategies by which students could self-regulate their thoughts, feelings, and behaviours throughout the writing process to achieve academic success.
... Prior to the PGA, many writing instructors concentrated their attention on the content of their students' writing, not on the organization of the academic writing process (Furneaux, 1995). Brooks (2013) affirmed that instructors should focus not only on the sentence level, but also on the use of rhetorical and discourse skills if they wish to enhance their students' writing skills. Viewing writing as a communicative task (Badger & White, 2000), using the PGA for teaching academic writing has proved effective. ...
Full-text available
This study explored the feasibility of using a process-genre approach (PGA) for teaching academic writing from the perspective of EFL undergraduates. The sample consisted of 15 students enrolled in a four-year English program at the College of Education in Socotra, Yemen during the academic year 2018-2019. The study followed a pre-experimental design in which a pretest was given to the sample, and an extensive 30-hour program was pursued using the PGA. Additionally, ten informants were singled out for interviews to explore their opinions about the PGA-based teaching they experienced during the experiment. A Wilcoxon signed-rank test was used to calculate the significance of students’ improvement in opinion essay writing (Z = 3.408, p < 0.05) between the pretest and posttest, in favor of the latter. The findings also revealed that students had positive perceptions of the PGA applied by their instructors. The findings suggest that applying such an approach in writing courses could engage learners in writing practices that they view positively.
Full-text available
Using the ICNALE-Edited Essays, a newly developed learner corpus module that includes both learners' original essays and edited versions by professional proofreaders, we examined how many words learners in China, Japan, Korea, and Taiwan recognized, which words or trigrams they used in essays, and the ways in which they were edited by proofreaders. First, concerning the learners' vocabulary knowledge, we have discovered that (i) the size of the learners' vocabulary is estimated to be 3,333 words on average; and (ii) the receptive vocabulary size is larger for Korean learners, while being smaller for Japanese and Taiwanese learners. Concerning word use, we have concluded that (i) learners use approximately 1,440 lemmas on average in two types of topic-controlled essay writing; (ii) Japanese learners use less varied vocabulary than others; (iii) learners overuse (bare) singular nouns, basic verbs, a particular group of logical connectors; and (iv) they underuse subjunctive/epistemic modal verbs and possessive or demonstrative determiners. Moreover, concerning learners' trigram use, we have discovered that (i) learners use 14,000 types of trigrams on average; (ii) Japanese learners use less varied types of trigrams; (iii) learners overuse trigrams including (bare) singular nouns, inappropriate definite articles, inappropriate variants of 'it is adjective for person to do' structure, and quantitative amplifiers. Finally, concerning the classification of learner essays, it has been suggested that (i) Japanese learner essays are deviant and are distinguished from other learner essays; (ii) original essays and edited essays are neatly classified for all learners; and (iii) the effect of editing is relatively larger for Korean and Taiwanese learners and smaller for Japanese and Chinese learners.
This classroom-based action research (CBAR) corroborated our belief in the valuable role rubrics play in a tertiary L2 writing context where English is the medium of instruction. The three-stage CBAR involved ongoing discussions between us, two writing teacher-researchers, as we adapted our teaching and assessment strategies to explore the potential of rubrics as formative tools. This study confirmed the proactive role rubrics can play in teaching writing and in promoting successful partnerships between teachers and students during the assessment process. A major finding of our study was the multifaceted function of rubrics as a driver of change in practitioners’ approaches to teaching and assessing writing, as well as a tool that enables students to take ownership of the different stages of their writing.
Full-text available
The Common European Framework of Reference (CEFR) is intended to serve as a basis for the creation of didactic materials, language certification, instruments of assessment, and curriculums. Almost twenty years after the Framework was passed, the communicative approach has been promoted and the levels of language proficiency have been unified in Europe. This research aims at checking the adaptation of the speaking rubrics of two English official certificates, Trinity College and Spain’s Official School of Languages, in order to determine whether they follow the Framework’s guidelines.
Full-text available
The Common European Framework of Reference (CEFR) was developed by the European Council with the intention of providing a comprehensive basis for the creation of language syllabi and curriculum guidelines, together with the design of teaching materials, language certificates and instruments of assessment. The CEFR has been implemented in Spain through different education laws and has prompted the introduction of the communicative approach and the use of new instruments of assessment such as rubrics. Nonetheless, almost twenty years after the CEFR was passed, little research has been conducted on how the Framework has been implemented. The current research stems from this line of inquiry, as it intends to check how the CEFR has been adapted in the rubrics used for the assessment of the writing skill in two main English certificates: the Cambridge Assessment English FCE and the Trinity College ISE-II.
Full-text available
Instructional rubrics help teachers teach as well as evaluate student work. Further, creating rubrics with your students can be powerfully instructive.
Instructors and administrators in the portfolio program at City University urgently desired to standardize their evaluations of students' writing - to make their judgments quick, easy, and homogeneous. Because they refused to compromise the rhetorical and pedagogical integrity of their decisions, however, participants in this study found that evaluative ambiguity and conflict stubbornly remained. Though extremely frustrating for the writing faculty involved, the evaluative crises they experienced set the stage for a radical reconceptualization of the process of standardization. In fact, their struggles delivered communal writing assessment to the doorstep of hermeneutic standardization, a paradigm that can accommodate both of writing assessment's historically antagonistic commitments: to fairness and consistency as well as to the diversity, complexity, and context-dependence that are characteristic of rhetorical experience. Using qualitative methods to analyze observation- and interview-based data as well as written documents, I explore how City University's writing instructors grappled with their crises of standardization. Participants experienced multiple breakdowns in the project of standardization, of which this article details the two most severe: crises of textual representation and crises of evaluative subjectivity. I conclude by examining conflicting interpretations - psychometric and hermeneutic - of City University's crises. In advocating the second interpretation, I argue that viewing City University's struggles through a hermeneutic lens can lead to revised understandings and practices that allow teachers of composition to honor more fully their theoretical, pedagogical, and ethical commitments when judging students' writing.
Maja Wilson believes that efforts to standardize language through rubrics and generalized comments provide a disservice to students and undermine the power of the reading and writing experience. She advocates making use of our subjectivity as readers, conceding that her values cannot be standardized and often shift in response to interactions with students and their writing.
This book provides an authoritative, readable and up-to-date guide to the major themes and developments in current writing theory, research and teaching. Written in a clear, accessible style, it covers theoretical and conceptual issues, addresses current questions and shows how research has fed into state-of-the-art teaching methods, practices, materials and software applications. Thoroughly updated and revised, this second edition also contains a new chapter on important issues in writing such as genre, context and identity.
This volume synthesizes and critically analyzes the literature on response to the writing of second language students, and discusses the implications of the research for teaching practice in the areas of written and oral teacher commentary on student writing, error correction, and facilitation of peer response. The book features numerous examples of student texts and teacher commentary, as well as figures and appendices that summarize research findings and present sample lessons and other teaching materials. It is thus simultaneously comprehensive in its approach to the existing research and highly practical in showing current and future teachers how this material applies to their everyday endeavors of responding to student writing and teaching composition classes. Response to student writing--whether it takes the form of teachers' written feedback on content, error correction, teacher-student conferences, or peer response--is an extremely important component of teaching second language writing. Probably no single activity takes more teacher time and energy. Response to Student Writing is a valuable theoretical and practical resource for those involved in this crucial work, including L2 composition researchers, in-service and preservice teachers of ESOL/EFL writers, and teacher educators preparing graduate students for the teaching of writing. © 2003 by Lawrence Erlbaum Associates, Inc. All rights reserved.
Research in L1 and L2 student writing has suggested that teacher response to student compositions is most effective when it is given on preliminary rather than final drafts of student essays (Freedman, 1987; Krashen, 1984). One area of research in L1 and L2 composition is the assessment of student reactions to the feedback they receive from their teachers (Cohen & Cavalcanti, 1990; Hedgcock & Lefkowitz, 1994; Leki, 1991; McCurdy, 1992). However, most previous studies of ESL student response to their teachers' written comments on their essays have been undertaken in single-draft, rather than multiple-draft, contexts. In this study, 155 students in two levels of a university ESL composition program responded to a survey very similar to the ones utilized by Cohen (1987) and McCurdy (1992) in single-draft settings. The results of the survey indicated that students pay more attention to teacher feedback provided on preliminary drafts (vs. final drafts) of their essays; that they utilize a variety of strategies to respond to their teachers' comments; that they appreciate receiving comments of encouragement; and that, overall, they find their teachers' feedback useful in helping them to improve their writing. Responses also showed that students had a variety of problems in understanding their teachers' comments, suggesting that teachers should be more intentional in explaining their responding behaviors to their students.