Examiners’ reports on theses: Feedback or assessment?
Vijay Kumar a, Elke Stracke b,*
a University of Otago, Higher Education Development Centre, PO Box 56, Dunedin 9054, New Zealand
b University of Canberra, Faculty of Arts and Design, University Drive, Bruce ACT 2601, Australia
Keywords: Assessment; Examiner reports; Feedback; Postgraduate students; Thesis examination
Abstract
Traditionally, examiners’ reports on theses at the doctoral and Master’s level consist of two components: firstly, summative assessment, where a judgement is made about whether the thesis has met the standards established by the discipline for the award of the degree, and, secondly, the developmental and formative component, where examiners provide feedback to assist the candidate to revise the thesis. Given this dual task of providing assessment and feedback, this paper presents the findings of a small-scale empirical study that aimed to gain insights into the connection or potential disjunction between feedback and assessment in six examiners’ reports. The main aim of this study was to identify the nature of examiners’ reports on Master’s and doctoral theses: is it primarily assessment or feedback? Our study suggests the crucial role of feedback in postgraduate thesis examination practice. Without feedback, there is little impetus for the candidate to progress, to close the gap between current and desired performance, and to attain the level needed to become a member of the scholarly community. The study concludes with the implications that a stronger focus on feedback might have for all stakeholders involved in the thesis examination process.
© 2011 Elsevier Ltd. All rights reserved.
1. Introduction
Examiner reports play a crucial role in postgraduate examination at both the Master’s and doctoral level. At both levels, the examiner report is the culmination of many years of supervised research. In Australia and New Zealand, for example, external evaluation of a written thesis is essential if candidates are to be awarded the degrees for which they have entered. Two or three examiners, who are not members of the supervisory committee, usually mark the thesis. Similarly, in the UK or countries that follow the UK system (like Malaysia), examiners assess the thesis and prepare a written report. Even though systems might vary with regard to any further examination requirements, for instance oral examinations, referred to as viva or defence, all systems require written examiner reports. In this study we focus on two sets of examiner reports from New Zealand and Malaysia respectively.
Examiners may consider the examination as a ‘gate keeping’ task, and/or as an opportunity to provide developmental
experiences (Joyner, 2003) to the candidate. The examiners usually make a summative judgement and also encourage
developmental experiences in the form of feedback (Kiley, 2009). In other words, a first component of examiner reports is
a summative assessment where examiners make a judgement as to whether the thesis has met the standards established by
the discipline and the university for the award of the degree. The second component is developmental and formative, where
examiners provide feedback to assist the candidate to revise the thesis (Stracke & Kumar, 2010).
*Corresponding author. Tel.: +61 2 6201 2492; fax: +61 2 6201 2649.
E-mail addresses: vijay.mallan@otago.ac.nz (V. Kumar), Elke.Stracke@canberra.edu.au (E. Stracke).
1475-1585/$ – see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.jeap.2011.06.001
Journal of English for Academic Purposes 10 (2011) 211–222
It should be noted that examiners’ assessments of theses are usually non-terminal in the sense that postgraduate candidates are expected to take on board examiners’ comments and revise their work. While an initial assessment is made, which could range from ‘accept with minor corrections’ to ‘resubmit’, it is the norm to change this assessment to accepting the thesis once revisions have been made and the goals met. Recent literature has highlighted the notion that doctoral examiners’ assessments are in fact considered feedback on work in progress (Bourke, Hattie, & Anderson, 2004). Given this dual task of examiners to provide summative assessment and feedback, this study aims to gain insights into the connection, or perhaps potential disjunction, between feedback and assessment in examiners’ reports on theses. To understand this relationship, it is essential to understand what assessment and feedback refer to in postgraduate thesis examination.
2. Theoretical background: assessment and feedback
2.1. Assessment
A conceptual definition of assessment refers to how much learning has taken place as a result of teaching (Gibbs &
Simpson, 2004–05). This definition emphasises, in the context of postgraduate supervision, the often-overlooked role of
supervisor and examiner as educators (Stracke, 2010). In this respect, assessment considers learning outcomes – that is, whether the outcomes meet the standards that have been established. In a sense, assessment provides information about a performance. The performance standards are usually listed as assessment criteria or, in the case of a thesis, usually classified as guidelines for examiners – see Appendices A (Guidelines for Master’s theses at University MAL, Malaysia) and B (Guidelines for PhD theses at University NZ, New Zealand) for examples. It should be noted that in postgraduate education at the
Master’s and doctoral levels the institutions awarding the degree usually prepare assessment criteria. Examiners are asked to
decide if certain learning outcomes have been met. As an example, one of the learning outcomes of the PhD is whether the
thesis makes an original contribution to knowledge. If a candidate has met this criterion, the assumption is that the objectives
of this learning outcome have been met. But even though the examination criteria are made available to the examiners,
examiners may interpret the criteria based on their own scholarly understanding and interpretation. This notion is reported
in a study on doctoral examination (Mullins & Kiley, 2002) where empirical evidence suggests that many experienced
examiners do not make use of institutional criteria when assessing a thesis but rather use their own professional intuition to
assess learning outcomes. There may be the notion of the hidden curriculum (Snyder, 1971) by which examiners assess the
learning outcomes.
A second conceptual understanding of assessment emphasises the view of assessment as educational measurement, that
is, assessment is a measure of competence. Assessment refers to “any appraisal (or judgement, or evaluation)” (Sadler, 1989, p. 120) and has been suggested to serve two purposes: summative and formative. Bloom, Hastings, and Madaus (1971) defined summative assessment as those assessments given at the end of a semester/program or mid-semester with the sole purpose of grading or evaluation of progress. Summative assessment indicates whether a learning goal has been achieved. Summative assessment is a passive measure of performance because it does not normally have “immediate impact on learning” (Sadler, 1989, p. 120). In summative assessment, a final grade is given – in a thesis, it is usually a pass with several degrees of acceptance criteria, which could range from the thesis being accepted, accepted with minor modifications, accepted with major modifications, to a resubmission or even a fail.
In contrast, assessment that is given with an opportunity to improve the task is referred to as formative (Dunn, Morgan,
O’Reilly, & Parry, 2004, p. 18). Formative assessment “does not carry a grade” (Irons, 2008, p. 7) and concerns itself with
improvements to outputs that are developmental in nature. This seems to be pertinent at the postgraduate level, as writing
multiple drafts of chapters and engaging in formative assessment is a norm. Formative assessment is said to incorporate three
main components: diagnosing student difficulties, measuring improvement over time, and, finally, providing information to
improve. Contrary to the passive nature of summative assessment, formative assessment is active in the sense that it triggers
and provides a sense of direction to achieve learning goals.
It has been suggested that when students receive summative assessment, they hardly attend to formative feedback (Butler, 1988). However, Master’s and PhD candidates cannot ignore their examiners’ formative assessment/feedback. It is a requirement of thesis examination procedures that the candidate attends to suggestions and comments (that are usually moderated and prepared by the convenor of the examination), and that revisions have to be made to the satisfaction of the convenor or, possibly, an (internal) examiner.
2.2. Feedback
A first conceptual understanding of feedback is that it closes a gap between current and desired performance (Parr & Timperley, 2010). Feedback has been conceptualised as “information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way” (Ramaprasad, 1983, p. 4). Sadler (1989) adds that what is essential in feedback is that it has to be active, in the sense that once the gap is identified, it has to be closed. Traditionally, feedback is conceptualised as “information provided by an agent (e.g. teacher, peer, book, parent, self, experience) regarding aspects of one’s performance or understanding” (Hattie & Timperley, 2007, p. 81). Feedback is given to ensure that learning goals are met. Sadler (1989) supports this view by noting that feedback is “information given to the student about the quality of performance” (p. 142). In a model of feedback proposed by Hattie and Timperley (2007), effective
feedback involves closing a gap in knowledge. Hattie and Timperley use the term ‘feed up’ to refer to notions of where the learner is going, ‘feedback’ to the notion of what progress is being made to achieve a goal, and finally ‘feed forward’ to refer to the notion of where to next. In terms of postgraduate supervision, the supervisors usually play an active role in all three types of feedback by providing input to ensure that specific learning goals are met, but the resulting thesis is subject to external validation by the examiners. In an examination scenario, the examiners’ conceptualisations of these goals may differ from those of the supervisors. Kiley (2009) summarises supervisors’ challenges when nominating examiners in two categories: professional/academic issues and personality issues. An example of the first category is a different disciplinary perspective, in particular with cross-disciplinary and multidisciplinary dissertations. As for the second category, supervisors and examiners might, for instance, have different understandings of intellectual courtesy and generosity.
A second conceptual understanding of feedback is that it provides developmental experiences and encourages self-regulated learning (for elaborations, see Stracke & Kumar, 2010). In the postgraduate education process, feedback is given during supervision, for instance after the candidate has completed a chapter. Feedback provides opportunities for postgraduate students to practise skills and to consolidate the journey from a zone of current development to a zone of proximal development (Vygotsky, 1978), that is, to move from being a novice to becoming an expert in a specialised field of study, and to achieve the tenacities of self-regulated learning. The main aim of feedback is to reduce “discrepancies between current understandings, performance and a goal” (Hattie & Timperley, 2007, p. 86). In feedback, the focus is on specific aspects that need improvement – both supervisors and examiners may provide such feedback, with the examiners providing the last stage of scaffolding the candidate’s learning experience.
Finally, feedback is often referred to as a form of communication. In a study by Kumar and Stracke (2007) on the doctoral
supervision process, it was reported that through feedback supervisors engage with supervisees. The term engagement refers
to strategies that writers use “to recognise the presence of their readers” (Hyland, 2005, p. 365). This engagement could be
in the form of interactions that could range from referential utterances which provide information, directive utterances which
try to get the hearer (writer) to do something or, finally, expressive utterances which express the speaker’s (supervisor’s)
feelings (Kumar & Stracke, 2007). On a similar note, Higgins, Hartley, and Skelton (2001) highlighted the dialogical role of
feedback. Their argument was based on the notion that feedback should lead to “[d]iscussion, clarification and negotiation”
(p. 274).
The different conceptualisations of feedback that have emerged from Higher Education research focus on teacher/lecturer as well as supervisor feedback. However, examiners of theses also provide developmental experiences (Joyner, 2003) to the supervisee. This is based on the theoretical underpinning that examiners view a thesis as work-in-progress (Bourke et al., 2004; see also Stracke & Kumar, 2010). Therefore, examiners’ reports on research theses usually contain both assessment and feedback.
2.3. Formative assessment and feedback
The distinction between summative and formative assessment may be clear cut: summative assessment makes a judgement call on learning outcomes while formative assessment provides a sense of direction to achieve unattained goals. Formative assessment seems to echo what feedback does, that is, closing a gap. If the information given to close a gap has an immediate impact on learning, it can be considered formative assessment or feedback. By contrast, if the information given does not have an immediate impact on learning, it is summative assessment. In other words, while educators and researchers may use different terms to refer to the process of closing a gap between achieved and desired goals, such as formative assessment or assessment for learning (e.g. Parr & Timperley, 2010), formative feedback (e.g. Kiley, 2009), or (bringing the two terms together) assessment feedback (e.g. Higgins et al., 2001), the terms refer to a common central goal – that is, a trajectory towards the attainment of a learning goal. In this paper, we use the term feedback to refer to this trajectory in an effort to distinguish it clearly from summative assessment.
3. Methodology
Our small-scale study of six examiner reports aimed at identifying the nature of examiners’ reports on research theses, and whether the reports provided primarily (summative) assessment or feedback.
3.1. Data sources and background information
Data for this study constituted six examiner reports that belonged to two sets, one set of three reports on a Master’s thesis
in Applied Linguistics (from University MAL, Malaysia), and one set of three reports on a PhD thesis in Applied Linguistics
(from University NZ, New Zealand). We chose to include both a PhD and a Master’s thesis to gain an initial insight into the
under-researched processes involved in the assessment of postgraduate thesis examination practice (Tinkler & Jackson, 2000,
cited in Mullins & Kiley, 2002, p. 369). Both theses were assessed as a ‘pass with major revisions’. In each case, two examiners
had asked for major revisions, whereas one examiner recommended minor revisions.
The reports from Malaysia were written by two internal examiners (from the department) and one external examiner
(from outside the university). In the Malaysian postgraduate education system, candidates have to undertake 16 credits of
course work and spend two to three semesters to work on a thesis. Two academic staff supervised the thesis. Examiners are
given a set of guidelines to prepare the report (Appendix A). The candidate also has to undergo a compulsory viva as part of
the examination process.
For University NZ in New Zealand, the three examiner reports were for a PhD and the reports were written by two external
examiners (one international examiner, and one within New Zealand) and one internal examiner (same department). It needs
to be pointed out here that, in New Zealand, the thesis is the only compulsory item of examination for a PhD. A team of three
academic staff supervised the PhD. An oral examination is not compulsory but can be requested by examiners. Examiners are
given a set of guidelines to prepare their reports (Appendix B).
3.2. Data management and analysis
All examiner reports were available in electronic form. These reports were tabulated to enable coding of each sentence, our unit of analysis. The descriptive analyses focused on the overall nature of each report, whether the assessor provided mainly
assessment or feedback. Our guiding questions were: Does the examiner provide clear judgement about whether the
candidate has achieved the learning outcomes to be awarded the degree? In these cases, we coded the comment as focussing
on assessment (A). With regard to feedback: Does the examiner identify goals that the candidate has yet to achieve? Most
importantly, does the examiner provide clues for the candidate to close the gap between achieved and desired goals to
become a member of the scholarly community of practice for which the degree will be awarded? Such instances were coded
as comments with a focus on feedback (F).
Both researchers read the six reports and identified and categorised all sentences into feedback or assessment based on the theoretical underpinnings discussed above. At the outset of the research, both researchers drew up a list of the possible coding categories, ‘assessment’ (coded as A), ‘feedback’ (coded as F), ‘both’ (coded as B) or ‘don’t know’ (coded as D), based on the data collected. Once the researchers had completed the categorisation individually, they exchanged coding categories and discussed differences until there was consensus. At times, there was discussion about whether a comment was mainly assessment or feedback, and sometimes comments showed overlaps. Such instances were double coded, as comments can fall
into both categories. For instance, in a comment like “However, no reference has been made to alternative perspectives which could illuminate the development of inner mental processes – which Vygotsky referred to as ‘cognition in flight’”, the first half of the sentence contains a clear judgement, i.e., alternative perspectives are lacking according to this examiner. However, the subordinate clauses contain clues to the candidate about how to integrate such a perspective. Hence, this comment is an example of double coding as it contains both assessment and feedback (B).
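By way of illustration only (the authors coded their data by hand; the function and variable names below are hypothetical, not part of the study), the double-coding scheme described above could be sketched as a small tallying routine, where a sentence coded B counts towards both assessment and feedback:

```python
from collections import Counter

# Coding scheme from the study: A = assessment, F = feedback,
# B = both (double coded), D = don't know.
VALID_CODES = {"A", "F", "B", "D"}

def tally(coded_sentences):
    """Tally codes for one report. Each item is a (sentence, code) pair.
    A 'B' sentence contributes to both totals, mirroring double coding."""
    counts = Counter()
    for _sentence, code in coded_sentences:
        if code not in VALID_CODES:
            raise ValueError(f"unknown code: {code}")
        counts[code] += 1
    assessment_total = counts["A"] + counts["B"]
    feedback_total = counts["F"] + counts["B"]
    return counts, assessment_total, feedback_total

# Hypothetical mini-report of three coded sentences.
report = [
    ("The literature review is fairly comprehensive.", "A"),
    ("The candidate should add more recent references.", "F"),
    ("No alternative perspectives are discussed, which "
     "Vygotsky's notion of 'cognition in flight' could supply.", "B"),
]
counts, assessment_total, feedback_total = tally(report)
```

With this toy data, the single double-coded sentence raises both totals to two, which is how overlapping comments were handled in the analysis.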
4. Findings and discussion
The first part of this section presents and discusses the results of the analysis of the examiners’reports. It describes the
overall nature of the set of three reports on a Master’s thesis (from University MAL, Malaysia) (4.1) and, subsequently, of the
set of three reports on a PhD thesis (from University NZ, New Zealand) (4.2). In the second part of this section, we discuss the dual task of examiners – assessment and feedback – and look at the connection or potential disjuncture between these two essential components (4.3). It should be recalled that the focus of our descriptive analysis was to find out whether examiners
provide mainly assessment or feedback.
4.1. University MAL, Malaysia
As has already been stated, the Master’s level examination at University MAL included three examiner reports. The
examiners followed the guidelines quite closely by addressing all twelve points (Appendix A).
Examiner 1 at University MAL (E1-MAL) recommended ‘accept with major revisions’ based on his summative judgement that the thesis needed depth in the form of conceptual and theoretical discussions. Other summative assessments include judgements about the formulation of the research questions: “Further, the research questions are presented as two ‘research propositions’ at the end of this section on study purpose” (A).
Even though the examiner started the report with a negative tone by highlighting weaknesses in the choice of the title and moving on with critical comments on what should be included in an abstract, he provided a good deal of feedback to enable the candidate to move on with the task of meeting the required standard.
His critical assessments were always well substantiated with evidence from the thesis. When providing summative assessment, E1-MAL also made suggestions to enable the candidate to close the gap. For example, when E1-MAL found that the scope of the study was inadequate, he suggested that the candidate should provide “a richer and ‘thicker’ explanation of […]” (F).
In another instance, even though he summatively concluded that the literature review “has been fairly comprehensive and critical” (A), he took the initiative to suggest new references to assist the candidate to revise. For example, when he made an assessment on the quality of the literature review, he suggested a current reference so that the candidate can “locate her literature review (and her study) in clearer perspective amid a comprehensive array of issues” (F).
The report by E1-MAL contains both assessment and feedback. What seems unique about this report is that even though the examiner was critical at the beginning, he provided useful comment in the form of ‘feed forward’ to enable the candidate to revise. Besides this, the examiner also played a collegial role by providing guidance to the candidate to ensure that she was able to close the perceived gap between her actual and desired performance. Assessment that required revision was always supported with clear and well-directed feedback. This seems to give a clear indication that E1-MAL viewed the thesis as work-in-progress and thus focused on providing clear directions if certain criteria for the award of the degree were not met.
Examiner 2 at University MAL (E2-MAL) also recommended that the thesis be accepted with major modifications. E2-MAL’s report, however, started off with quite negative comments (A) such as “grammatically incorrect”, “inaccurate”, “does not mention”, “does not explain”, “lack of”, “too brief”, which indicated that the candidate had not attained certain learning goals. These types of comments were evaluative in nature. There were also instances of directing the candidate with sentences starting with “The Candidate needs to […]” (F) and “The Candidate should not […]” (F), as well as “It is the supervisors’ duty to check that […]” (F). It should be noted that directives seldom allow for any forms of engagement. E2-MAL’s report was eight pages in length, and the majority of the comments were assessment in nature. Unlike E1-MAL, who provided ‘feed forward’ after every assessment, E2-MAL often stopped at the evaluative judgement stage. An example of this is the following: “There is lack of recent literature related to […] in the dissertation. (A) The studies cited are more than 10 years old” (A). The candidate was not given any further guidelines or a sense of direction on how this apparent gap could be closed.
However, E2-MAL provided well-directed feedback on surface features to guide the candidate to revise. For example, comments such as “Figure 4 should bear a specific heading” (F) or “The citation for any quotation should include a page number” (F) gave a clear indication to the candidate about what needed to be done.
In sum, E2-MAL’s report was more of an assessment where the examiner strongly embraced her role as gatekeeper.
The assessment focused more on what the candidate had not achieved than on what had been achieved. The report did not
engage the candidate to make new discoveries or provide indications on how she could close the perceived gap and move
forward from her zone of current development to the zone of proximal development needed for the Master’s degree she was
seeking.
E3-MAL’s report consisted mostly of assessment. Unlike E2-MAL, this examiner highlighted what had been achieved. This is understandable as she recommended ‘accept with minor changes’. Her assessments were mostly positive and included praise, such as “[t]his timely work on … is thus of great relevance and value […]” (A) or “I am especially appreciative of the way in which [the candidate] has managed to keep her writing concise yet comprehensive” (A). The positive assessments highlighted the candidate’s achievements. For example: “The research issues that have been selected are current and very worthy of investigation” (A) or “The main strength of the study lies in its scope and relevance” (A). There was also negative assessment. However, it was less directive, and often followed by an invitation to the candidate to consider her stand, for instance “I think that […] is more a research than a teaching tool” (F). Feedback such as this enables a candidate to self-reflect and to make new discoveries – this is a tenet of self-regulated learning. It should also be noted that E3-MAL was collegial and expressive when making requests to the candidate. Phrases such as “It would have also been good if the candidate […]” (F) or “It would also have the added benefit of […]” (F) seem to indicate the dialogical nature of this report. Overall, E3-MAL’s report provided positive assessment as well as feedback that would have encouraged the candidate to undertake the recommended revisions.
4.2. University NZ, New Zealand
The PhD level examination at University NZ included three examiner reports. The three examiners followed the guidelines
(Appendix B) in varying degrees.
Examiner 1 at University NZ (E1-NZ) recommended ‘accept with minor revisions’ for the PhD thesis. Hence, it is not
surprising that the assessment given is mainly very positive and encouraging. The report primarily offers summative
assessment. Positive comments dominate the report, such as:
“The research methods, analyses, and interpretations of them are appropriate to the research questions posed, which in
turn are of significance not only for educational practices but also for theories of literacy development and writing
processes”(A).
The examiner clearly addresses the questions given by the University and makes clear judgements when praising the candidate for his “distinctive contribution”, for offering “intensive and thorough” analyses, for writing “clearly” and presenting “coherently.”
E1-NZ’s feedback can be interpreted as ‘feed forward’, as he suggests avenues to the candidate of ‘where to next’. For instance, this examiner encourages the candidate to publish his research when he says that he expects “an article-length manuscript about the thesis [that] would be welcomed at such international journals as […]” (F). He also points “toward future studies to be conducted following from the present research” (F) and provides the candidate with a few suggestions about how he could achieve this, for instance by expanding the task types used with the research participants.
To sum up, this report provides clear judgement (summative assessment) and also some clear feedback to the candidate. The feedback is more precisely ‘feed forward’ and shows the candidate clear avenues for developing the research further, now that he is at the threshold of becoming a full member of the community of practice of academics. It is noteworthy that the overall positive and encouraging tone of this report can be traced back to the frequent use of comments that show the examiner’s thoughts, feelings and opinions. Comments like “I expect” or “I was impressed” pave the way for a dialogue, for some interaction between the assessor and the candidate, which is an important part of the learning experience.
Examiner 2 at University NZ (E2-NZ) recommended ‘accept with major revisions’. He offers clear summative assessment as
well as feedback by offering many suggestions for the candidate to address the concerns that led to his judgement. His
assessment includes both positive and negative summative assessment, as these two examples illustrate:
“The thesis clearly indicates that the candidate is well able to select, present and interpret the primary data that has
been collected”(A).
“There is a lack of coherence between the primary data that illuminate the mental processes of student writers and
the secondary data (collected apparently exclusively by email interviews) upon which the argument of the thesis is
based”(A).
While this examiner’s comments also fall mainly under summative assessment, he does offer clear guidance to the
candidate to address the concerns. His feedback is often directive and includes many suggestions, such as:
“It is suggested that a stronger theoretical discussion should be included in the final chapters of the thesis” (F).
“These secondary sources should be more fully explained and referenced, and relevant contextual information from these (and other?) sources should be fully considered in Chapter 1, to be drawn upon in the discussion in the final two chapters” (F).
The candidate receives clear guidance about how to close the gap between what the examiner expects and the current draft of the thesis. The use of the passive voice and modal conditionals allows for directness but focuses on the written product, and not on the candidate-writer. This allows the candidate to look at his writing in an objective way and start the revisions, as requested by this examiner.
Like E2-NZ, Examiner 3 at University NZ (E3-NZ) also recommended ‘accept with major revisions’. His report offers many instances of summative assessment and some feedback. In this report the candidate finds most of the (summative) assessment in the first half of the report. First, statements clearly address the university guidelines and answer the questions provided (Appendix B). However, the examiner makes comments that go beyond the questions and expresses his concerns with the thesis as it stands, for instance:
“The positive comments made in [A] notwithstanding, the thesis falls short of dealing with the topic in terms of depth and scope to fully meet the requirements of the PhD degree” (A).
“In this respect, the thesis does not seem to have much to contribute to the field” (A).
The candidate is confronted with a long list of such statements that fall under the negative summative assessment
category, before coming to the part where this examiner suggests how to improve the critical aspects. The examiner offers
many useful suggestions about how to go about the recommended revisions:
“Granted that this is a qualitative study […], some statistical data on the students’ revision strategies will be very useful and in fact necessary” (F).
“This is not just for a better understanding of each revision strategy, but such statistical data could also be subjected to further in-depth analysis” (F).
Often, this examiner offers feedback that opens up a dialogue with the candidate, for example when he asks a question, as if the candidate was with him, face-to-face, such as “In Table 1, didn’t Melinder also generate ideas?” (F) or “Quality by whose expectations?” (F).
Similarly, comments that reveal more of the examiner’s own opinion show the power of such feedback (Kumar & Stracke, 2007). It
can lead to a dialogue between the examiner and the candidate, despite the distance between the two interlocutors. In the
following examples, these expressive comments invite the candidate to look at things from a different perspective, namely the
examiner’s:
“From my personal experience, I can say that sometimes [I] do planning in the back of my mind when doing something
unrelated (e.g. taking a shower, cooking a meal, etc.)” (F).
“Passim: What is somewhat puzzling […]” (F).
The examiner’s use of exclamation marks on a couple of occasions also underlines his involvement with the text he is
reading and his reaction to it:
“Obviously it did, because they all received very good grades!” (F)
“The students have certainly produced quality texts by the teacher’s expectations, but not by the examiners’!” (F)
To conclude this first part of this section, it is worth pointing out that, despite the variety in this small sample of
examination reports, all examiners provide both (summative) assessment and feedback, albeit with different weightings and
in different ways. As for the proportion of assessment and feedback in these six reports, we can observe some tendencies.
Despite the fact that all examiners provide both assessment and feedback, there are notable differences. E2-MAL provides the
least feedback. E1-MAL and E2-NZ provide assessment and feedback in a more balanced way than E3-MAL, E1-NZ and E3-NZ,
who tend to offer more assessment. Regardless of their final recommendation, these examiners’ reports also show variety
with regard to the level of directness, the use of the expressive function, and/or the use of feedback as ‘feed forward’. Future
research, with a larger sample, might look at these factors in more depth.
V. Kumar, E. Stracke / Journal of English for Academic Purposes 10 (2011) 211–222
In the following, we focus on this dual role of the examiners as providers of assessment and feedback and look at the
connection or potential disjuncture between these two essential components of examiners’ reports.
4.3. The dual task of examiners: assessment and feedback in postgraduate thesis examination
Examination at the postgraduate level differs markedly from other assessment contexts. In a public examination or a final
assessment at the end of a semester, summative assessments are the norm: often the candidates do not have any opportunity
to see their work after the assessment, or to make amendments to improve it. At the postgraduate level, however, the
examination is entirely different in the sense that a piece of work (the thesis) is written under supervision and then sent out for
external assessment. Even though recommendations regarding the assessment of the thesis are made, the thesis can still be
re-worked to ensure that the learning goals are met to the satisfaction of the examiners. In other words, examiners’
assessments of a thesis are not final, and the candidate is given the opportunity to close any gap identified by the examiners.
For this, the candidate relies on the feedback offered by the examiners.
This particular examination situation is clearly evident from the data presented. All six examiners seem to play the role of
gatekeepers. All of them make assessments, some positive, some negative in tone. The positive assessments indicate that the
examiners are satisfied that the candidates have met the expected goals and attained the learning objectives stipulated for the
degree that they are seeking. The negative assessments are the ones that can create a disjuncture when examiners perceive
their roles purely as gatekeepers. While most of the examiners provide feedback to ensure that the potential disjuncture
between assessment and feedback does not occur, sometimes examiners stop with the provision of their (summative)
assessment. Stopping at this point of the assessment does not seem to provide any impetus for the candidates to progress, to
close the perceived gap in their performance, and to attain the level that will allow them to become members of the scholarly
community to which they wish to belong.
What seems to be the underlying concern potentially leading to disjuncture is the role of feedback in the assessment of
a thesis. Our data appear to indicate both a linear and recursive process of assessment and feedback. The process is linear
when the entire thesis, or some aspects of it, have met the standards as stipulated by the institution or decided by the
examiner. However, it becomes recursive if the standard has not been met. When a thesis is sent out for assessment,
examiners have to decide whether or not the thesis has met the standards for the award of the degree. If it has not, the
examiners are required to provide guidance to the candidate (as stipulated in the guide for examiners). This feedback is of
crucial importance for the student to be able to close the gap between current and desired performance (Parr & Timperley,
2010). Once feedback is provided, the candidate is expected to revise to the expected standards. A subsequent assessment is
made to ensure that the standard has been met. If it has, the thesis is accepted and the candidate has rightfully become
a member of the community of practice.
5. Conclusion and implications
With this study we aimed to gain insights into the connection and/or potential disjunction between assessment and
feedback in examiners’ reports through an empirical study of such reports. The main aim was to identify the nature of
examiners’ reports by finding out whether they primarily provide assessment or feedback, and to explore the link, or
possible disjuncture, between the two.
We acknowledge that the small sample size does not allow for any generalisation. We undertook this analysis to gain an
initial data-grounded insight into questions around assessment and feedback in thesis examination that we, in our role as
academics in Higher Education who supervise and examine postgraduate theses, had been reflecting on in a more theoretical
way. A larger sample of examiners’ reports on postgraduate theses at both Master’s and PhD level, from various disciplines
and from other countries, would allow for more insights and more robust findings. Notably, this small-scale study does not
show any distinct difference in the way the examiners write their examination reports at Master’s or PhD level. Further
research could examine differences between examiners’ feedback on Master’s and PhD theses and develop recommendations
for different approaches by supervisors, including their feedback.
Limitations notwithstanding, our study suggests the crucial role of feedback in postgraduate thesis examination practice.
Without feedback, there is little impetus for the candidate to progress, to close the perceived gap, and to attain the level
required to become a member of a scholarly community. Summative assessment alone cannot achieve this goal.
Given the crucial role of feedback in the assessment process and for the successful completion of the examination process
– candidates will only be awarded the degree if they succeed in closing the gap between actual and desired performance,
between the submitted draft of the thesis and the final one – there appears to be a need to emphasise the role of feedback in
the postgraduate thesis examination process. Such an emphasis has implications for all parties involved: examiners, the
university, supervisors and candidates alike.
Even though the majority of comments made by these examiners were essentially summative assessment, one can also see
that all examiners, regardless of their recommendation, looked at the thesis as work-in-progress and provided feedback.
Whereas E1-NZ considered the thesis as a useful basis for a journal article and tailored his feedback towards this goal, E2-NZ
and E3-NZ, who saw shortcomings in the thesis, clearly made suggestions to the candidate as to how he could overcome these
and close the gap between the current draft and the expected revised version. Similarly, E1-MAL, E2-MAL and E3-MAL
provided differing degrees of guidance to the candidate.
However, the instances of disjuncture that we found in the data strongly suggest that examiner guidelines could spell out
more clearly that examiners need to offer such feedback, at least with regard to all aspects of the thesis that require changes.
Such an emphasis might require some policy changes at institutional level. For example, University NZ (New Zealand) could
make the instructions that allow for feedback (“The reports should also contain specific comments on those parts of the thesis that
the examiners believe to require correction or amendment”) more prominent (through formatting devices, for example) or
otherwise emphasise the importance of such comments. Likewise, even though University MAL (Malaysia) highlights at the
very beginning that “[t]he objective is to help students to effectively incorporate all recommended amendments based on these
comments”, the majority of the document asks the examiner to “determine” whether a certain standard has been met, thus
focussing strongly on the assessment aspect of the examination report.
Another implication of a stronger emphasis on feedback in thesis assessment is the need for examiner training that would
allow examiners to reflect on, and more willingly embrace, their dual role as assessor and feedback provider. This would also
allow for a better understanding of the educational role of supervisors (Stracke, 2010) and examiners. Current training
materials (Evans & Tregenza, 2002) could also include this important aspect of the examination process more strongly.
Finally, we would suggest that if an examination report included a more dialogical type of feedback, i.e. feedback that
encourages reflection, it could lead more effectively to the desired learning outcomes at the postgraduate level. As this study
shows, summative assessment alone does not augur well for the attainment of learning goals. Possibly, an emphasis on
feedback in assessment will also lead to a greater understanding of the cycle of feedback and revision on the candidate’s side.
Acknowledgement
The authors gratefully acknowledge the generosity of the examiners and the Malaysian student who contributed to this
study. The authors wish to record their appreciation to the anonymous reviewers and to thank the Universiti Putra Malaysia
and the University of Canberra for supporting this research. The authors contributed equally to this paper.
Appendix A. Guidelines for the preparation of a Master’s thesis examination report from University MAL (Malaysia)
Guidelines for the preparation of a thesis examination report
Please submit a DETAILED REPORT using the following guidelines when examining the thesis. The objective is to help
students to effectively incorporate all recommended amendments based on these comments.
The examiner is expected to keep the thesis material confidential until it is made public by the student through publication
or by deposition in the library.
1. Thesis topic (Title)
Determine whether the title is grammatically correct, contains important and pertinent keywords found in the abstract,
and reflects the actual research issues addressed in the study. If the title requires improvement, do suggest a suitable title.
Document […] (Appendix 1) provides additional guidelines for determining a suitable title for a thesis.
2. Abstract
Determine whether the abstract accurately reflects the study that was conducted. The abstract should contain a (i) brief
statement of the problem or objectives, (ii) concise description of the research method and design, (iii) summary of the major
findings and (iv) brief conclusion.
3. Research problems and objectives
Determine whether the background to the pertinent research issues is well discussed, the research problems are well defined,
and the hypotheses address the defined research problems. Determine whether the objectives are clearly stated and met by the
research design and findings. Suggest improvements, if necessary.
4. Scope and relevance
Determine whether the scope of the study is appropriate for the degree for which it is intended. The level of appropriateness is
a relative concept, and therefore, needs to be addressed by considering the following factors:
a. Field of study (example: pure sciences and social sciences have different perceptions of scope)
b. Research issues in a particular field; and
c. Practicability of the addressed research problems (example: the scope could be limited by financial, time and other
constraints)
d. Research objectives
5. Literature review
Determine whether the literature review:
a. is relevant to the research issues
b. is comprehensive and takes into consideration past and current literature
c. is well reviewed, summarised, organised and consistent with the sequence of the research issues addressed in the study
d. is proportionate relative to the rest of the thesis
e. does not contain too much textbook material (such material should be kept to a minimum)
6. Methodology/Materials and methods
Determine whether the:
a. collection, strengths and weaknesses of the data used in the study are clearly specified
b. research design (e.g. sample size, choice of methods etc.) is suitable and appropriate to meet or address the specified
objectives or research issues of the study
c. use or choice of methods is well defined and justified
d. statistical analysis or package used is appropriate
e. methods used are properly and adequately referenced
7. Analysis and interpretation of results
Determine whether the:
a. results obtained are in agreement with the stated objectives of study
b. interpretation of the findings is logical or acceptable within the context of the issues of interest
c. analysis of the data using the chosen methodology has been properly specified
d. findings are discussed with appropriate references
8. Presentation
Determine whether:
a. the sequence of chapters, and sections in each chapter are able to facilitate the understanding of the research issues
b. tables, pictures and any other form of summarized information are properly labelled, numbered, and placed in the
appropriate sequence and section of the thesis
c. the same research data is presented in more than one form (e.g. both table and figure)
d. figures especially photographs are clearly reproduced
9. References/bibliography
Determine:
a. the extensiveness of the bibliography/reference list
b. whether current references are included
c. whether any reference cited in the text is missing or wrongly cited, and
d. whether the format used is consistent throughout the list
10. Accomplishments and/or merits
Indicate whether:
a. the author has clearly identified and discussed the contributions of the findings to the knowledge in the area, and the
applicability of the findings in addressing the research problems in the study
b. the stated objectives are achieved
c. there are any other accomplishments that merit a mention
11. Demerits
Indicate whether:
a. the main weaknesses of the research and their impacts on the findings are properly addressed by the author
b. there are any other demerits (example: contents, language, relevance, etc.)
12. Recommendation
Conclude the evaluation of the thesis by stating your professional opinion on the overall acceptability of the thesis (after
taking into account all the above considerations), whether it is worthy of the degree pursued or otherwise. The outcome of the
examination should be reported as one of the following:
(a) Accepted
(i) with distinction, when all or most of the research findings have either been published or accepted for publication in
citation-indexed journals and the thesis requires minimal improvement in spelling, grammar and syntax only;
(ii) with some corrections in spelling, grammar and syntax.
(b) Accepted with minor modifications
A thesis is accepted with minor modifications if it requires any of the following: reformatting of chapters, improvement in the
declaration of research objectives or statements, insertion of missing references, amendment of inaccurately cited references,
or minimal improvement in spelling, grammar, syntax, and presentation.
(c) Accepted with major modifications
A thesis is accepted with major modifications if it requires any of the following but not additional experimental work or data
collection: major revision of the literature, major improvement in the description of the methodology, statistical analysis of the
research data, re-presentation of written data in the form of figures or tables, and improvement in the discussion of results.
The examiner may recommend the candidate seek the assistance of an editorial service if errors in grammar and syntax are
extensive.
(d) Re-submission of thesis
The thesis should be recommended for resubmission if it does not meet the scope of the degree for which it is intended,
the objectives of the research are not met, or there are obvious flaws in the research design or methodology, and it
therefore requires additional experiments or data collection.
(e) Fail
A ‘Fail’ status is given if the thesis does not achieve the level of the degree for which it is intended.
Appendix 1. Guidelines for determining the title of thesis by the examination committee
When preparing the report for a thesis being examined, the examiner is required to determine whether the title of the
thesis is grammatically correct and reflective of the study undertaken (as stated in the ‘Guidelines for Preparation of Thesis
Examination Report’and the ‘Final Examination Report Form’). In addition to the two, the examiners, being also members of
the Examination Committee, should consider the following guidelines when deciding on the most appropriate title for the
thesis.
1. Ensure that important keywords are found in both the title and abstract of the thesis.
2. For titles in Bahasa Melayu, ensure that the terms used are those found in the ‘Dewan Kamus’ or ‘Istilah Bahasa Melayu’ for
the relevant fields of study.
3. Do not allow the use of abbreviations (e.g. AMN etc.) and/or acronyms (e.g. UNITAR) unless they are universally accepted in
the field of study, e.g. DNA, ESL, PCR. Use, instead, the full terminology.
4. Do not allow the use of a colon (:) or dash (–), e.g. ‘Bacillus subtilis amylase: Purification and Characterisation’ or ‘Bacillus
subtilis amylase – Purification and Characterisation’. The title may be replaced with ‘Purification and Characterisation of Bacillus
subtilis amylase’.
5. Ensure that when both the common and scientific names of an organism (where applicable) are mentioned, the common
name is stated first followed by the scientific name (including variety if known) in parenthesis.
6. Where possible, do not allow the title to begin with ‘The…’, e.g. use ‘Effect of…’ instead of ‘The Effects of…’.
7. Do not allow the use of phrases such as ‘A study of…’ or ‘Studies on…’.
Appendix B. Excerpt from “The PhD examination process” from University NZ (New Zealand)
Written Reports from Examiners
Each of the examiners is requested to furnish a written report on the thesis together with an assessment on the following
five-point scale:
a) Accept, or accept with minor editorial corrections
(the corrections required are minor and can be completed in a short period of time, normally not longer than a few weeks.
The Convener of Examiners will check that the corrections have been made satisfactorily)
b) Accept after amendments have been made to the satisfaction of the Convener of Examiners in consultation with the internal
examiner
(the amendments required can be completed within a few months, normally not longer than two or three months. The
amendments will be made to the satisfaction of the Convener of Examiners in consultation with the internal examiner)
c) Revise and resubmit for examination
(the thesis is not of the required PhD standard and requires substantial revision involving up to six months of work or
possibly a little longer. The revised thesis will be resubmitted formally to all three examiners for a repeat examination)
d) Reject and refer to the appropriate authority within the University for consideration of the award of another degree
(the thesis is not of the required PhD standard and there is no likelihood that revisions will bring it up to that standard.
However, the thesis may meet the standards required of an alternative degree, possibly a Master’s)
e) Reject with no right of resubmission
(the thesis is not of the required PhD standard and there is no likelihood that revisions will bring it up to that standard, nor
does the thesis meet the standards required of an alternative degree).
The examiners are asked to comment on the thesis with reference to the description of the degree (see “Introduction”
above).
Examiners are requested to respond to the following questions:
Does the thesis comprise a coherent investigation of the chosen topic?
Does the thesis deal with a topic of sufficient range and depth to meet the requirements of the degree?
Does the thesis make an original contribution to knowledge in its field and contain material suitable for publication in an
appropriate academic journal?
Does the thesis meet internationally recognised standards for the conduct and presentation of research in the field?
Does the thesis demonstrate both a thorough knowledge of the literature relevant to its subject and general field and the
candidate’s ability to exercise critical and analytical judgement of that literature?
Does the thesis display mastery of appropriate methodology and/or theoretical material?
The reports should also contain specific comments on those parts of the thesis that the examiners believe to require
correction or amendment.
The examiners form their own independent assessments of the thesis without discussion amongst themselves or with the
candidate. Should discussion be necessary amongst the examiners, it will be co-ordinated by the Convener.
The examiners send the reports directly to the Doctoral Office. From there, they are forwarded to the Convener of
Examiners. The examiners normally retain their copies of the thesis.
References
Bloom, B. S., Hastings, J. T., & Madaus, G. F. (Eds.). (1971). Handbook on the formative and summative evaluation of student learning. New York: McGraw Hill.
Bourke, S., Hattie, J., & Anderson, J. (2004). Predicting examiner recommendations on Ph.D theses. International Journal of Educational Research, 41, 178–194.
Butler, R. (1988). Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and performance.
British Journal of Educational Psychology, 58(1), 1–14.
Dunn, L., Morgan, C., O’Reilly, M., & Parry, S. (2004). The student assessment handbook: New directions in traditional and online assessment. New York:
Routledge.
Evans, T. (concept), assisted by Tregenza, K. (2002). Discussions on examining theses [CD]. Deakin University: Learning Services.
Gibbs, G., & Simpson, C. (2004–05). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3–29.
Retrieved on 12 June 2010 from http://resources.glos.ac.uk/tli/lets/journals/lathe/issue1/index.cfm.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Higgins, R., Hartley, P., & Skelton, A. (2001). Getting the message across: the problem of communicating assessment feedback. Teaching in Higher Education,
6(2), 269–274.
Hyland, K. (2005). Representing readers in writing: student and expert practices. Linguistics and Education, 16, 363–377.
Irons, A. (2008). Enhancing learning through formative assessment and feedback. London and New York: Routledge.
Joyner, R. W. (2003). The selection of external examiners for research degrees. Quality Assurance in Education, 11(2), 123–127.
Kiley, M. (2009). ‘You don’t want a smart Alec’: Selecting examiners to assess doctoral dissertations. Studies in Higher Education, 34(8), 889–903.
Kumar, V., & Stracke, E. (2007). An analysis of written feedback on a PhD thesis. Teaching in Higher Education, 12(4), 461–470.
Mullins, G., & Kiley, M. (2002). ‘It’s a PhD, not a Nobel Prize’: how experienced examiners assess research theses. Studies in Higher Education, 27(4), 369–386.
Parr, J. M., & Timperley, H. S. (2010). Feedback to writing, assessment for teaching and learning and student progress. Assessing Writing, 15(2), 68–85.
Ramaprasad, A. (1983). On the definition of feedback. Behavioural Science, 28(1), 4–13.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.
Snyder, B. R. (1971). The hidden curriculum. New York: Alfred A. Knopf.
Stracke, E. (2010). Undertaking the journey together: peer learning for a successful and enjoyable PhD experience. Journal of University Teaching & Learning
Practice, 7(1). Available at: http://ro.uow.edu.au/jutlp/vol7/iss1/8.
Stracke, E., & Kumar, V. (2010). Feedback and self-regulated learning: insights from supervisors’ and PhD examiners’ reports. Reflective Practice, 11(1), 19–32.
Tinkler, P., & Jackson, C. (2000). Examining the doctorate: institutional policy and the PhD examination process in Britain. Studies in Higher Education, 25(2),
167–180.
Vygotsky, L. (1978). In M. Cole, V. John-Steiner, S. Scribner, & E. Souberman (Eds.), Mind in society: The development of higher psychological processes.
Cambridge, MA: Harvard University Press.