Assessment & Evaluation in Higher Education
Vol. 00, No. 0, Month 2010, 1–12
ISSN 0260-2938 print/ISSN 1469-297X online
© 2010 Taylor & Francis
DOI: 10.1080/02602938.2010.515012
http://www.informaworld.com
Oral versus written assessments: a test of student performance
and attitudes
Mark Huxham*, Fiona Campbell and Jenny Westwood
School of Life Sciences, Edinburgh Napier University, Edinburgh EH10 5DT, UK
Student performance in and attitudes towards oral and written assessments were
compared using quantitative and qualitative methods. Two separate cohorts of
students were examined. The first, larger cohort of students (n = 99) was
randomly divided into ‘oral’ and ‘written’ groups, and the marks that they
achieved in the same biology questions were compared. Students in the second
smaller cohort (n = 29) were all examined using both written and oral questions
concerning both ‘scientific’ and ‘personal development’ topics. Both cohorts
showed highly significant differences in the mean marks achieved, with better
performance in the oral assessment. There was no evidence of particular groups
of students being disadvantaged in the oral tests. These students and also an
additional cohort were asked about their attitudes to the two different
assessment approaches. Although they tended to be more nervous in the face of
oral assessments, many students thought oral assessments were more useful than
written assessments. An important theme involved the perceived authenticity or
‘professionalism’ of an oral examination. This study suggests that oral
assessments may be more inclusive than written ones and that they can act as
powerful tools in helping students establish a ‘professional identity’.
Keywords: oral assessment; authenticity; identity; performance; inclusive
Introduction
The oral examination (or viva voce), in which the candidate gives spoken responses to
questions from one or more examiners, is perhaps the oldest form of assessment; it has
certainly been traditional practice in some areas of academic life, such as the Ph.D.
viva and clinical examination, for decades if not centuries. But despite this antiquity,
it is now rare or absent in many undergraduate courses. For example, Hounsell et al.
(2007) reviewed the recent UK literature on ‘innovative assessment’. Of 317 papers
considered, only 31 dealt with ‘non-written assessments’, and within this category
only 13% addressed the use of oral examinations; oral group presentations were by far
the most commonly cited non-written assessment, at 50% of the total sample.
The apparent rarity of the oral examination is surprising given its many possible
advantages. Five suggested key benefits are: first, the development of oral communi-
cation skills. These are seen as essential for graduates, which means these skills must
be explicitly taught and assessed (Wisker 2004). Second, oral examinations are more
authentic than most types of assessment (Joughin 1998). Virtually all graduates will
attend job interviews, and will have to defend their ideas and work in verbal
*Corresponding author. Email: m.huxham@napier.ac.uk
exchanges, whilst most will never sit another written examination after they graduate.
Third, oral assessment may be more inclusive. For example, Waterfield and West
(2006) report the views of 229 students with disabilities on different types of assess-
ment. Written exams were the least preferred type, whilst oral examinations consis-
tently came near the top; students with dyslexia were particularly likely to favour oral
assessments. Fourth, oral examinations are powerful ways to gauge understanding and
encourage critical thinking (Gent, Johnston, and Prosser 1999). Because of the possi-
bility of discourse and genuine exchange, oral examinations can allow a focus on deep
understanding and critique, rather than on the superficial regurgitation often found in
written examinations. Fifth, oral examinations are resistant to plagiarism (Joughin
1998); students must explain their own understanding using their own words.
In addition to these advantages, there is a deeper dimension to oral assessment that
involves fundamental distinctions between oral and written communication. The
philosopher Frege emphasised the ambiguity and fluidity of language, and discussed
how the ability of spoken, as opposed to written, language to carry emotional charge
allowed it a flexibility and finesse not possible on the written page (Carter 2008). This
reflects a long-held position in philosophy, going back at least to Plato, that elevates
the spoken word above the ‘mere shadow’ that is the written (Joughin 1999). The idea
that speech reflects, and creates, the person more accurately and fully than writing has
been developed more recently by Barnett, who considers how students struggle in the
‘risky’ environment of higher education to find new ways of defining themselves:
‘speech is one way in which individuals help to form their own pedagogical identities.
It has an authenticity that writing cannot possess’ (Barnett 2007, 89). Related to these
ideas is the pervasive and important notion that higher education at its best consists of
dialogue and learning conversation. To adapt a phrase from psychoanalysis, teaching
is ‘an alchemy of discourse’ (Hayes 2009) from which new understandings can arise.
Hence there are fundamental reasons why higher education might value oral assess-
ments.
So why, despite these arguments, might oral examinations be rare? One obvious
reason could be the perception that they take a long time; individual interviews with
300 first years will generally be impossible (although it is worth considering the possi-
ble savings in time gained from not marking written work). But there is a more explicit
concern about reliability and bias. For example, Wakeford (2000) advises: ‘The new
practitioner in higher education is counselled to beware of and avoid orals’, since they
may be open to bias; clearly, for example, anonymous assessment will be impossible
and producing evidence for external examiners is more difficult. There is a concern
too that oral examinations are very stressful, and might unfairly favour the extravert
and confident student (Wisker 2004). They are often seen as an ‘alternative approach’
which might be valid for a minority of disabled students but which should not apply
to the majority (Waterfield and West 2006). In addition, oral examinations may be
seen as suitable for assessing more emotive or personal issues, such as the ability to
reflect, but as not appropriate for abstract reasoning: ‘only an exceptional person
would prefer to be judged on the basis of a spoken rather than written performance
when the assessment relates to complex abstract ideas’ (Lloyd et al. 1984, 586).
Hence despite the strong arguments in favour of oral examinations, tutors might
legitimately fear using them given pressures on time, warnings that they may not
reach transparent standards of reliability and may be biased against some students
and feelings that they are only for ‘special’ groups. There is currently little in the
literature that might help a balanced assessment of the strengths and weaknesses of
oral versus written assessments (but see Joughin 2007). For example, there are to our
knowledge no explicit tests of performance in the same examination administered
orally and in writing to higher education students. The main aim of the current work
is to help fill this gap by performing such a test. In addition, we considered the
following questions: (1) Do the results in oral and written examinations differ
between different types of questions (in particular, between abstract ‘scientific’ ques-
tions and those requiring reflection on personal skills)? (2) Do students find oral
assessments more stressful than written assessments? (3) What do students feel are
the strengths and weaknesses of oral versus written assessments?
Methods
Student groups
Three groups of students were involved as participants in this research. The largest
group was a first-year (Level 7) cohort of 99 biology students taking an introductory
module in evolutionary biology, 28% of whom were male and who ranged in age from
17 to 45 (with a majority in the 17–20-year age group). The second group included 29
third-year (Level 9) students taking a field methods module, with eight males, ranging
in age from 18 to 42. The third group included 18 third-year students, seven of whom
were males and ranging in age from 19 to 29, who studied the same field methods
module the previous year.
Randomised test
In October 2007 the first-year students were randomly allocated to either a ‘written’
or an ‘oral’ group. Students were told of their allocation four weeks before the
assessment, which was a small formative test designed to encourage review and
revision of module material before major summative assessments. After explaining
the purpose of the division into two groups, students were told that they could
request a change of group if they wished. The test involved seven short-answer
questions that were taken from a list of ‘revision points’ that students had already
seen after lectures. Questions dealt with evolution and ecology and were intended to
test for understanding rather than recall; for example, question two was: ‘What
explanation can you give for the fact that most wild plants have even, as opposed to
odd, numbers of chromosomes?’, whilst question three asked: ‘Birds and bats share
the analogous similarity of wings. What is meant by this phrase, and what has
caused the similarity?’. Students allocated to the ‘written’ group were given 30
minutes to answer the questions under standard, silent examination conditions.
Students allocated to the ‘oral’ group had a maximum of 15 minutes in a one-to-one
oral examination. The additional time allowed for the written test was to compensate
for the relative slowness of writing compared with talking; experience in previous
years had shown that the time allocated was more than sufficient for full answers in
both formats. A team of 10 volunteer interviewers was involved. All the candidates
came to a single room before their designated interview slot, and they were accom-
panied from there to the interview room to prevent any opportunity of speaking with
previous candidates before the test. Interviewers followed a standard interview
protocol; questions were read out and were repeated if the candidate asked. Inter-
viewers were also permitted to clarify questions if asked, but only by re-phrasing
rather than by interpreting the question – appropriate clarification was discussed
between interviewers during training sessions beforehand. Interviewers also
endeavoured to generate a friendly and relaxed atmosphere.
Questions in the written and oral tests were marked on a scale of 0 (no answer or
completely wrong), 1 (partially correct) or 2 (correct and including all key points);
hence the maximum score was 14. Interviewers had standard marking sheets and
had discussed all the questions together before the interviews; they made short rele-
vant notes during the interview and then produced a final mark immediately after-
wards, before the next candidate arrived. Written questions were double-blind
marked. At the end of the written test and of each interview, all students were asked
to complete a very simple questionnaire with the single question ‘how nervous were
you about taking this test?’(answers from 0 ‘not at all nervous’ through 4 ‘very
nervous’).
Mean scores were compared between ‘written’ and ‘oral’ groups using a t-test
(after testing for normality and heteroscedasticity). The distributions of responses to
the ‘nerves’ questionnaire were compared using a chi-squared test.
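The analysis just described can be sketched in a few lines of Python with SciPy. The score and response arrays below are illustrative placeholders, not the study's data, and the assumption checks shown (Shapiro–Wilk for normality, Levene for equality of variances) are one reasonable reading of "testing for normality and heteroscedasticity":

```python
# Sketch of the statistical comparisons described above.
# All data values are hypothetical, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
oral = rng.normal(8.2, 2.7, 45)     # hypothetical oral-group scores (max 14)
written = rng.normal(6.2, 2.7, 46)  # hypothetical written-group scores

# Assumption checks before the t-test: normality and equality of variances.
print(stats.shapiro(oral).pvalue, stats.shapiro(written).pvalue)
print(stats.levene(oral, written).pvalue)

# Two-sample t-test comparing mean scores between the groups.
t_res = stats.ttest_ind(oral, written)
print(t_res.statistic, t_res.pvalue)

# Chi-squared test on the distributions of 'nerves' responses:
# counts per response category for each group (made-up numbers).
nerves = np.array([[5, 12, 15, 13],   # oral group, categories 1..4
                   [11, 16, 12, 7]])  # written group
chi2, p, dof, _ = stats.chi2_contingency(nerves)
print(chi2, dof, p)
```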
Paired test
An oral examination with four questions – two ‘scientific analysis’ questions on a
field report submitted by the candidate and two ‘personal and professional develop-
ment’ questions asking for reflection on, for example, communication and group work
skills developed and used during the fieldwork – is the most important assessment
component in the ‘applied terrestrial ecology’ module taken by the third-year cohort.
Questions are specific to each candidate and are developed based on each individual’s
report and field performance. The usual test was modified in 2008 by the addition of
a written element, involving two additional questions (one ‘scientific analysis’ and
one ‘personal and professional development’). Questions were first devised for each
candidate and then selected at random for the oral or written component. All candi-
dates were taken initially to an examination room where they had eight minutes to
complete the written questions, before being led to the interview room for a 15-minute
oral examination.
In these interviews, the assessor quickly promoted a positive and friendly
environment for each student by providing a warm welcome, establishing a rapport
through use of their first names, clarifying what was to happen in the oral assessment
and thanking them for their report. The questions asked had a clear context (e.g. they
referred to a specific figure or table in the student’s field report) and where students
did not fully answer questions, they were asked another supplementary – although not
leading – question (e.g. if a student was asked ‘why did you choose to use an ANOVA
test for the data in Table 2?’ a supplementary question might be ‘under what general
circumstances do you use ANOVA?’).
Questions were marked on a seven-point scale from 0 to 6 (0 = no response, 3 = bare pass,
showing a very basic understanding but no knowledge of the broader context or
evidence of wider reading and synthesis of knowledge from elsewhere, 6 = excellent
answer, showing clear understanding and an ability to place the answer in a broad
context of relevant literature or experience); one-third of the oral examinations were
double marked by two interviewers, and all written questions were double marked.
Mean scores (out of the total of four questions in the oral and two in the written tests)
were compared, paired within candidates, using a paired t-test. Marks were also subdi-
vided into those for ‘scientific analysis’ and ‘personal and professional development’
questions, and mean marks achieved in the oral and written tests for these were
compared using paired tests.
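The within-candidate comparison can be sketched as follows; the mark arrays are invented for illustration (one mean oral mark and one mean written mark per candidate, on the 0–6 scale), not the study's data:

```python
# Sketch of the paired comparison: each candidate contributes one mean
# oral mark and one mean written mark. Values are illustrative only.
import numpy as np
from scipy import stats

oral_means = np.array([5.5, 5.0, 5.8, 4.9, 5.6, 5.2, 5.7, 5.1])
written_means = np.array([4.8, 4.2, 5.0, 4.5, 4.9, 4.4, 5.1, 4.6])

# Paired t-test: the two marks are paired within each candidate,
# so the test is applied to the within-candidate differences.
res = stats.ttest_rel(oral_means, written_means)
print(res.statistic, res.pvalue)
```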
Qualitative evaluation
Three different sets of qualitative data were collected. First, students from the first-
year cohort were invited to participate in a focus group to discuss their experiences.
The discussion was facilitated by a member of staff from outside the programme team
and students were advised that staff involved in the module would not be present.
Equal numbers of students who had experienced the written and oral tests participated,
and different student groups were invited, including home and international students,
school-leavers, mature students, and males and females. The discussion was recorded and
students participating gave permission for their contributions to be used on the basis
that their input would be anonymous. To encourage participation, the invitation made
clear that their input was valued; they were also offered a sandwich lunch.
Qualitative feedback was collected from the third-year cohort in 2007, who took
an oral assessment identical to that described for the 2008 cohort but without the
addition of the written component. This was the first time these students had experienced this
kind of viva voce test at university. After the tests had been marked and feedback had
been provided, students were asked by email to respond to the following statement:
Please describe how you felt the interview went. In particular, how did you perform
compared to a more conventional assessment (such as a written exam)? What do you
think the advantages and disadvantages of being assessed by interview are, and what
lessons can you learn from the experience?
Students in the third-year cohort in 2008 were also invited to participate in a focus
group to discuss their experiences of the viva voce. The focus group ran on the same
basis as that described for first-year students above.
Recordings from the focus groups were transcribed, and thematic analysis was
used on these transcripts and on the email texts to identify key themes and illustrative
quotes.
Results
Randomised test
Four students requested transfers from the group to which they had randomly been
assigned; two non-native English speakers asked to be moved from the oral to the
written examination. Two students also asked to transfer from the written to the oral
group; one on the grounds of dyslexia and one for undisclosed reasons.
A total of 91 students took the assessments (45 sat the oral examination and 46 the
written one). The mean scores achieved in the oral and written tests were 8.17 and
6.24 respectively, a highly significant difference (two-sample t-test: t-value = 3.46, df
= 89, P-value = 0.001; Figure 1). Separating by gender, females showed a highly
significant difference, with orally assessed students scoring 2.03 marks higher on
average. Males showed a similar trend, with orally assessed students doing better by
1.50 marks on average; however, this was not significant (two-sample t-test: t-value =
1.39, df = 26, P-value = 0.176). There was no significant difference in the marks given by the two
independent markers to the written test results.
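As a quick check of ours (not part of the paper), the reported two-tailed P-value follows from the quoted statistic and degrees of freedom:

```python
# Re-deriving the reported two-tailed P-value from t = 3.46 with
# df = 45 + 46 - 2 = 89 (equal-variance two-sample t-test).
from scipy import stats

p_two_tailed = 2 * stats.t.sf(3.46, df=89)
print(round(p_two_tailed, 3))  # ≈ 0.001, matching the reported value
```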
Figure 1. Boxplots (showing medians, central line, interquartile range, box margins and outliers) of data obtained from the first-year students’ results in oral (n = 45) and written (n = 46) tests.
The distributions of scores recorded in the ‘nerves’ questionnaire are shown in
Figure 2. There was a tendency for students to record higher scores (i.e. a greater
degree of nervousness) in the oral group, although this was not quite a significant
difference (chi-squared test: chi-Sq = 6.778, df = 3, P-value = 0.079).
Figure 2. Frequency distributions of self-reported ‘nervousness’ of first-year students who took the oral and written tests; 1 = ‘not at all nervous’, 4 = ‘very nervous’.
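The reported chi-squared P-value can likewise be recovered from the statistic and its degrees of freedom (again a check of ours, not part of the paper):

```python
# Survival function of the chi-squared distribution at the reported
# statistic: chi-sq = 6.778 with df = 3.
from scipy import stats

p = stats.chi2.sf(6.778, df=3)
print(round(p, 3))  # ≈ 0.079, as reported
```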
Paired test
Twenty-four students completed the oral and written tests. There was a highly signif-
icant difference between the marks scored by each student in the oral (mean = 5.4) and
written (mean = 4.6) components (paired t-test: t = 3.84, P = 0.001). The better perfor-
mance in the oral assessment was consistent between question types with the signifi-
cant differences remaining for both the subsamples of ‘scientific analysis’ and
‘personal and professional development’ questions.
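As a final check of ours, the paired-test P-value is consistent with the quoted statistic once the degrees of freedom (24 students, so df = 23) are filled in:

```python
# Two-tailed P for the paired test: t = 3.84 with df = 24 - 1 = 23
# (one difference score per student).
from scipy import stats

p = 2 * stats.t.sf(3.84, df=23)
print(round(p, 3))  # ≈ 0.001, matching the reported value
```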
Qualitative evaluation
Fifteen (out of a total of 18) third-year students responded to the email request for
feedback in 2007 (comments from this group are henceforth indicated by ‘3rd 2007’).
In common with similar work seeking to capture the student voice (Campbell et al.
2007), recruiting participants for the focus groups proved problematic and only three
first-year students (comments indicated by ‘1st 2008’) and four third-year students
(3rd 2008) attended their respective groups. However those who did attend contrib-
uted their views enthusiastically and perceptively.
An important theme in the student responses concerned anxiety; seven students in
the 2007 cohort mentioned feeling particularly nervous in the face of the interview,
and this was also raised in the focus groups:
I felt I did poorly in the oral exam, however I can honestly say that much of this was
down to nerves. I felt uncomfortable and was concentrating so hard on trying to sound
professional and not make mistakes. (3rd 2007)
You had to think [quickly] and then you are thinking you will be short on time and so
you panic. (3rd 2008)
However, two students in 2007 and two in the focus groups said they felt less nervous
than in written examinations. Students also identified interviews as challenging
because they required real understanding:
In comparison to a conventional exam I thought it was just as challenging, if not a little
more. To be able to cram for an exam and put it all down on a piece of paper is one thing,
but to be able to talk about a subject, clearly and concisely, you have to really understand
it, and I think that is the challenge in an interview. (3rd 2007)
You need to understand what you are saying, what you are trained to explain. (3rd 2008)
Despite the reported anxiety, 13 of the students stated explicitly that they preferred the
oral examination to a traditional written one, whilst only four stated that they would
have preferred a written test. Most of the students valued the opportunity to practise
interview skills and gain relevant experience:
I think having an assessed interview is a good idea. It give me an insight into what I’ll
inevitably have to deal with in the future, interview skills don’t come naturally so I think
the more practice we get the better equipped we’ll be for leaving university and applying
for jobs. (3rd 2007)
One student described preferring an interview because he was dyslexic.
An additional theme concerned how easy it was to express thoughts and opinions
in the two formats, with some students identifying oral communication as more ‘natural’:
I did keep thinking back, thinking ‘they are next door saying what they mean and I am
struggling to put down on paper’. (1st 2008)
I thought it was easier to explain yourself and explain what you are doing to a person
rather than trying to [write it down]. Its easy to get muddled up with your words and try
to explain something in writing. If you talk to someone in person it’s a lot more natural.
(3rd 2008)
Discussion
Students performed better in oral compared with written tests; this result was consis-
tent between year groups, between different types of questions and when using
paired and un-paired designs. There are a number of possible explanations for this
strong effect, including bias in the assessment procedures. The famous case of
‘Clever Hans’, the counting horse, illustrates the potential influence of unconscious
cues from the interviewers (Jackson 2005). Hans was able to ‘count’ by stamping its
hoof until its owner unwittingly signalled when to stop. Such effects may have
occurred in our study (although, of course, the current questions were much more
complex and less open to simple cues than counting). We agreed with standard
interview procedures which excluded explicit prompts and encouragement, but did
not curtail all normal social interaction. We were concerned to preserve the ‘ecologi-
cal integrity’ of the interviews and wanted to avoid the highly artificial circumstance
of interviewers simply speaking a question and then remaining silent, like disembod-
ied recorders. Instead the experience was designed to be much closer to an authentic
viva voce or job interview. The current study was therefore not designed as tightly
controlled psychological research, but rather as a comparison of oral and written
assessments under realistic educational settings. As such, the possible existence of
‘clever Hans effects’ can be regarded as an integral part of most oral assessments, in
the same way that the ability to write legibly and quickly is integral to most written
assessments. There were no a priori expectations that the oral performances would
be better; in fact, given the suggestions that oral assessments can lead to bias against
certain groups of students and can induce stress, a significantly worse performance
seemed equally likely.
The current work supports the evidence that oral assessments might induce
more anxiety than written ones. The quantitative comparison approached signifi-
cance (Figure 2), and anxiety was an important theme raised in the qualitative
responses. However, this is not necessarily negative; indeed, it may explain the
better average performance, with students preparing more thoroughly than for a
‘standard’ assessment. Interestingly, a majority of the third-year students who
chose to identify anxiety as a feature of the oral assessment nevertheless stated
that they preferred it to a written test. In his phenomenographic study of student
experiences of oral presentations, Joughin (2007) found that greater anxiety about
oral compared with written assessment was associated with a richer conception of
the oral task as requiring deeper understanding and the need to explain to others.
Thus anxiety was a product of deeper and more transformative learning. The
reported anxiety might also simply reflect the relative lack of experience in oral
compared with written assessments, which was a point made explicitly in the qual-
itative evaluation:
I think the oral is quite different from the writing and we should have some training
because we don’t have experience. (3rd 2008)
As with all types of assessment, it is likely that oral examinations will suit some learn-
ing styles and personalities better than others. It is not surprising that students with
dyslexia might favour oral assessments (Waterfield and West 2006). The current
research lends qualitative support to this idea, with two first-year students identifying
dyslexia as the reason why they chose to swap from the written to the oral group and
students raising the issue in the evaluation:
Before we actually did the [written] test I was a bit apprehensive as I have really bad
spelling so I do get quite conscious about that. (1st 2008)
I think I performed to a higher standard than in written tests. The reason for this I have
dyslexia, and dyspraxia, so reading and writing for me has always been harder than just
plain speak. (3rd 2007)
However, there is no support here for the notion that oral assessments should be
regarded as somehow marginal or suited only for ‘special’ groups of students.
Although sample sizes were not sufficiently large to allow multiple sub-divisions into
different social and demographic groups, there was no evidence that particular types
of students did worse at orals. Although the discrepancy in mean marks obtained in
oral compared with written tests was not as large for male as for female students, the
trend was the same and the lack of significance may have been a result of lower
sample sizes. Clearly it would be interesting to investigate possible gender differences
further, but our results do not suggest males would be disadvantaged by using oral
assessments.
Because oral language may generally carry a bigger ‘emotional charge’ than writ-
ten (Carter 2008), and of course is supplemented in most cases with a range of body
language that can transmit emotional messages, it may be true that oral assessment
will be better fitted to affective and reflective tasks. In contrast the enunciation of
complex abstract ideas might be easier in writing; a clear example would be mathe-
matics. These arguments might suggest the promotion of oral assessments specifically
for developing and measuring reflective skills, whilst abstract conceptual thinking
should be assessed using traditional written formats. However, the current work
showed no such distinction. The first-year cohorts were tested on theoretical, abstract
ideas such as ‘the argument from design’ and aspects of nitrogen cycling in ecosys-
tems, and yet, students performed better on these questions when responding orally.
The third-year students were assessed on questions divided into ‘scientific analysis’
and ‘personal and professional development’ categories, but a similar result of better
performance in the oral compared with written responses was found for both. Hence
there is no support here for the idea of restricting oral assessments to ‘special’ or
emotional categories of learning. The Third International Mathematics and Science
Study (TIMSS) programme tested thousands of children using the same standard
written tests in different countries to allow international comparisons. Schoultz, Säljö,
and Wyndhamn (2001) interviewed 25 secondary school children using two TIMSS
questions on physics and chemistry concepts. They found much better performance in
the oral tests than the average scores in the written tests for children of the relevant
age; their qualitative analyses showed that their subjects often understood the core
concepts being tested but failed to interpret the written questions correctly without
guidance. Hence the ability to re-phrase the question in an oral setting allowed a genu-
ine test of students’ conceptual understanding, and thus better performance. A similar
CAEH_A_515012.fm Page 9 Tuesday, August 10, 2010 7:46 PM
CE: VAG QA: SS
10 M. Huxham et al.
effect may explain some or all of the differences we found, and we endorse their
recommendation to challenge the often implicit assumption that ‘responding to
abstract questions in writing is the natural context in which knowledge appears’.
A long tradition in philosophy and discursive psychology views language as
constitutive rather than simply transmissive; people create key aspects of their reality
(particularly their social and subjective realities) through language and especially
through ‘speech acts’. This tradition is concerned with language as a form of social
action, which helps construct such attributes as ‘the self’ during conversation and
discourse (Horton-Salway 2007). This discursive approach, related to Barnett’s idea
of students creating ‘pedagogical identities’ through speech (Barnett 2007), can help
interpret an important theme in the experiences reported by the students concerning
the performative aspects of the viva. One reason students reported greater anxiety was
that they were ‘performing’ in a social space:
This experience has taught me that it is really important to prepare as much as possible
for an interview. There is a big difference between going over things in your head and
saying them out loud clearly and confidently. (3rd 2007)
There was a perception that the oral interview required a different approach from a
written test:
I think that an oral exam allows people to use grammar and words that they may not use
when writing. (3rd 2007)
With a lot of written assessments, I think, you just memorise the paragraph like a parrot
and not know what it means. But you can tell when someone is doing that when you
speak to them because they get that glazed look in their eyes as they recite it. (1st 2008)
This different approach was seen as being more ‘professional’:
I felt uncomfortable and was concentrating so hard on trying to sound professional and
not make mistakes. This is why I was reluctant to use the word ‘niche’, I thought 95%
that it was the correct word to use. (3rd 2007)
There is an impression here of students striving to create ‘professional’ and ‘confi-
dent’ personalities (Gent, Johnston, and Prosser 1999). Zadie Smith describes one
of her working-class characters using the words ‘modern’ and ‘science’: ‘as if
someone had lent him the words and made him swear not to break them’ (2000,
522); the oral assessments involved students using professional language without
‘breaking it’.
Written examinations do not seem to elicit the same feelings, perhaps because such
examinations are so strongly identified with the worlds of school and college, rather
than work, and perhaps because they are usually private and anonymous:
Because we have done [written assessments] since we have been in school, its normal
for us but once you leave school/education you will never need [to do them] again
whereas talking to somebody you will always use. (3rd 2008)
Whilst most academics recognise how assessments can drive student learning, they
may not appreciate how the mode of assessment – including the ‘social performance’
of the assessment – may shape students’ approaches and even identities.
Assessment & Evaluation in Higher Education 11
In her discussion of the power of the spoken word in the ancient world, Karen
Armstrong describes Socrates’ low opinion of the written text compared with the vivi-
fying effect of living dialogue: ‘Written words were like figures in a painting. They
seemed alive, but if you questioned them they remained “solemnly silent”. Without
the spirited interchange of a human encounter, the knowledge imparted by a written
text tended to become static’ (2009, 64). There is a sense of fluidity, of students
‘trying things out’ during the interchange of the oral assessment – this exploration
might be of identities but also of concepts such as ‘niche’. This stands in contrast to
the ‘static’ representation in written assessments, and is a powerful endorsement of the
use of oral assessments. The current work has found no evidence of disadvantage
accruing from oral assessments to particular groups of students, nor of the need to
restrict orals to particular types of questions. Rather our quantitative and qualitative
results suggest important benefits to students from their use. Our sample size was rela-
tively small and was restricted to biology students at a single institution; if our results
prove representative of broader groups of students, then they support attempts to
uphold and enhance the ‘spirited interchange’ of the oral as a form of assessment in
higher education.
References
Armstrong, K. 2009. The case for God: What religion really means. London: Bodley Head.
Barnett, R. 2007. A will to learn: Being a student in an age of uncertainty. Maidenhead:
McGraw-Hill/Open University Press.
Campbell, F., L. Beasley, J. Eland, and A. Rumpus. 2007. Final report of Hearing the student
voice project: Promoting and encouraging the effective use of the student voice to
enhance professional development in learning, teaching and assessment within higher
education. Edinburgh: Napier University. www.napier.ac.uk/studentvoices/download/
Final_report_studentvoice_web.pdf.
Carter, M. 2008. Frege's writings on language and the spoken word. http://western-philosophy.suite101.com/article.cfm/freges_writings_on_language_and_spoken_word#ixzz0HgAdX4Pl&D (accessed December 13, 2009).
Gent, I., B. Johnston, and P. Prosser. 1999. Thinking on your feet in undergraduate computer
science: A constructivist approach to developing and assessing critical thinking. Teaching
in Higher Education 4, no. 4: 511–22.
Hayes, J. 2009. Who is it that can tell me who I am? London: Constable.
Horton-Salway, M. (ed.). 2007. Social psychology: Critical perspectives on self and others.
Milton Keynes: Open University Press.
Hounsell, D., N. Falchikov, J. Hounsell, M. Klampfleitner, M. Huxham, K. Thompson, and S.
Blair. 2007. Innovative assessment across the disciplines: An analytical review of the
literature. York: Higher Education Academy.
Jackson, J. 2005. Clever Hans: A horse's tale. http://www.skeptics.org.uk/article.php?dir=articles&article=clever_hans.php (accessed January 15, 2010).
Joughin, G. 1998. Dimensions of oral assessment. Assessment & Evaluation in Higher Education
23: 367–78.
Joughin, G. 2007. Student conceptions of oral presentations. Studies in Higher Education 32,
no. 3: 323–36.
Lloyd, P., A. Mayes, A. Manstead, P. Meudell, and H. Wagner. 1984. Introduction to psychology:
An integrated approach. London: Fontana.
Schoultz, J., R. Säljö, and J. Wyndhamn. 2001. Conceptual knowledge in talk and text: What
does it take to understand a science question? Instructional Science 29: 213–36.
Smith, Z. 2000. White teeth. London: Penguin Books.
Wakeford, R. 2000. Principles of assessment. In Handbook for teaching and learning in
higher education, ed. H. Fry, S. Ketteridge, and S.A. Marshall, 42–61. London: Routledge.
Waterfield, J., and B. West. 2006. Inclusive assessment in higher education: A resource for
change. Plymouth: University of Plymouth. http://www.plymouth.ac.uk/pages/view.asp?
page=10494 (accessed January 15, 2010).
Wisker, G. 2004. Developing and assessing students’ oral skills. Birmingham: Staff Education
and Development Association.