Assessment & Evaluation in Higher Education
Vol. 00, No. 0, Month 2010, 1–12
ISSN 0260-2938 print/ISSN 1469-297X online
© 2010 Taylor & Francis
DOI: 10.1080/02602938.2010.515012
http://www.informaworld.com
Oral versus written assessments: a test of student performance
and attitudes
Mark Huxham*, Fiona Campbell and Jenny Westwood
School of Life Sciences, Edinburgh Napier University, Edinburgh EH10 5DT, UK
Student performance in and attitudes towards oral and written assessments were
compared using quantitative and qualitative methods. Two separate cohorts of
students were examined. The first, larger cohort of students (n = 99) was
randomly divided into ‘oral’ and ‘written’ groups, and the marks that they
achieved in the same biology questions were compared. Students in the second
smaller cohort (n = 29) were all examined using both written and oral questions
concerning both ‘scientific’ and ‘personal development’ topics. Both cohorts
showed highly significant differences in the mean marks achieved, with better
performance in the oral assessment. There was no evidence of particular groups
of students being disadvantaged in the oral tests. These students and also an
additional cohort were asked about their attitudes to the two different
assessment approaches. Although they tended to be more nervous in the face of
oral assessments, many students thought oral assessments were more useful than
written assessments. An important theme involved the perceived authenticity or
‘professionalism’ of an oral examination. This study suggests that oral
assessments may be more inclusive than written ones and that they can act as
powerful tools in helping students establish a ‘professional identity’.
Keywords: oral assessment; authenticity; identity; performance; inclusive
Introduction
The oral examination (or viva voce), in which the candidate gives spoken responses to
questions from one or more examiners, is perhaps the oldest form of assessment; it has
certainly been traditional practice in some areas of academic life, such as the Ph.D.
viva and clinical examination, for decades if not centuries. But despite this antiquity,
it is now rare or absent in many undergraduate courses. For example, Hounsell et al.
(2007) reviewed the recent UK literature on ‘innovative assessment’. Of 317 papers
considered, only 31 dealt with ‘non-written assessments’, and within this category
only 13% addressed the use of oral examinations; oral group presentations were by far
the most commonly cited non-written assessment, at 50% of the total sample.
The apparent rarity of the oral examination is surprising given its many possible
advantages. Five key benefits have been suggested. First, oral examinations develop oral
communication skills, which are seen as essential for graduates and so must
be explicitly taught and assessed (Wisker 2004). Second, oral examinations are more
authentic than most types of assessment (Joughin 1998). Virtually all graduates will
attend job interviews, and will have to defend their ideas and work in verbal
*Corresponding author. Email: m.huxham@napier.ac.uk
exchanges, whilst most will never sit another written examination after they graduate.
Third, oral assessment may be more inclusive. For example, Waterfield and West
(2006) report the views of 229 students with disabilities on different types of assess-
ment. Written exams were the least preferred type, whilst oral examinations consis-
tently came near the top; students with dyslexia were particularly likely to favour oral
assessments. Fourth, oral examinations are powerful ways to gauge understanding and
encourage critical thinking (Gent, Johnston, and Prosser 1999). Because of the possi-
bility of discourse and genuine exchange, oral examinations can allow a focus on deep
understanding and critique, rather than on the superficial regurgitation often found in
written examinations. Fifth, oral examinations are resistant to plagiarism (Joughin
1998); students must explain their own understanding using their own words.
In addition to these advantages, there is a deeper dimension to oral assessment that
involves fundamental distinctions between oral and written communication. The
philosopher Frege emphasised the ambiguity and fluidity of language, and discussed
how the ability of spoken, as opposed to written, language to carry emotional charge
allowed it a flexibility and finesse not possible on the written page (Carter 2008). This
reflects a long-held position in philosophy, going back at least to Plato, that elevates
the spoken word above the ‘mere shadow’ that is the written (Joughin 1999). The idea
that speech reflects, and creates, the person more accurately and fully than writing has
been developed more recently by Barnett, who considers how students struggle in the
‘risky’ environment of higher education to find new ways of defining themselves:
‘speech is one way in which individuals help to form their own pedagogical identities.
It has an authenticity that writing cannot possess’ (Barnett 2007, 89). Related to these
ideas is the pervasive and important notion that higher education at its best consists of
dialogue and learning conversation. To adapt a phrase from psychoanalysis, teaching
is ‘an alchemy of discourse’ (Hayes 2009) from which new understandings can arise.
Hence there are fundamental reasons why higher education might value oral assess-
ments.
So why, despite these arguments, might oral examinations be rare? One obvious
reason could be the perception that they take a long time; individual interviews with
300 first years will generally be impossible (although it is worth considering the possi-
ble savings in time gained from not marking written work). But there is a more explicit
concern about reliability and bias. For example, Wakeford (2000) advises: ‘The new
practitioner in higher education is counselled to beware of and avoid orals’, since they
may be open to bias; clearly, for example, anonymous assessment will be impossible
and producing evidence for external examiners is more difficult. There is a concern
too that oral examinations are very stressful, and might unfairly favour the extravert
and confident student (Wisker 2004). They are often seen as an ‘alternative approach’
which might be valid for a minority of disabled students but which should not apply
to the majority (Waterfield and West 2006). In addition, oral examinations may be
seen as suitable for assessing more emotive or personal issues, such as the ability to
reflect, but as not appropriate for abstract reasoning: ‘only an exceptional person
would prefer to be judged on the basis of a spoken rather than written performance
when the assessment relates to complex abstract ideas’ (Lloyd et al. 1984, 586).
Hence despite the strong arguments in favour of oral examinations, tutors might
legitimately fear using them given pressures on time, warnings that they may not
reach transparent standards of reliability and may be biased against some students
and feelings that they are only for ‘special’ groups. There is currently little in the
literature that might help a balanced assessment of the strengths and weaknesses of
oral versus written assessments (but see Joughin 2007). For example, there are to our
knowledge no explicit tests of performance in the same examination administered
orally and in writing to higher education students. The main aim of the current work
is to help fill this gap by performing such a test. In addition, we considered the
following questions: (1) Do the results in oral and written examinations differ
between different types of questions (in particular, between abstract ‘scientific’ ques-
tions and those requiring reflection on personal skills)? (2) Do students find oral
assessments more stressful than written assessments? (3) What do students feel are
the strengths and weaknesses of oral versus written assessments?
Methods
Student groups
Three groups of students were involved as participants in this research. The largest
group was a first-year (Level 7) cohort of 99 biology students taking an introductory
module in evolutionary biology, 28% of whom were male and who ranged in age from
17 to 45 (with a majority in the 17–20-year age group). The second group included 29
third-year (Level 9) students taking a field methods module, with eight males, ranging
in age from 18 to 42. The third group included 18 third-year students, seven of whom
were male, ranging in age from 19 to 29, who had studied the same field methods
module in the previous year.
Randomised test
In October 2007 the first-year students were randomly allocated to either a ‘written’
or an ‘oral’ group. Students were told of their allocation four weeks before the
assessment, which was a small formative test designed to encourage review and
revision of module material before major summative assessments. After explaining
the purpose of the division into two groups, students were told that they could
request a change of group if they wished. The test involved seven short-answer
questions that were taken from a list of ‘revision points’ that students had already
seen after lectures. Questions dealt with evolution and ecology and were intended to
test for understanding rather than recall; for example, question two was: ‘What
explanation can you give for the fact that most wild plants have even, as opposed to
odd, numbers of chromosomes?’, whilst question three asked: ‘Birds and bats share
the analogous similarity of wings. What is meant by this phrase, and what has
caused the similarity?’. Students allocated to the ‘written’ group were given 30
minutes to answer the questions under standard, silent examination conditions.
Students allocated to the ‘oral’ group had a maximum of 15 minutes in a one-to-one
oral examination. The additional time allowed for the written test was to compensate
for the relative slowness of writing compared with talking; experience in previous
years had shown that the time allocated was more than sufficient for full answers in
both formats. A team of 10 volunteer interviewers was involved. All the candidates
came to a single room before their designated interview slot, and they were accom-
panied from there to the interview room to prevent any opportunity of speaking with
previous candidates before the test. Interviewers followed a standard interview
protocol; questions were read out and were repeated if the candidate asked. Inter-
viewers were also permitted to clarify questions if asked, but only by re-phrasing
rather than by interpreting the question – appropriate clarification was discussed
between interviewers during training sessions beforehand. Interviewers also
endeavoured to generate a friendly and relaxed atmosphere.
Questions in the written and oral tests were marked on a scale of 0 (no answer or
completely wrong), 1 (partially correct) or 2 (correct and including all key points);
hence the maximum score was 14. Interviewers had standard marking sheets and
had discussed all the questions together before the interviews; they made short rele-
vant notes during the interview and then produced a final mark immediately after-
wards, before the next candidate arrived. Written questions were double-blind
marked. At the end of the written test and of each interview, all students were asked
to complete a very simple questionnaire with the single question ‘how nervous were
you about taking this test?’ (answers from 0 ‘not at all nervous’ through 4 ‘very
nervous’).
Mean scores were compared between ‘written’ and ‘oral’ groups using a t-test
(after testing for normality and heteroscedasticity). The distributions of responses to
the ‘nerves’ questionnaire were compared using a chi-squared test.
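To make this analysis concrete, the following is a minimal Python sketch of the two comparisons described above, assuming SciPy is available; the marks and rating counts are hypothetical illustrations, not the study data or the authors' own code.

```python
# Minimal sketch of the randomised-test analysis; all data are hypothetical.
from scipy import stats

# Hypothetical marks out of 14 for the randomly allocated groups.
oral_marks = [9, 8, 10, 7, 6, 11, 8, 9, 12, 7]
written_marks = [6, 7, 5, 8, 4, 7, 6, 5, 9, 6]

# Assumption checks: normality within each group, equality of variances.
print(stats.shapiro(oral_marks))
print(stats.shapiro(written_marks))
print(stats.levene(oral_marks, written_marks))

# Two-sample t-test comparing mean marks between the 'oral' and 'written' groups.
print(stats.ttest_ind(oral_marks, written_marks))

# Chi-squared test comparing the distributions of 'nerves' ratings between
# groups; rows = group, columns = counts for each rating category.
nerves_counts = [
    [3, 10, 15, 17],  # oral group (hypothetical counts per rating)
    [8, 16, 14, 8],   # written group (hypothetical counts per rating)
]
chi2, p, dof, expected = stats.chi2_contingency(nerves_counts)
print(chi2, dof, p)
```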
Paired test
An oral examination with four questions – two ‘scientific analysis’ questions on a
field report submitted by the candidate and two ‘personal and professional develop-
ment’ questions asking for reflection on, for example, communication and group work
skills developed and used during the fieldwork – is the most important assessment
component in the ‘applied terrestrial ecology’ module taken by the third-year cohort.
Questions are specific to each candidate and are developed based on each individual’s
report and field performance. The usual test was modified in 2008 by the addition of
a written element, involving two additional questions (one ‘scientific analysis’ and
one ‘personal and professional development’). Questions were first devised for each
candidate and then selected at random for the oral or written component. All candi-
dates were taken initially to an examination room where they had eight minutes to
complete the written questions, before being led to the interview room for a 15-minute
oral examination.
In these interviews, the assessor quickly promoted a positive and friendly environ-
ment for each student by providing a warm welcome, establishing a rapport
through use of their first names, clarifying what was to happen in the oral assessment
and thanking them for their report. The questions asked had a clear context (e.g. they
referred to a specific figure or table in the student’s field report) and where students
did not fully answer questions, they were asked another supplementary – although not
leading – question (e.g. if a student was asked ‘why did you choose to use an ANOVA
test for the data in Table 2?’ a supplementary question might be ‘under what general
circumstances do you use ANOVA?’).
Questions were marked on a seven-point scale (0 = no response, 3 = bare pass,
showing a very basic understanding but no knowledge of the broader context or
evidence of wider reading and synthesis of knowledge from elsewhere, 6 = excellent
answer, showing clear understanding and an ability to place the answer in a broad
context of relevant literature or experience); one-third of the oral examinations were
double marked by two interviewers, and all written questions were double marked.
Mean scores (out of the total of four questions in the oral and two in the written tests)
were compared, paired within candidates, using a paired t-test. Marks were also subdi-
vided into those for ‘scientific analysis’ and ‘personal and professional development’
questions, and mean marks achieved in the oral and written tests for these were
compared using paired tests.
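A corresponding sketch for the paired design is given below; again, the per-candidate mean marks (on the 0–6 scale) are hypothetical rather than the study data, and SciPy's paired t-test stands in for whatever software was actually used.

```python
# Minimal sketch of the paired comparison; all data are hypothetical.
from scipy import stats

# Mean mark per candidate: oral (average of four questions) paired with
# written (average of two questions) for the same candidate.
oral_means = [5.5, 5.0, 5.8, 4.9, 5.2, 5.6, 4.8, 5.1]
written_means = [4.8, 4.2, 5.0, 4.4, 4.5, 4.9, 4.1, 4.6]

# Paired t-test on the within-candidate differences.
print(stats.ttest_rel(oral_means, written_means))

# The same call can be repeated on the 'scientific analysis' and
# 'personal and professional development' question subsets.
```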
Qualitative evaluation
Three different sets of qualitative data were collected. First, students from the first-
year cohort were invited to participate in a focus group to discuss their experiences.
The discussion was facilitated by a member of staff from outside the programme team
and students were advised that staff involved in the module would not be present.
Equal numbers of students who had experienced the written and oral tests participated,
and different student groups were invited, including home and international students,
school-leavers, mature students, and males and females. The discussion was recorded and
students participating gave permission for their contributions to be used on the basis
that their input would be anonymous. To encourage participation, the invitation made
clear that their input was valued; they were also offered a sandwich lunch.
Qualitative feedback was collected from the third-year cohort in 2007, who took
an oral assessment identical to that described for the 2008 cohort but without the addi-
tion of the written component. This was the first time these students had experienced this
kind of viva voce test at university. After the tests had been marked and feedback had
been provided, students were asked by email to respond to the following statement:
Please describe how you felt the interview went. In particular, how did you perform
compared to a more conventional assessment (such as a written exam)? What do you
think the advantages and disadvantages of being assessed by interview are, and what
lessons can you learn from the experience?
Students in the third-year cohort in 2008 were also invited to participate in a focus
group to discuss their experiences of the viva voce. The focus group ran on the same
basis as that described for first-year students above.
Recordings from the focus groups were transcribed, and thematic analysis was
used on these transcripts and on the email texts to identify key themes and illustrative
quotes.
Results
Randomised test
Four students requested transfers from the group to which they had randomly been
assigned; two non-native English speakers asked to be moved from the oral to the
written examination. Two students also asked to transfer from the written to the oral
group; one on the grounds of dyslexia and one for undisclosed reasons.
A total of 91 students took the assessments (45 sat the oral examination and 46 the
written one). The mean scores achieved in the oral and written tests were 8.17 and
6.24 respectively, a highly significant difference (two-sample t-test: t-value = 3.46, df
= 89, P-value = 0.001; Figure 1). Separating by gender gave a highly significant
difference among females, with a difference of 2.03 between mean scores. Males
showed a similar trend, with orally assessed students doing better by 1.50 marks on
average; however, this was not significant (two-sample t-test: t-value = 1.39, df = 26,
P-value = 0.176). There was no significant difference in the marks given by the two
independent markers to the written test results.
Figure 1. Boxplots (showing medians, central line, interquartile range, box margins and outliers) of data obtained from the first-year students’ results in oral (n = 45) and written (n = 46) tests.
The distributions of scores recorded in the ‘nerves’ questionnaire are shown in
Figure 2. There was a tendency for students to record higher scores (i.e. a greater
degree of nervousness) in the oral group, although this was not quite a significant
difference (chi-squared test: chi-Sq = 6.778, df = 3, P-value = 0.079).
Figure 2. Frequency distributions of self-reported ‘nervousness’ of first-year students who took the oral and written tests; 1 = ‘not at all nervous’, 4 = ‘very nervous’.
Paired test
Twenty-four students completed the oral and written tests. There was a highly signif-
icant difference between the marks scored by each student in the oral (mean = 5.4) and
written (mean = 4.6) components (paired t-test: t = 3.84, P = 0.001). The better perfor-
mance in the oral assessment was consistent between question types, with the signifi-
cant differences remaining for both the subsamples of ‘scientific analysis’ and
‘personal and professional development’ questions.
Qualitative evaluation
Fifteen (out of a total of 18) third-year students responded to the email request for
feedback in 2007 (comments from this group are henceforth indicated by ‘3rd 2007’).
In common with similar work seeking to capture the student voice (Campbell et al.
2007), recruiting participants for the focus groups proved problematic and only three
first-year students (comments indicated by ‘1st 2008’) and four third-year students
(3rd 2008) attended their respective groups. However, those who did attend contrib-
uted their views enthusiastically and perceptively.
An important theme in the student responses concerned anxiety; seven students in
the 2007 cohort mentioned feeling particularly nervous in the face of the interview,
and this was also raised in the focus groups:
I felt I did poorly in the oral exam, however I can honestly say that much of this was
down to nerves. I felt uncomfortable and was concentrating so hard on trying to sound
professional and not make mistakes. (3rd 2007)
You had to think [quickly] and then you are thinking you will be short on time and so
you panic. (3rd 2008)
However, two students in 2007 and two in the focus groups said they felt less nervous
than in written examinations. Students also identified interviews as challenging
because they required real understanding:
In comparison to a conventional exam I thought it was just as challenging, if not a little
more. To be able to cram for an exam and put it all down on a piece of paper is one thing,
but to be able to talk about a subject, clearly and concisely, you have to really understand
it, and I think that is the challenge in an interview. (3rd 2007)
You need to understand what you are saying, what you are trained to explain. (3rd 2008)
Despite the reported anxiety, 13 of the students stated explicitly that they preferred the
oral examination to a traditional written one, whilst only four stated that they would
have preferred a written test. Most of the students valued the opportunity to practise
interview skills and gain relevant experience:
I think having an assessed interview is a good idea. It give me an insight into what I’ll
inevitably have to deal with in the future, interview skills don’t come naturally so I think
the more practice we get the better equipped we’ll be for leaving university and applying
for jobs. (3rd 2007)
One student described preferring an interview because he was dyslexic.
An additional theme concerned how easy it was to express thoughts and opinions
in the two formats, with some students identifying oral communication as more ‘natural’:
I did keep thinking back, thinking ‘they are next door saying what they mean and I am
struggling to put down on paper’. (1st 2008)
I thought it was easier to explain yourself and explain what you are doing to a person
rather than trying to [write it down]. Its easy to get muddled up with your words and try
to explain something in writing. If you talk to someone in person it’s a lot more natural.
(3rd 2008)
Discussion
Students performed better in oral compared with written tests; this result was consis-
tent between year groups, between different types of questions and when using
paired and un-paired designs. There are a number of possible explanations for this
strong effect, including bias in the assessment procedures. The famous case of
‘clever’ Hans, ‘the counting horse’, illustrates the potential influence of unconscious
cues from the interviewers (Jackson 2005). Hans was able to ‘count’ by stamping its
hoof until its owner unwittingly signalled when to stop. Such effects may have
occurred in our study (although, of course, the current questions were much more
complex and less open to simple cues than counting). We agreed on standard
interview procedures, which excluded explicit prompts and encouragement but did
not curtail all normal social interaction. We were concerned to preserve the ‘ecologi-
cal integrity’ of the interviews and wanted to avoid the highly artificial circumstance
of interviewers simply speaking a question and then remaining silent, like disembod-
ied recorders. Instead the experience was designed to be much closer to an authentic
viva voce or job interview. The current study was therefore not designed as tightly
controlled psychological research, but rather as a comparison of oral and written
assessments under realistic educational settings. As such, the possible existence of
‘clever Hans effects’ can be regarded as an integral part of most oral assessments, in
the same way that the ability to write legibly and quickly is integral to most written
assessments. There were no a priori expectations that the oral performances would
be better; in fact, given the suggestions that oral assessments can lead to bias against
certain groups of students and can induce stress, a significantly worse performance
seemed equally likely.
The current work supports the evidence that oral assessments might induce
more anxiety than written ones. The quantitative comparison approached signifi-
cance (Figure 2), and anxiety was an important theme raised in the qualitative
responses. However, this is not necessarily negative; indeed, it may explain the
better average performance, with students preparing more thoroughly than for a
‘standard’ assessment. Interestingly, a majority of the third-year students who
identified anxiety as a feature of the oral assessment nevertheless stated
that they preferred it to a written test. In his phenomenographic study of student
experiences of oral presentations, Joughin (2007) found that greater anxiety about
oral compared with written assessment was associated with a richer conception of
the oral task as requiring deeper understanding and the need to explain to others.
Thus anxiety was a product of deeper and more transformative learning. The
reported anxiety might also simply reflect the relative lack of experience in oral
compared with written assessments, which was a point made explicitly in the qual-
itative evaluation:
I think the oral is quite different from the writing and we should have some training
because we don’t have experience. (3rd 2008)
As with all types of assessment, it is likely that oral examinations will suit some learn-
ing styles and personalities better than others. It is not surprising that students with
dyslexia might favour oral assessments (Waterfield and West 2006). The current
research lends qualitative support to this idea, with two first-year students identifying
dyslexia as the reason why they chose to swap from the written to the oral group and
students raising the issue in the evaluation:
Before we actually did the [written] test I was a bit apprehensive as I have really bad
spelling so I do get quite conscious about that. (1st 2008)
I think I performed to a higher standard than in written tests. The reason for this I have
dyslexia, and dyspraxia, so reading and writing for me has always been harder than just
plain speak. (3rd 2007)
However, there is no support here for the notion that oral assessments should be
regarded as somehow marginal or suited only for ‘special’ groups of students.
Although sample sizes were not sufficiently large to allow multiple sub-divisions into
different social and demographic groups, there was no evidence that particular types
of students did worse at orals. Although the discrepancy in mean marks obtained in
oral compared with written tests was not as large for male as for female students, the
trend was the same, and the lack of significance may have been a result of the smaller
sample size. Clearly it would be interesting to investigate possible gender differences
further, but our results do not suggest males would be disadvantaged by using oral
assessments.
Because oral language may generally carry a bigger ‘emotional charge’ than writ-
ten (Carter 2008), and of course is supplemented in most cases with a range of body
language that can transmit emotional messages, it may be true that oral assessment
will be better fitted to affective and reflective tasks. In contrast, the enunciation of
complex abstract ideas might be easier in writing; a clear example would be mathe-
matics. These arguments might suggest the promotion of oral assessments specifically
for developing and measuring reflective skills, whilst abstract conceptual thinking
should be assessed using traditional written formats. However, the current work
showed no such distinction. The first-year cohort was tested on theoretical, abstract
ideas such as ‘the argument from design’ and aspects of nitrogen cycling in ecosys-
tems, and yet students performed better on these questions when responding orally.
The third-year students were assessed on questions divided into ‘scientific analysis’
and ‘personal and professional development’ categories, but a similar result of better
performance in the oral compared with written responses was found for both. Hence
there is no support here for the idea of restricting oral assessments to ‘special’ or
emotional categories of learning. The Third International Mathematics and Science
Study (TIMSS) programme tested thousands of children using the same standard
written tests in different countries to allow international comparisons. Schoultz, Säljö,
and Wyndhamn (2001) interviewed 25 secondary school children using two TIMSS
questions on physics and chemistry concepts. They found much better performance in
the oral tests than the average scores in the written tests for children of the relevant
age; their qualitative analyses showed that their subjects often understood the core
concepts being tested but failed to interpret the written questions correctly without
guidance. Hence the ability to re-phrase the question in an oral setting allowed a genu-
ine test of students’ conceptual understanding, and thus better performance. A similar
effect may explain some or all of the differences we found, and we endorse their
recommendation to challenge the often implicit assumption that ‘responding to
abstract questions in writing is the natural context in which knowledge appears’.
A long tradition in philosophy and discursive psychology views language as
constitutive rather than simply transmissive; people create key aspects of their reality
(particularly their social and subjective realities) through language and especially
through ‘speech acts’. This tradition is concerned with language as a form of social
action, which helps construct such attributes as ‘the self’ during conversation and
discourse (Horton-Salway 2007). This discursive approach, related to Barnett’s idea
of students creating ‘pedagogical identities’ through speech (Barnett 2007), can help
interpret an important theme in the experiences reported by the students concerning
the performative aspects of the viva. One reason students reported greater anxiety was
because they were ‘performing’ in a social space:
This experience has taught me that it is really important to prepare as much as possible
for an interview. There is a big difference between going over things in your head and
saying them out loud clearly and confidently. (3rd 2007)
There was a perception that the oral interview required a different approach from a
written test:
I think that an oral exam allows people to use grammar and words that they may not use
when writing. (3rd 2007)
With a lot of written assessments, I think, you just memorise the paragraph like a parrot
and not know what it means. But you can tell when someone is doing that when you
speak to them because they get that glazed look in their eyes as they recite it. (1st 2008)
This different approach was seen as being more ‘professional’:
I felt uncomfortable and was concentrating so hard on trying to sound professional and
not make mistakes. This is why I was reluctant to use the word ‘niche’, I thought 95%
that it was the correct word to use. (3rd 2007)
There is an impression here of students striving to create ‘professional’ and ‘confi-
dent’ personalities (Gent, Johnston, and Prosser 1999). Zadie Smith describes one
of her working-class characters using the words ‘modern’ and ‘science’: ‘as if
someone had lent him the words and made him swear not to break them’ (2000,
522); the oral assessments involved students using professional language without
‘breaking it’.
Written examinations do not seem to elicit the same feelings, perhaps because such
examinations are so strongly identified with the worlds of school and college, rather
than work, and perhaps because they are usually private and anonymous:
Because we have done [written assessments] since we have been in school, its normal
for us but once you leave school/education you will never need [to do them] again
whereas talking to somebody you will always use. (3rd 2008)
Whilst most academics recognise how assessments can drive student learning, they
may not appreciate how the mode of assessment – including the ‘social performance’
of the assessment – may shape students’ approaches and even identities.
In her discussion of the power of the spoken word in the ancient world, Karen
Armstrong describes Socrates’ low opinion of the written text compared with the vivi-
fying effect of living dialogue: ‘Written words were like figures in a painting. They
seemed alive, but if you questioned them they remained “solemnly silent”. Without
the spirited interchange of a human encounter, the knowledge imparted by a written
text tended to become static’ (2009, 64). There is a sense of fluidity, of students
‘trying things out’ during the interchange of the oral assessment – this exploration
might be of identities but also of concepts such as ‘niche’. This stands in contrast to
the ‘static’ representation in written assessments, and is a powerful endorsement of the
use of oral assessments. The current work has found no evidence of disadvantage
accruing from oral assessments to particular groups of students, nor of the need to
restrict orals to particular types of questions. Rather our quantitative and qualitative
results suggest important benefits to students from their use. Our sample size was rela-
tively small and was restricted to biology students at a single institution; if our results
prove representative of broader groups of students, then they support attempts to
uphold and enhance the ‘spirited interchange’ of the oral as a form of assessment in
higher education.
References
Armstrong, K. 2009. The case for god: What religion really means. London: Bodley Head.
Barnett, R. 2007. A will to learn: Being a student in an age of uncertainty. Maidenhead:
McGraw-Hill/Open University Press.
Campbell, F., L. Beasley, J. Eland, and A. Rumpus. 2007. Final report of Hearing the student
voice project: Promoting and encouraging the effective use of the student voice to
enhance professional development in learning, teaching and assessment within higher
education. Edinburgh: Napier University. www.napier.ac.uk/studentvoices/download/
Final_report_studentvoice_web.pdf.
Carter, M. 2008. Frege’s writings on language and the spoken word. http://western-philosophy.
suite101.com/article.cfm/freges_writings_on_language_and_spoken_word#ixzz0HgAdX
4Pl&D (accessed December 13, 2009).
Gent, I., B. Johnston, and P. Prosser. 1999. Thinking on your feet in undergraduate computer
science: A constructivist approach to developing and assessing critical thinking. Teaching
in Higher Education 4, no. 4: 511–22.
Hayes, J. 2009. Who is it that can tell me who I am? London: Constable.
Horton-Salway, M. (ed.). 2007. Social psychology: Critical perspectives on self and others.
Milton Keynes: Open University Press.
Hounsell, D., N. Falchikov, J. Hounsell, M. Klampfleitner, M. Huxham, K. Thompson, and S.
Blair. 2007. Innovative assessment across the disciplines: An analytical review of the
literature. York: Higher Education Academy.
Jackson, J. 2005. Clever Hans. A horse’s tale. http://www.skeptics.org.uk/article.php?dir=
articles&article=clever_hans.php (accessed January 15, 2010).
Joughin, G. 1998. Dimensions of oral assessment. Assessment & Evaluation in Higher Education
23: 367–78.
Joughin, G. 2007. Student conceptions of oral presentations. Studies in Higher Education 32,
no. 3: 323–36.
Lloyd, P., A. Mayes, A. Manstead, P. Meudell, and H. Wagner. 1984. Introduction to psychology.
An integrated approach. London: Fontana.
Schoultz, J., R. Säljö, and J. Wyndhamn. 2001. Conceptual knowledge in talk and text: What
does it take to understand a science question? Instructional Science 29: 213–36.
Smith, Z. 2000. White teeth. London: Penguin Books.
Wakeford, R. 2000. Principles of assessment. In Handbook for teaching and learning in
higher education, ed. H. Fry, S. Ketteridge, and S.A. Marshall. London: Routledge.
Waterfield, J., and B. West. 2006. Inclusive assessment in higher education: A resource for
change. Plymouth: University of Plymouth. http://www.plymouth.ac.uk/pages/view.asp?
page=10494 (accessed January 15, 2010).
Wisker, G. 2004. Developing and assessing students’ oral skills. Birmingham: Staff Education
and Development Association.
... The debate between written and oral assessment has existed for years (Huxham, Campbell, & Westwood, 2010). From addition to arguments regarding achievability and medium for additional self-study, this finding seems to emphasize the participants' belief that the written form could help participants avoid anxiety (which in line with Dwiyanti & Suwastini, 2021;Hamp-Lyons, 2002;Nodoushan, 2014). ...
... The majority of language learners prefer written assessment since they can receive clear feedback on formative or summative testsenhancing the outcome of self-study (Nodoushan, 2014;Zia, 2019). Even at the college level, this situation persists, demonstrating that the fear of oral assessment is the prime motive for choosing written to oral evaluation (Huxham et al., 2010). ...
... It indicates a tendency for learners to have minimal experience using English vocally (Kang et al., 2019); supporting the notion that participants prefer written over spoken language. In contrast, as noted by participants who prefer the spoken form, oral assessment could definitely reflect students' commitment to learning English (Huxham et al., 2010). Indeed, the oral evaluation is believed to be more difficult for a variety of reasons, but a number of studies indicate that learners who regard spoken assessment as a pathway to becoming a professional English learner (Huxham et al., 2010). ...
Article
Full-text available
The incorporation of technology into education has impacted numerous facets, assessment being no exception. This study employed a descriptive-qualitative methodology to investigate the English assessment preferences of adolescent learners. There were 126 eighth-grade students voluntarily engaged and completed an open-ended online survey about issue under discussion. Through interactive data analysis, gathered data were examined qualitatively. The primary results portray three main findings: 1) in general, the majority of participants tend to prefer written over spoken form of assessment in the English lesson; 2) more participants prefer game quizzes as the assessment preference; and 3) the majority of participants believe English assessment should be differentiated to accommodate learners’ diversity. Presented findings illustrate a pattern indicating English proficiency of learners significantly influences their assessment preferences. Additionally, it is discovered that ICT-based evaluation has emerged among recent adolescent learners. As the results span a vast range of topics, it is anticipated that additional study will be conducted utilizing this research's gaps.
... De presterade också bättre på den skriftliga tentamen i slutet av kursen än studenter som valde att inte delta i den muntliga (Lundgren, 1998). Studenter förefaller dessutom generellt prestera bättre vid muntlig examination jämfört med skriftlig 3 (Huxham et al., 2012). ...
... Dialogen mellan student och lärare möjliggör en djupare och mer uttömmande bedömning av studenternas förståelse av begrepp och samband. Om studenten missförstått frågeställningen kan detta enkelt redas ut och om studentens svar är alltför kortfattat kan följdfrågor ställas (Eriksson, 2014;Huxham et al., 2012). ...
... Pereira et al., 2016a). Det har också visat sig att studenter, som trots att de upplever större nervositet i samband med muntlig examination, uppskattar denna eftersom den liknar situationer i arbetslivet och bidrar till att utveckla professionalism och professionell identitet (Huxham et al., 2012). Liknande iakttagelser har gjorts i Sverige av Weurlander et al., (2012) och i Australien av Pearce och Lee (2009). ...
Article
Full-text available
Syftet med denna studie är att bidra med insikter om hur muntlig examination kan användas för att stödja studenters aktiva lärande. Muntlig examination är den äldsta formen för bedömning av enskilda studenters kunskaper, men används idag oftast vid bedömning av presentationer av gemensamt arbete som också avrapporteras skriftligt. Artikeln är baserad på ett konkret kursutvecklingsarbete där det överordnade målet var att stödja aktivt lärande. För att uppnå målet introducerades dels en individuell muntlig tentamen som var avgörande för kursbetyget, dels ett nytt upplägg av undervisningen. Studien indikerar att muntlig examination kan bidra till ett aktivt lärande under förutsättning att den är individuell och att undervisningsupplägget tar fasta på fördelarna med samarbete mellan studenter. Vidare pekar studien på att det finns fördelar med att den muntliga examinationen väger tungt i kursbetyget under förutsättning att undervisningsupplägget förbereder studenterna väl. Framtida studier föreslås inriktas mot att följa studenter på individnivå för att belysa deras förutsättningar till aktivt lärande vid kursstart, vad som händer med dem under kursens gång och vilka resultat de uppnår i olika delar av examinationen. ENGLISH ABSTRACT Working together, performing individually – How oral examination can be used to support active learning The purpose of this study is to contribute with insights into how oral examination can be used to support students’ active learning. Oral examination is the oldest form of knowledge assessment, but is today most often used when assessing presentations of joint work also reported in writing. The article is based on a concrete course development project where the overall goal was to support active learning. An individual oral exam, decisive for the course grade, was introduced together with a new teaching structure. The study suggests that oral examination can contribute to active learning, provided that it is individual and that the teaching structure recognizes the advantages of collaboration between students. Furthermore, the study indicates that the oral examination should be given significant weight in the course grade and supplemented with a structure that prepares the students well. Future studies are suggested to focus on following students individually to shed light on their possibilities of active learning at the course start, what happens to them during the course, and what results they achieve in different parts of the examination.
... Akimov and Malin (2020) suggested that students value the conversational style of an interactive oral, as they believe exploratory and probing questions help them perform better. This is not limited to perception, as students have been shown to achieve higher grades than the written version of the same task and were more satisfied with this assessment process (Huxham et al., 2012). This satisfaction may be due to affordances of the format for identifying areas where students may be struggling or require support (Carless, 2002). ...
... Carless (2002) also found that the use of a short interactive oral enhances assessment-for-learning by providing an opportunity for students to reflect on areas for improvement. This can help to foster a culture of academic integrity, as students are encouraged to take ownership of their own development (Huxham et al., 2012). The increased level of interaction and personal accountability provided in an interactive oral can help to promote a sense of community among students, which may discourage cheating (Eaton et al., 2019;Whitley & Keith-Spiegel, 2001). ...
... An interactive oral can be used to assess content knowledge (Dobson, 2008;Pearce & Lee, 2009). However, it can also facilitate the development of graduate attributes such as problem-solving skills (Huxham et al., 2012), general communication skills (Carless, 2002;Tan et al., 2021), and specific science communication skills, which are important for psychology students learning to communicate psychological knowledge accurately (Turner & Davila-Ross, 2015). The assessment format offers an effective form of assessing learning outcomes across a range of disciplines (Anscomb et al., 2019;Malik, 2015;Smith, 2016;Theobold, 2021), including psychology (Beccaria, 2013). ...
Article
Full-text available
ChatGPT is an artificial intelligence tool that generates human-like text, making it difficult to detect. Despite its potential as a pedagogical tool, educators must be aware of the risk of academic misconduct when students use ChatGPT to submit third-party material as their own work. To ensure authorship of assignments, ChatGPT is requiring psychology educators to make the learning journey more visible. Interactive oral assessments require students to engage in conversation and demonstrate their knowledge of a topic in real time, making it challenging for students to rely solely on ChatGPT. To maximize the effectiveness of the interactive oral in preserving academic integrity, it is important for psychology educators to carefully design and implement the assessment, establish clear guidelines, and provide training for markers and students. In addition, educators should hold honest discussions about ChatGPT and modeling to promote integrity among students. By implementing interactive oral assessments, educators can balance assessment security and academic integrity while providing students with opportunities to demonstrate their understanding of the material.
... In [10], students are seated in a classroom and, after selecting their preferred subject from a list of five options, proceed to take a written test with questions from a college entrance test. In some educational systems, students take oral tests as an adjunct or an alternative to written tests, and several studies have highlighted the benefits of oral tests as assessment method [25][26][27]. The reliance on oral tests changes from country to country, for example they are central to the Italian educational system [28,29], while the US educational system prefers written tests [26,27,30,31]. ...
... In some educational systems, students take oral tests as an adjunct or an alternative to written tests, and several studies have highlighted the benefits of oral tests as assessment method [25][26][27]. The reliance on oral tests changes from country to country, for example they are central to the Italian educational system [28,29], while the US educational system prefers written tests [26,27,30,31]. As already mentioned, oral tests elicit higher levels of anxiety than written tests [11], and the availability of VRE systems for oral tests could thus benefit a large population of students worldwide. ...
Article
Full-text available
Test anxiety is an emotional state characterized by subjective feelings of discomfort, fear, and worry that can considerably affect students’ academic performance. Virtual Reality exposure (VRE) is a promising approach to address test anxiety, but the few VRE systems for test anxiety in the literature concern only written exams. Since oral exams elicit more anxiety than written exams, the availability of VRE systems for oral exams would be precious to a large population of students worldwide. Another limitation of existing VRE systems for test anxiety is that they require the availability of a head-mounted display, posing a barrier to widespread use. This paper aims to address both issues, proposing a VRE system that deals with oral exams and can be used with common PC displays. The design of the proposed system is organized in three oral test scenarios in which a virtual agent acts as the student’s examiner. The virtual examiner behaves friendly in the first scenario and increasingly reduces its friendliness in the two subsequent scenarios. The paper assesses the feasibility for VRE of the proposed system with two complementary methods. First, we describe a quantitative user study of the three system scenarios, showing that they induce increasing levels of anxiety. Second, we present a qualitative thematic analysis of participants’ post-exposure interviews that sheds further light on the aspects of the virtual experience that contributed to eliciting negative or positive affect in participants, and provides insights for improving VRE systems for test anxiety.
... The oral component of the QTPA sets it apart from written only TPAs currently in use. It allows for a more just and authentic experience as well as providing pre-service teachers with powerful preparation for job interviews which typically require that they verbally articulate their teaching beliefs and practices (Huxham et al., 2012), their impact on students' learning, and the reasons behind their teaching content and pedagogical choices. An oral assessment is also a method for understanding pre-service teachers' knowledge and critical reasoning (Gent et al., 1999). ...
... Oral examinations may prevent "superficial regurgitation" (Huxham et al., 2012, p.126) found in written assessments and can strengthen academic integrity (Joughin, 1998) as pre-service teachers must explain their own understanding in their own words. We believe that incorporating an oral is appropriate for graduate teachers as the education of students relies primarily on dialogue and learning conversations (Huxham et al., 2012). ...
... Assessments that are well prepared and designed and the feedback they generate can help improve the learning process (Hattie and Timperley 2007). However, if assessment is poorly planned and implemented, it can lead to low selfesteem (Betts et al. 2009), increased anxiety (Huxham, Campbell, and Westwood 2012), decreased motivation to learn (Zilberberg et al. 2009), and unwanted learning behaviors such as surface learning (Scouller 1998). ...
Preprint
Full-text available
Background: Assessment is one of the most important aspects of university education since it has such a big influence on students' learning processes and results. Continuous assessment (CA) is a technique for improving the quality of students' learning throughout time. It offers a lot of benefits and a few drawbacks. The purpose of this research is to examine students' perceptions of CA and correlate it with their academic results and intrinsic motivation. Methods: The study employed a mixed study design, using both qualitative and quantitative data. Quantitative data has been collected using an online-based questionnaire using the Google form platform and the students' grades, while qualitative data was collected through focus group discussions. Factor analysis using varimax rotation was used to analyse the perception of students about CA and presented using descriptive statistics alongside correlation tests to correlate students' perception with achievement and intrinsic motivation while the qualitative data was thematically analyzed.
... The viva is defined as an "assessment in which a student's response to the assessment task is verbal, in the sense of being expressed and conveyed by speech instead of writing" (Joughin 1998, p.367). It is also defined as a situation in which the candidate gives spoken responses to questions from one or more examiners (Huxham et al., 2012). Vivas have been used in different contexts and disciplines. ...
Article
Full-text available
The aim of this pilot study was to develop structured online vivas as authentic assessments for capstone students enrolled in the medical sonography program in an Australian university. The objective was to evaluate the professional competency of trainee students who were transitioning to graduate as accredited sonographers. Prior to 2020, capstone students, located all over Australia, travelled to South Australia to undertake an on-campus, objective structured clinical examination for practical skills assessment. The interstate travel restrictions imposed by the COVID-19 pandemic resulted in the need for this examination to be conducted in an online mode. The emergency adaptation model employed in 2020 did not assess conceptual understanding and communication skills. To address this gap, structured online vivas were created and implemented through action research with 71 students in November 2021. A response rate of 51% from student feedback and 100% from external examiner feedback helped to enhance future iterations. Structuring of online vivas enabled assessment of professional competency skills, reduced both examiner bias and student anxiety, and promoted academic integrity. Our study discusses the action research process of developing and implementing structured online vivas, student performance, and evaluating student and examiner experiences with online vivas.
Article
University of Surrey's School of Veterinary Medicine introduced an assessment called On The Spot Presentation-based Assessment (OTSPA) into the 3rd and 4th year of a 5-year veterinary degree programme. The OTSPA is designed as a low-weightage summative assessment, conducted in a supportive learning environment to create a better learning experience. The OTSPA is a timed oral assessment with an 'on the spot' selection of taught topics, i.e., students prepare to be assessed on all topics but a subset is chosen on the day. The OTSPA was designed to test the students' depth of knowledge while promoting skills like communication and public speaking. The aim of this study is to describe the design and operation of the OTSPA, to evaluate student perception of the approach, and to assess the OTSPA's predictive value in relation to the final written summative assessment (FWSA), which is an indicator of academic performance. This study assessed student perceptions (N = 98) and the predictive value of the OTSPA on the FWSA in three modules: Zoological Medicine (ZM), Fundamentals of Veterinary Practice (FVP), and Veterinary Research and Evidence-based Veterinary Medicine (VREBVM). In the perception study, 79.6% of students felt that their preparation for OTSPAs drove understanding and learning of topics that formed part of the module learning outcomes. Only a small group (21.4%) reported the assessment to be enjoyable; however, 54.1% saw value in it being an authentic assessment, reflecting real-life situations. The majority of students felt that the OTSPA helped with improving communication skills (80.4%). There was a small but significant positive correlation between performance in OTSPAs and the FWSA in all modules. This suggests that OTSPAs can be useful in predicting the outcomes of the FWSA and, furthermore, could have utility in identifying where support may be helpful for students to improve academic performance. Outcomes from this study indicate that the OTSPA is an effective low-stakes summative assessment within the Surrey veterinary undergraduate programme.
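The correlation and prediction analysis that abstract describes can be illustrated with a short Python sketch: per-module Pearson correlations between OTSPA and FWSA marks, followed by a simple least-squares fit as a rough indicator of predictive value. The data file, column names, and the choice of an ordinary least-squares line are assumptions made for illustration, not the study's actual dataset or method.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical long-format data: one row per student per module, holding the
# OTSPA mark and the final written summative assessment (FWSA) mark.
df = pd.read_csv("otspa_fwsa_marks.csv")   # columns: module, otspa, fwsa

# Pearson correlation between OTSPA and FWSA performance within each module.
for module, grp in df.groupby("module"):
    r, p = pearsonr(grp["otspa"], grp["fwsa"])
    print(f"{module}: r={r:.2f}, p={p:.3f}, n={len(grp)}")

# A simple least-squares fit of FWSA on OTSPA marks; students with low
# predicted FWSA scores could be flagged for additional academic support.
slope, intercept = np.polyfit(df["otspa"], df["fwsa"], 1)
predicted_fwsa = slope * df["otspa"] + intercept
print(predicted_fwsa.describe())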
Article
The project discussed here aimed to develop students' critical thinking about computer science by applying research on student learning to the design of teaching methods and assessment. A complementary aim was to develop student confidence and competence in group discussion and oral presentation. Interactions between student learning strategy and lecturer teaching strategy are analysed to establish teaching and assessment practices suited to overcoming the student tendency to concentrate on examination requirements to the detriment of their critical thinking abilities. The project took the form of the design and delivery of a one-semester, credit-bearing undergraduate module in which assessment of student performance was by oral examination only. Course design features discussed are: course design as educational development; lecturers as models of critical thinking; student tutorial discussion and oral presentation; lecturer/student feedback and collaboration; and oral examination of student performance. The findings emphasise the complexity of a critical thinking pedagogy; challenge simple generic versus discipline-specific accounts of critical thinking; and suggest that active engagement between students and lecturers and a collaborative approach to teaching, learning and assessment are key determinants of education for critical thinking.
Article
An analysis of the literature on oral assessment in higher education has identified six dimensions of oral assessment: primary content type; interaction; authenticity; structure; examiners and orality. These dimensions lead to a clearer understanding of the nature of oral assessment, a clearer differentiation of the various forms within this type, a better capacity to describe and analyse these forms, and a better understanding of how the various dimensions of oral assessment may interact with other elements of teaching and learning.
Article
What is referred to as conceptual knowledge is one of the most important deliverables of modern schooling. Following the dominance of cognitive paradigms in psychological research, conceptual knowledge is generally construed as something that lies behind or under performance in concrete social activities. In the present study, students' responses to questions, supposedly tapping conceptual knowledge, have been studied as parts of concrete communicative practices. Our focus has been on the differences between talk and text. The most frequent approach for generating insight into conceptual knowledge is by means of written tests. However, the very manner in which people handle the demands of this particular form of mediation is seldom attended to. This problem has been studied by means of two items taken from the international comparison of knowledge and achievement in mathematics and science, TIMSS. The results reveal that it is highly doubtful if the items test knowledge of science concepts to any significant extent. In both instances, the difficulties students have, as revealed in the interview setting, seem to be grounded in problems in understanding some details in the written questions. These difficulties are generally easily resolved in an interactive setting. It is argued that the low performance on these items can to a large extent be accounted for by the abstract and highly demanding form of communication that is written language.
Article
A phenomenographic study of students’ experience of oral presentations in an open learning theology programme constituted three contrasting conceptions of oral presentations—as transmission of ideas; as a test of students’ understanding of what they were studying; and as a position to be argued. Each of these conceptions represented a combination of related aspects of students’ experience, namely, their awareness of the audience and their interaction with that audience, how they perceived the nature of theology, affective factors, and how they compared the oral presentation format with that of written assignments. The conception of the presentation as a position to be argued was associated with a particularly powerful student learning experience, with students describing the oral presentation as being more demanding than the written assignments, more personal, requiring deeper understanding, and leading to better learning. The study draws our attention to the various ways in which students may perceive a single form of academic task and their need to develop their understanding of assessment formats.