Educational Media International, Vol. 43, No. 2, June 2006, pp. 107–119
ISSN 0952-3987 (print)/ISSN 1469-5790 (online)/06/020107–13
© 2006 International Council for Educational Media
DOI: 10.1080/09523980500237864
The place of the closed book, invigilated final
examination in a knowledge economy
Jeremy B. Williams
Universitas 21 Global, Singapore
This paper argues that in the information age the closed book, invigilated final examination has become an
anachronism. Most significantly, it is an assessment instrument that does not assess deep conceptual under-
standing and process skills. Indeed, the anecdotal evidence one often hears from students is that ‘cramming’
the night before amounts to ‘data dumping’ on the day, with little knowledge retention thereafter. The defence
of the traditionalists is that we have to have invigilated final examinations or students will cheat. However, as
this paper posits, it is possible to structure a summative final assessment item in such a way that the scope for
plagiarism/cheating is minimal. This requires a commitment to authentic assessment where real-world prob-
lems take centre stage and the information and communication technologies are harnessed to allow an element
of interaction. In the process the student is engaged more effectively with the assessment task which, in turn,
serves to induce deeper learning.
Universitas 21 Global, 5 Shenton Way, Singapore 068808. Email:
The purpose of this paper is to present the case for a different type of final examination to that
traditionally offered in universities. Importantly, it will present the case for an examination
format that is not only ideally suited to online delivery, but one that is pedagogically appropriate
given student demographics and the expertise and skills one might ordinarily expect tertiary level
students to acquire in a knowledge economy.
The second section of the paper provides background on the context of this study, namely a
completely online business school, Universitas 21 Global (U21G). It describes the final exami-
nation format currently being used at this institution, and its partnership with Prometric, a
company internationally renowned for the provision of online testing services. It describes the
meticulous and sometimes quite painstaking processes involved in producing a single examina-
tion. It also comments upon some of the problems encountered during the delivery of these
examinations, despite the best efforts of staff at U21G and Prometric. Most significantly, though,
this introductory section acknowledges the feedback received from faculty and students about
the apparent misalignment of U21G pedagogy and the final examination assessment instrument.
The third section then goes on to provide a theoretical justification for an alternative final
examination instrument (in addition to some quite sound practical reasons). There is a
discussion of the broad philosophy behind the alternative model, referred to here as an open
book, open web (OBOW) examination, and the theoretical advantages it offers when compared
with the more traditional model. The defining characteristic of this alternative approach is a
commitment to authentic assessment.
The fourth section provides a description of the methodology used in OBOW testing, the
technology employed in delivery of the examination and the steps taken to confirm the identity
of the examination candidate. (An example examination assessment item is included in the
Appendix to this paper.)
The fifth section of the paper presents the preliminary findings of student evaluations of the
OBOW trial at U21G and discusses the criteria that will determine its relative success and
subsequent adoption (or otherwise). This is followed by a summary and conclusions.
The context of this study
U21G is a joint venture between Universitas 21, an international network of 16 leading
research-intensive universities, and Thomson Learning, one of the world’s largest companies
supplying learning resources. A truly global operation, U21G commenced operations in mid-
2003 and during the first year enrolled around 340 students from all over the world within its
flagship Master of Business Administration (M.B.A.) programme. This course is delivered
entirely online and relies almost exclusively on asynchronous communication between faculty
and students, given that the student body spans so many different time zones.
The closed book, invigilated exam
U21G’s preferred pedagogy is unabashedly student-centred. The curriculum revolves around
problem-based learning (PBL) and Harvard Business School cases are used extensively.
Programme delivery is based on the combined use of traditional resources (e.g. text) and the
information and communication resources that characterize the knowledge economy. All course
units are faculty led, but not faculty directed, the academic taking on the role of facilitator and
mentor rather than lecturer. Student evaluations to date show that this is an approach that meets
with the broad approval of the student body.
While the U21G pedagogy is generally consistent with the constructivist tradition, there have
been inconsistencies when it comes to assessment. In an M.B.A. course (of all courses) it would
seem to make an infinite amount of sense to ensure that assessment items require application of
empirical and theoretical knowledge to elements of professional practice. To this end there are
many instances of student-centred learning with case- and scenario-based summative assess-
ments appearing at the end of each segment of study. These assessment items have a high degree
of authenticity, providing students with a great opportunity to apply their newly constructed
knowledge in a meaningful way. The big challenge has been the structure of the final examina-
tion instrument, which has drawn some negative comments from students and faculty alike.
This has been a disappointing outcome given the time and effort committed to the production
of examinations during the first 9 months of U21G operation.
The development of final examinations for each of the first five subjects offered by U21G neces-
sitated quite meticulous planning, together with its test delivery partner, Prometric. Incorporat-
ing 40 multiple choice questions (MCQs) and between one and four so-called free response
questions (FRQs), the original thinking was to develop an objective test that did a little more
than test knowledge recall. To this end, test objectives were authored specifically to require
reasoning and critical analysis on the part of the examinee. A 90-day schedule was drawn up to
ensure that ample time was available for each step of the examination process. These steps include
the authoring of test objectives based on each subject’s defined learning outcomes, item writing
training for the authors of the examinations, the writing of the examination itself, item editing
by a Prometric psychometrician, final proof-reading and then upload of the completed exami-
nation to the Prometric test driver. Thereafter the examinations are delivered to students on a
prescribed day at one of the 3000 or more Prometric testing centres located around the globe.
Each of the examinations described above had a second form (or version) and in some cases
a third. In these instances the 90-day cycle was shortened somewhat as the test objectives for
each subject had already been defined. This notwithstanding, it is reasonable to assume that
there are not too many tertiary educational institutions in the world that are quite so scrupu-
lously careful in their approach to the production of a single 3-hour examination and go to such
great lengths in the interests of quality assurance. How is it possible, therefore, that the product
of such endeavours could attract criticism from faculty and students?
There was certainly no question in the minds of U21G faculty and the item writers that the
examinations they were producing were superior to those they had been associated with in other
tertiary institutions: never before had they been involved in a process where test objectives were
defined in advance of the exams, where the authors of exams received item writing training and
where items were subject to psychometric editing. The problem, simply, was that the examination
format did not integrate very well with the case-oriented, problem-based learning approach
favoured by U21G. Although the FRQs provided some opportunity for students to solve
unstructured problems, the MCQ component of the examination, despite the best efforts of the
U21G item writers, did not. In other words, MCQ assessment is not in alignment
with the U21G preferred pedagogy and with designated learning outcomes [a state of affairs
that Biggs (1999) would refer to as a lack of ‘constructive alignment’]. In the light of this
feedback it was determined that there were strong grounds for developing and trialling a new
model for the examination instrument.
Theoretical considerations
A burgeoning academic literature on constructivist learning has come to dominate mainstream
educational thinking, particularly over the last decade or so. Led by Marton and Säljö (1976a,
b), Biggs (1987, 1993) and Ramsden (1992), this educational philosophy posits that meaning is
not imposed or transmitted by direct instruction, rather it is created (constructed) by the
students’ learning activities. This perspective diverges from the instructivist (objectivist) view of
education that presumes knowledge exists independently of the knower and that understanding
is coming to know what already exists. The constructivists argue that deep learning will occur
only when the learner is actively engaged in, operating upon or mentally processing incoming
stimuli. Importantly, the interpretation of stimuli depends upon previously constructed learning.
What this means is that thinking or learning about the process of learning (the meta-cognitive
process) becomes more significant than the material being learned. In short, constructivism
focuses on knowledge construction, not knowledge reproduction (Herrington & Standen, 2000).
The defining characteristic of the OBOW approach is a commitment to authentic assessment.
As the literature on authentic assessment reveals, it is solidly based on constructivism and
acknowledges the learner as the chief architect of knowledge building (see, for example, Wiggins,
1989, 1998; Herrington & Herrington, 1998). It is a form of assessment that fosters understand-
ing of learning processes in terms of real-life performance, as opposed to a display of inert knowl-
edge. The students are presented with real-world challenges that require them to apply the
relevant skills and knowledge, rather than select from predetermined options, as is the case with
MCQ tests, for example. Most important of all, it is an approach that engages students because
the task is something for which they will have empathy. This, in turn, elicits deeper learning.
A problem with MCQ testing is that, quite apart from the fact it is at odds with the U21G
pedagogy, it is not a form of assessment that is representative of any real-world setting, particularly
those settings likely to be faced by an M.B.A. graduate. Consider the following scenario, common
to workplaces all over the world, each and every day. Which is the more probable?
(a) Boss to employee. Look, we’ve got a real problem here … you’ve got an M.B.A. haven’t you? Can
you write me a report on this, and email it to me by 9 a.m. tomorrow?
(b) Boss to employee. Look, we’ve got a real problem here … you’ve got an M.B.A. haven’t you? Can
you lock yourself away in that room, don’t talk to anyone, don’t browse the web or open any books and
give me your answers to these multiple-choice questions in 3 hours’ time?
Supporters of the use of MCQs might reasonably argue that it is possible to construct questions
that correspond to the complex cognitive objectives in Bloom’s Taxonomy (Bloom, 1956). The
‘assertion–reason’ type of MCQ, for example, is more sophisticated in its structure, inducing a
lot more reasoning on the part of the student than is the case with the more ‘traditional’ type of
MCQ. However, there is evidence to suggest that it is the linguistic complexity that presents
students with the challenge in this type of question, rather than the complexity of the
problem framed within the question. Thus, to ensure equitable treatment of students (particu-
larly those for whom English is a second language) it is probably wise to use this type of question
for formative purposes only as an interactive, self-paced learning device, where there is no time
constraint and where there is ample opportunity for students to master any linguistic intricacies
(see Williams, forthcoming). In short, while the properties of the instrument may be technically
sound, this does not necessarily determine the quality of the learning that takes place (Laurillard, 2002, p. 148).
However, whether the examination instrument uses MCQ or some other testing format, one
might reasonably ask the question as to whether the closed book, invigilated final examination
belongs to some bygone era (Williams, 2004a). Some universities have been doing the same thing
now for centuries, the main innovation during this period, some might argue, being the transition
from the ink well and the quill to the ball-point pen. In the case of U21G the closed book, invig-
ilated examinations delivered via Prometric testing centres are not quite so antiquated in that at
least they make use of a computer, the tool the majority of people in the world of business and
commerce have been using on a daily basis since the mid-1980s (Chaptal & Pouzard, 2004).
However, epistemologically speaking the Prometric delivered examinations are conceivably as
outdated as the on-campus variety. In an era where a wealth of information is available at our
fingertips (literally and metaphorically), to have examinations which treat knowledge and its
acquisition as a memory test is an anachronism. An online business school, of all business
schools, is especially well placed to take advantage of the various information and communica-
tion technologies (ICTs) to validate its students’ learning, specifically their ability to handle
complex, unstructured problems in authentic settings.
This is hardly a new debate, as there has been a question mark over the usefulness of exami-
nations for many years, at least in the way they have been traditionally delivered. Entwistle and
Entwistle (1991), for example, were in no doubt that examinations do not assess deep concep-
tual understanding and process skills. Indeed, as many a student will no doubt testify when
quizzed about their examination strategy, it is often a case of ‘cramming’ the night before and
‘data dumping’ on the day, with little knowledge retention thereafter. Despite this criticism,
there has been little substance to the argument mounted by those who speak in favour of the
status quo. A search of the major educational databases for an article in a refereed journal
published in the last 30 years that extols the virtues of closed book, invigilated final examinations
produces a nil return.
A defence usually proffered by those favouring the continued use of closed book, invigilated
final examinations is that students will cheat unless they are supervised. This justification has
two defects: (i) it is implicitly assumed that students do not cheat in invigilated examinations,
which the central examinations division in every university in the world will likely confirm is not
the case; (ii) the goal of policing the small minority of cheats is implicitly elevated above the goal
of producing superior learning outcomes for the vast majority of students (Morgan & O’Reilly,
1999, p. 80).
The OBOW examinations represent a serious attempt to engage students, rather than alienate
them. The opportunity for academically dishonest practice is less because of the way they are
structured, but so is the temptation to resort to this kind of behaviour in the first place. Students
will have a greater empathy with the task that lies before them if they can see the point of it. As
was pointed out earlier, in an M.B.A. course it is particularly important to devise assessment tasks
that require application of empirical and theoretical knowledge to elements of professional prac-
tice. By ensuring that assessment items are thoroughly grounded in authentic contexts, students
have an excellent opportunity to apply their newly constructed knowledge in a meaningful way.
A further barrier to the acceptance of the OBOW approach on the part of those preferring the
traditional approach is largely epistemological. If knowledge is viewed as being static, with learn-
ers taking a passive role, then ideologically it will be difficult to persuade someone of the merits
of an approach that conceives of knowledge as being adaptive, with learners taking an active role
in the construction of their knowledge.
It follows that they will likely reject the idea that an assessment item can be structured in such
a way that plagiarism/cheating ceases to be an option because the format of the question will not
test knowledge as they conceive of it. This is all very well, except in circumstances where taking
such a position is at odds with the objectives and pedagogy of the degree programme as a whole.
This would serve only to confuse students as to what is required of them if they are to perform
well in their assessments.
Having had this theoretical debate, faculty at U21G decided that there were solid grounds for
the trial of an OBOW examination format. Importantly, the general philosophy behind this
alternative examination instrument is consistent with the U21G pedagogy. In addition, the
resulting examinations are decidedly easier to create and administer, requiring far fewer
resources, which amounts to savings of tens of thousands of dollars per year. Another benefit is
that U21G has total control of the examination process from start to finish, which serves to avoid
the hazards that can sometimes arise when certain elements of the process are outsourced to a
third party.
Methodological considerations
Broad guidelines for the construction of authentic assessment items may be summarized as
follows (Williams, 2004b).
- Design assessment items where the emphasis is on the importance of critical analysis, rather
than content knowledge.
- Design assessment items so that they explicitly focus on learning outcomes; i.e. students need
to see the point of what they are doing.
- Design assessment items where students are motivated by the quality of their learning and the
generic skills they acquire, rather than the content they memorize.
- Design assessment items so that the learning experience is authentic (within a suitably
limited time period), but make it as specific to the course unit as possible (to thwart the cheats).
The Appendix to this paper includes an example assessment item that uses the OBOW
approach. Each examination is unique and is not reused. One common feature is that the learner
is placed in the role of problem-solver or decision-maker. Role play provides an effective bridge
between a learner’s education and their professional practice, and the role of ‘expert witness’ is
a useful mechanism for the validation of student learning. Importantly, the real-world
problem(s) at the heart of these examination questions are brought to life through the integra-
tion of hyperlinks to the web and streaming media that serve to enhance engagement with the
student (Hung & Chen, 2003). It is this that differentiates the OBOW approach from the more
traditional open book ‘take home’ examination.
The example OBOW exam in the Appendix presents a scenario that a lay person would
understand and be able to relate to. It is a ‘story’ about coffee and how producers in the less
developed world are faced with falling prices, yet this does not seem to be reflected in the average
price of a cappuccino in the high street cafes of the developed world. The main objective of this
semi-structured mini case (or ‘caselette’) is to get students to think conceptually about this
problem, applying the skills and techniques they have acquired in their study of managerial
economics. Having set the context, the definition of the assessment task might amount to no
more than a paragraph (see Appendix, Your task). The Guide to the assessment task that follows
the assessment task definition is not intended to ‘spoon-feed’ the student but to ensure that the
task is not so unstructured that the student is either struck by writer’s block or goes off on a
tangent, not addressing the crux of the problem.
Another key element is the inclusion of very specific instructions relating to the preparation
and submission of the assessment item, which makes it very difficult for a student to get someone
else to do the work for them. Insisting that the work is submitted electronically in order to make
use of plagiarism detection software is a deterrent but, more importantly, there is little point in
a student getting a friend or relative to write an answer for them if it is a condition that the
student’s answer makes direct reference to course-specific materials (see Appendix, Important
information regarding the preparation of your work, point 1). The student’s accomplice would first
have to immerse themselves in the subject materials, something made doubly difficult if the time
period allowed to complete the task is sufficiently tight. Buying an assignment online, meanwhile,
is a non-starter if the assignment is highly contextualized (Williams, 2002).
It is also important to make it clear that critical analysis (rather than recall of content knowledge)
is the key to success. Ideally the assessment task should invite a wide variety of ‘equally correct’
answers (see Appendix, Important information regarding the preparation of your work, points 4 and 5).
The authoring of an OBOW exam is not a particularly onerous task. An ‘Authentic assessment
web site’ has been developed for training purposes (see Williams, 2004c). Adjunct faculty (the
primary authors of examinations at U21G) are pointed to this resource in the first instance and
then they work with lead faculty to develop an idea for a case. Various learning objects are gath-
ered together (i.e. text, audio, video and flash animations) and the examination is constructed
over a period of several weeks. It is an iterative process that allows someone new to the format
an opportunity to experiment until they have fully grasped the essence of the project. To this
end they are counselled by lead faculty and U21G assessment advisors to ensure that the task(s)
set addresses as many of the stated learning objectives of the subject as possible and the wording
is sufficiently broad to invite the students to draw on as much of the course material as they wish.
The author of the exam is also responsible for providing an outline of the typical answer that
they expect to receive. Once the first draft is complete it is forwarded to the resident U21G
editor, who makes suggested changes. The final draft, once proof-read, is then uploaded to the
examination delivery system that resides within the U21G learning management system (LMS)
(Williams, 2004b).
Preliminary evaluation
The Prometric examinations were delivered over a period of 9 months. The plan is to trial the
OBOW format for examinations over a similar period. To date all students who have completed
both formats of examinations have been asked to respond to a 10-question online questionnaire
via the survey tool within the LMS. The questionnaire focuses on the relative merits of the
two examination formats. In broad terms the questions focus on the relative depth of learning,
real-world relevance, consistency of the examinations with the pedagogy, time allowed for the
examinations, opportunities for plagiarism and cheating and overall preferences regarding
examination format. There is also opportunity for students (and faculty) to submit qualitative
feedback in the form of written comments.
After 3 months 120 candidates had sat OBOW exams, and 54 had responded to the online
questionnaire (a response rate of 45%). Strangely, five respondents elected to submit no answer
to any of the questions. One possible explanation is that they came across the survey when
completing other surveys and accessed it out of curiosity while not being eligible to respond,
i.e. they had not completed an OBOW exam. Taking these five individuals out of the analysis
still produces a relatively high response rate of around 41%, making it a representative sample.
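As a quick sanity check on the response-rate arithmetic reported above (an illustrative sketch only, not part of the original study; the figures are those quoted in the text):

```python
# Response-rate arithmetic for the OBOW survey, using the figures
# reported above: 120 candidates, 54 responses, 5 of them blank.
candidates = 120
responses = 54
blank_responses = 5  # respondents who answered none of the questions

raw_rate = responses / candidates
adjusted_rate = (responses - blank_responses) / candidates

print(f"raw response rate: {raw_rate:.0%}")       # 45%
print(f"adjusted response rate: {adjusted_rate:.0%}")  # 41%
```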
Perhaps the most significant statistic was that all students either agreed (27%) or strongly
agreed (73%) that, overall, OBOW examinations were preferable to a closed book, invigilated
examination format. The other options on the five-point Likert scale (strongly disagree, disagree
and neither agree nor disagree) received no votes. Other similarly resounding results were that:
96% either agreed or strongly agreed that a 24-hour period for the OBOW examination was
about right; 98% either agreed or strongly agreed that it was more convenient; a similar propor-
tion believed the format to have greater relevance to their business education.
Academically, 96% either agreed or strongly agreed that the OBOW examination format was
more closely aligned with the U21G pedagogy than the closed book, invigilated format; 88%
either agreed or strongly agreed that, in comparison, it produced higher quality outcomes; 84%
either agreed or strongly agreed that the OBOW format was more intellectually challenging; a
similar number found the interactive nature of the examination more engaging.
The student responses with respect to the opportunities for plagiarism and cheating were far
less skewed, with many students taking a neutral stance. When asked whether the format of the
OBOW exam means students can cheat, around half disagreed (30%) or strongly disagreed
(20%). Meanwhile, 27% remained neutral and 23% agreed (but did not strongly agree) that
students can cheat.
Interestingly, when asked whether the format of a closed book, invigilated exam means students
cannot cheat, a broadly similar picture emerges. This time slightly fewer remained neutral
(20%), with the balance split fairly evenly between those who disagreed (22%) or strongly
disagreed (18%) that students cannot cheat in a closed book, invigilated exam and those who
agreed (27%) or strongly agreed (13%).
One might make the observation from this data that respondents are more positive in their
disagreement that students can cheat in an OBOW examination than they are in their agreement
that a closed book, invigilated examination means students cannot cheat. A more important
observation, however, is that neither system offers a perfect solution when it comes to the
policing of unethical practice. As one respondent noted in their written comments:
If students are intent on cheating they will do it no matter what the format of the exam. An open book
exam is not going to encourage anyone to cheat who would not normally have done so.
Summary and conclusions
This paper set out to present the case for a different type of final examination, one that is more
relevant to the human capital needs of a knowledge economy. It has argued that a commitment
to authentic assessment will provide a vehicle for such an examination, where real-world
problems take centre stage and information technology is harnessed to allow an element of inter-
action. In the process the student is engaged more effectively with the assessment task and this,
in turn, serves to induce deeper learning. A trial of the OBOW examination format at U21G
has so far yielded positive results, to the extent that there are strong grounds for continuing with
it. It is an examination format that is not only ideally suited to e-learning but also, it appears,
pedagogically appropriate given the clientele a business school typically attracts.
It is true that there will always be a small minority of students who will cheat (even within a
cohort of mature MBA students), but common sense would suggest that the main priority
should be to focus on the quality learning outcomes of the majority, rather than cater for the
lowest common denominator. Certainly, where the scope for cheating is equal (as feedback in
the student survey would seem to indicate), the model that maximizes student learning is,
intuitively, the superior option.
If a tertiary educational institution is truly committed to excellence, then it should make it
clear, through its choice of assessment methods, that the quality of the learning experience of its
students is of paramount importance. Of all institutions, business schools need to be particularly
alert to this, given the intensity of competition in the market for a graduate business education.
A great many business schools make bold claims in their marketing materials about the
real-world applicability of their courses. These claims, however, are not always borne out when
one examines their assessment practices, and the format of their examinations in particular. Where
a school does treat assessment practice as a matter of key strategic importance, this is a clear
demonstration of its growing maturity in the field of graduate business education and its
determination to be a significant player in the knowledge economy.
Notes on contributor
Jeremy Williams currently holds the positions of Director of Pedagogy and Assessment and
Associate Professor in eLearning at U21 Global, Singapore. He is also Adjunct Professor at
the Brisbane Graduate School of Business at Queensland University of Technology,
Australia, where he was previously Director of the MBA program and Associate Professor
in Economics.
References
Biggs, J. (1987) Student approaches to learning and studying (Hawthorn, Australia, Australian Council for Educational Research).
Biggs, J. (1993) What do inventories of students' learning process really measure? A theoretical review and clarification, British Journal of Educational Psychology, 83, 3–19.
Biggs, J. (1999) Teaching for quality learning at university (Oxford, UK, Oxford University Press).
Bloom, B. S. (Ed.) (1956) Taxonomy of educational objectives: the classification of educational goals: Handbook I, cognitive domain (London, Longman).
Chaptal, A. & Pouzard, G. (2004) New exams for new professional styles, Educational Media International, 41(2), 129–133.
Entwistle, N. J. & Entwistle, A. C. (1991) Contrasting forms of understanding for degree examination: the student experience and its implications, Higher Education, 22, 205–227.
Herrington, J. & Herrington, A. (1998) Authentic assessment and multimedia: how university students respond to a model of authentic assessment, Higher Education Research and Development, 17(3), 305–322.
Herrington, J. & Standen, P. (2000) Moving from an instructivist to a constructivist multimedia learning environment, Journal of Educational Multimedia and Hypermedia, 9(3), 195–205.
Hung, D. & Chen, D. T. (2003) A proposed framework for the design of a CMC learning environment: facilitating the emergence of authenticity, Educational Media International, 40(1/2), 7–13.
Laurillard, D. (2002) Rethinking university teaching: a conversational framework for the effective use of learning technologies (2nd edn) (London, Routledge).
Marton, F. & Säljö, R. (1976a) On qualitative differences in learning—1: outcome and process, British Journal of Educational Psychology, 46, 4–11.
Marton, F. & Säljö, R. (1976b) On qualitative differences in learning—2: outcome as a function of the learner's conception of the task, British Journal of Educational Psychology, 46, 115–127.
Morgan, C. & O'Reilly, M. (1999) Assessing open and distance learners (London, Kogan Page).
Ramsden, P. (1992) Learning to teach in higher education (London, Routledge).
Thomas, P., Price, B., Paine, C. & Richards, M. (2002) Remote electronic examinations: student experiences, British Journal of Educational Technology, 33(5), 537–549.
Wiggins, G. (1989) A true test: toward more authentic and equitable assessment, Phi Delta Kappan.
Wiggins, G. (1998) Educative assessment (San Francisco, Jossey Bass).
Williams, J. B. (2002) The plagiarism problem: are students entirely to blame?, in: A. Williamson, C. Gunn, A. Young & T. Clear (Eds) Proceedings of the 19th annual conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), 721–730. Available online at: proceedings/papers/189.pdf (accessed 16 January 2005).
Williams, J. B. (2004a) The five key benefits of on-line final examinations (with three free bonus benefits), in: R. Ottewill, E. Borredon, L. Falque, B. Macfarlane & A. Wall (Eds) Educational innovation in economics and business VIII: pedagogy, technology and innovation (Dordrecht, The Netherlands, Kluwer Academic).
Williams, J. B. (2004b) Creating authentic assessments: a method for the authoring of open book open web examinations, in: R. Atkinson, C. McBeath, D. Jonas-Dwyer & R. Phillips (Eds) Beyond the comfort zone: proceedings of the 21st ASCILITE conference, 934–937. Available online at: perth04/procs/pdf/williams.pdf (accessed 16 January 2005).
Williams, J. B. (2004c) The authentic assessment web site. Available online at: tic/ (accessed 16 January 2005).
Williams, J. B. (forthcoming) Assertion-reason multiple-choice testing as a tool for deep learning: a qualitative analysis, Assessment and Evaluation in Higher Education, in press.
Appendix. Managerial economics: open book, open web exam
The context
Figure A1
Your task
The chief executive officer (CEO) of your company, a large multinational coffee roaster and
retailer, has been advised by the Board that shareholders are becoming increasingly concerned
about the poor public image of the company as a result of consumer complaints about exploitation
of coffee farmers in LDCs. The CEO is unsure how to respond to this problem. She has
read the Executive Summary of the World Bank discussion paper but remains
confused. To this end, she is seeking the counsel of staff within the organization and has called
for submissions in the form of discussion papers.
Having just completed a 12-week managerial economics subject at a leading international business
school, you decide to make a submission. Your brief, very simply, is to explain the structure
of the coffee industry and comment on recent trends using the relevant economic theory. About
80% of your discussion paper should be devoted to this task. The remaining 20% should focus
on possible strategies the firm might take to alleviate the concerns of the shareholders. The CEO
is aware that a brief as broad as this is likely to attract a variety of proposals, but this is quite
deliberate on her part, as she wants to encourage people to come up with some creative solutions
to the problem.
Guide to the assessment task
To help guide your thinking, you have discussed the matter with colleagues and, among other
things, they recommend you contemplate the following:
• when explaining the structure of the coffee industry, be sure to discuss both the long-term
perspective and the recent changes that have taken place;
• using economic concepts to describe what is going on, analyse the industry under four
separate headings: (1) the trends facing the farmers who grow the coffee beans; (2) the
changes occurring in the coffee roasting and retailing industry; (3) the trends in the consumer
markets for coffee; (4) the responses from governments (or those that might be forthcoming);
• illustrate your explanations with diagrams;
• before suggesting a strategy, or strategies, for the company, start by explaining to the
shareholders the possible factors which might have caused the drop in farmer receipts from
US$10 billion to US$5.5 billion over the last 10 years.
Important information regarding the preparation of your work
1. In completing this task, be sure to draw on the concepts and analytical tools you have learnt
about during the course, making direct references to the subject materials (i.e. the
prescribed text, courseware and other resources). Students who fail to comply with this
directive will not receive a passing grade.
2. You must upload a written response of 2000 words (±10%, excluding references) in 24
hours' time via the link at the course web site. Take a look at this link now so you know what
is required of you.
3. The piece of writing you submit should be referenced in the normal way, using an internationally
recognized referencing system (e.g. the Harvard system or the numbered notes
system). Students who fail to comply with this directive will not receive a passing grade.
4. This is a broad question that invites a variety of 'equally correct' answers.
5. High marks will be awarded for good, critical analysis, rather than for content cut and pasted
from web sites and other electronic sources.
6. It is not expected that, in the time available, you will produce an answer with the polish of a
term-time assignment, but it should approach that quality.
Describes an attempt to identify different levels of processing of information among groups of Swedish university students who were asked to read substantial passages of prose. Ss were asked questions about the meaning of the passages and also about how they set about reading the passages, thus allowing for the examination of processes and strategies of learning and the outcomes in terms of what is understood and remembered. It was posited that learning has to be described in terms of its content. From this point differences in what is learned, rather than differences in how much is learned, are described. It was found that in each study a number of categories (levels of outcome) containing basically different conceptions of the content of the learning task could be identified. The corresponding differences in level of processing are described in terms of whether the learner is engaged in surface-level or deep-level processing. (PsycINFO Database Record (c) 2012 APA, all rights reserved)