Lam, W., Williams, J. B., & Chua, A. Y. K. (2007). E-xams: harnessing the power of ICTs to enhance authenticity. Educational
Technology & Society, 10 (3), 209-221.
E-xams: harnessing the power of ICTs to enhance authenticity
Wing Lam1, Jeremy B. Williams1 and Alton Y.K. Chua2
1School of Business, Universitas 21 Global, Singapore // Tel: +65 6410 1300 // Fax +65 6410 1358 //
wing.lam@u21global.edu.sg // jeremy.williams@u21global.edu.sg
2School of Communication and Information, Nanyang Technological University, Singapore // Tel: +65 6790 5810 //
Fax +65 6790 5214 // altonchua@ntu.edu.sg
ABSTRACT
Within an authentic assessment regime, a student is evaluated in terms of their ability to demonstrate the application
of a body of knowledge to a scenario situated in an actual real-world context, or a near replica of one. At
Universitas 21 Global (U21Global), a completely online graduate school backed by 16 universities from around
the world, the entire pedagogical model is founded on such an approach. One unique feature of the U21Global
model is its interactive examination instrument which harnesses the power of the various information and
communication technologies (ICTs). This instrument, referred to as the Open-Book Open-Web (OBOW) exam,
presents students with a description of a simulated business problem using multimedia. They are then asked to
assume a particular role and make recommendations about how to go about solving the problem. Feedback to
date indicates that students are generally very positive about OBOW exams. On the minus side, the construction
of OBOW exams presents a number of challenges. Not least of these is the steep learning curve it presents for
exam authors unaccustomed to working within this paradigm.
Keywords
Authentic assessment, Examinations, Constructivism, E-learning
Introduction
In recent years, there has been a growing interest in authentic assessment (see, for example, Svinicki 2005; Laurillard
2002; Hanna 2002). This is largely fuelled by the realisation that traditional assessment, which relies on indirect,
simplistic or proxy items to make inferences about a student’s performance, no longer provides (if it ever did!) an
adequate and realistic measure of knowledge (and its application) in a fast-changing world. In an authentic
assessment setting, a student is evaluated in terms of their ability to demonstrate the application of a body of knowledge
to a scenario situated in an actual real-world context, or a near replica of one.
Wiggins (1998) notes that authentic assessment often involves ‘ill-structured’ challenges and roles that help students
rehearse for the complex ambiguities of real life. Authentic assessment focuses on students’ analytical skills and their
ability to integrate new learning, and gives equal weight to the process and the finished product. Arguing in a
similar vein, Mueller (2005) identifies several benefits of authentic assessment. First, authentic assessments are
direct measures of a student’s ability to apply knowledge and skills. Second, authentic assessment encourages a
constructivist approach to learning, where students learn through application. Third, authentic assessment gives
students considerable freedom to demonstrate what they have learnt without being limited to a particular set answer.
Authentic assessment is particularly relevant to applied disciplines such as business, where a student may be assessed
on their ability, for example, to develop a marketing strategy for a company rather than critique a theory of market
segmentation.
The use of business cases and problem-based learning found in many graduate schools can be construed as a form of
authentic assessment (Savery and Duffy 1995). However, by and large, these paper-based cases are very static, and
fail to harness the liveliness that can potentially be brought about by the inclusion of multi-media elements. Indeed,
given the advent of the information age and the Internet, it is surprising that cases are still presented in a traditional
format. Furthermore, because of the time needed for publication, cases tend to be at least one year old and often do
not capture important topical issues emerging in the subject area. Importantly though, once published, the ‘answer’
to a case readily surfaces in the public domain, so calling into question the use of those cases as an instrument for a
student’s final examination.
The School of Business at Universitas 21 Global (U21Global), one of the new breed of online academic institutions,
has been using authentic assessment in its MBA program since it commenced operations in mid-2003. This paper
reports on the overall experiences of U21Global with authentic assessment, specifically with a unique final
examination instrument it has developed, known as the Open-Book Open-Web (OBOW) exam. While there may be
individual professors working in other institutions, such as in the Wharton School of Business (Cole 2006), who are
giving their students real-life cases to analyse as part of their final examination, these appear to be less well
developed as authentic assessment instruments than OBOW exams. Furthermore, U21Global is distinctive in that
OBOW exams are an institution-wide approach to final examinations that is mandatory for all subjects, not just a
select few.
The paper begins with a brief overview of U21Global. This is followed by a description of the assessment regime at
U21Global and how OBOW exams are constructed and delivered. The lessons learnt from experience with OBOW
exams and their implications are then discussed. The paper concludes that authentic assessment in the form of
OBOW exams is a positive step from a pedagogical standpoint but the acceptance and successful implementation of
such a model is contingent upon some form of training/professional development for exam authors.
The Context: Universitas 21 Global
Organisational overview
U21Global is a joint venture between Thomson Learning, one of the world’s largest publishers, and Universitas 21
(U21), a network of research-oriented universities spanning four continents. Nineteen of the U21 universities have an
equity stake in U21Global including McGill University, University of British Columbia, University of Virginia,
Tecnológico de Monterrey, University of Birmingham, University of Edinburgh, University of Glasgow, University
of Nottingham, Lund University, University College Dublin, University of Melbourne, University of New South
Wales, University of Queensland, University of Auckland, National University of Singapore, Korea University,
University of Hong Kong, Shanghai Jiao Tong University, and Fudan University. U21Global is headquartered in
Singapore, with regional offices throughout the Asia-Pacific.
U21Global commenced operations with its MBA program, offering its first classes in July 2003. This program
(including students enrolled in associated diploma, certificate and single subject programs) has since attracted nearly
2000 students. The typical profile of a U21Global MBA student is a working adult in a middle management position.
The average age of students is 35 years old, average work experience is 11 years, and the majority are married
(72%). No fewer than 83% travel to other countries in the course of their jobs. Singapore, India and the Middle East
countries supply more than half of the students, but there are more than 50 different nationalities on the program in
total.
Overall learning design
Programs offered by U21Global are delivered entirely online; i.e. there is no face-to-face classroom study. Subjects
last for 12 weeks and there is an expectation that, on average, students will spend 10 to 12 hours studying per week
per subject. Since students are geographically dispersed among many different time-zones, interaction is largely
asynchronous through discussion forums and email housed within U21Global’s learning management system (LMS).
However, students may also use synchronous tools such as online chat and audio conferencing.
The online courseware integrates with a prescribed textbook (although subjects are not textbook-driven) and exploits
the power of the Internet to deliver text, graphics, interactive exercises, animations, downloadable resources, and
hyperlinks to web sites. There are no lectures as there are in the conventional classroom. Indeed, lecture-based
pedagogy is quite at odds with the constructivist approach favoured by U21Global. While each class is led by a professor
who maintains a presence throughout the duration of the class, he or she is the ‘guide on the side’ rather than the
‘sage on the stage’. Students exercise considerable control over the direction of their learning and navigate their way
through the web-based materials, accessing the electronic library when necessary, and only drawing on the expertise
of the professor in an advisory capacity.
Importantly, there is a strong emphasis on peer learning, not least because the student demographic is such that they
clearly have a lot to learn from one another given the diversity of professional experience, nationalities and cultures.
Aside from the extensive interaction on discussion boards, there is considerable opportunity for students to
collaborate through team assignments and projects (all individual contributions being subject to peer assessment),
which contribute to the development of a robust online learning community.
The assessment regime at U21Global
All U21Global MBA subjects feature four assessment instruments; namely, written case study assignments,
discussion board contributions, a final project and a final examination. The description of these instruments and their
relative weights are shown in Table 1.
Table 1: Assessment instruments

Assessment instrument | Description | Weighting (% of overall mark)
Case analyses | Students complete up to 4 business case analyses (at least one as a member of a team, and at least one on an individual basis). | 30
Discussion boards | Students are assessed on the quality of their contributions to discussion boards, according to 4 ‘categories of interaction’ (MacKinnon 2000). | 30
Final project | Students complete a major business case analysis, usually as a member of a team. | 15
Final examination | Students are given an OBOW examination. This exam must be passed in order to pass the subject overall. | 25
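As a purely illustrative aside (not part of the original assessment documentation), the weighting scheme in Table 1 can be expressed as a simple calculation. The sketch below assumes, hypothetically, that each component is marked out of 100, that the overall pass mark is 50, and that the rule that the final examination ‘must be passed’ is modelled as a 50% threshold on the exam mark; none of these figures is specified by U21Global.

```python
# Illustrative sketch only: combines component marks using the Table 1 weights.
# Hypothetical assumptions (not stated in the paper): each component is marked
# out of 100, the overall pass mark is 50, and the "final examination must be
# passed" rule is modelled as a 50% threshold on the exam mark.

WEIGHTS = {
    "case_analyses": 0.30,
    "discussion_boards": 0.30,
    "final_project": 0.15,
    "final_examination": 0.25,
}

EXAM_PASS_MARK = 50     # hypothetical threshold
OVERALL_PASS_MARK = 50  # hypothetical threshold


def overall_result(marks):
    """Return (weighted overall mark, whether the subject is passed overall)."""
    weighted = sum(WEIGHTS[component] * marks[component] for component in WEIGHTS)
    exam_passed = marks["final_examination"] >= EXAM_PASS_MARK
    # A failed final examination fails the subject regardless of the weighted total.
    return weighted, exam_passed and weighted >= OVERALL_PASS_MARK


if __name__ == "__main__":
    example = {
        "case_analyses": 70,
        "discussion_boards": 65,
        "final_project": 80,
        "final_examination": 55,
    }
    print(overall_result(example))  # (66.25, True)
```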
From the outset, U21Global committed itself to a case-oriented, problem-based learning approach where student
learning would be firmly grounded in reality. Assignments and discussion topics are incorporated extensively into
the courseware at logical junctures to help students reinforce learning. Assignments primarily take the form of
business cases drawn from Harvard, the Ivey School of Business or the European Case Clearing House (ECCH)
while discussion topics usually take the form of contentious or open-ended issues that seek to solicit a multiplicity of
views from students. Typically, half of the assignments require students to work in teams of three to five. The final
project, submitted at Week 12, may also be a team assignment. In Week 14, two weeks after the class ends, students are
required to sit the final exam.
After a year of experience with this assessment regime, it became clear that there were inherent structural problems.
It was U21Global’s intention to model the final exam format and conditions as closely as possible after those found
in traditional universities. The final exam was originally designed as a 3-hour exam which comprised multiple-
choice questions and short-answer questions, and it was administered in a proctored environment by Prometric, a
Thomson-owned company, which operates test centres in major cities throughout the world.
However, it became increasingly evident that the existing examination instrument had serious operational and
pedagogical shortcomings. Scheduling final exams for a rapidly growing pool of students from all over the world
became a logistical challenge. More significantly, the final exam format was not consistent with U21Global’s
preferred pedagogy; that is, there was a lack of what Biggs (1999) refers to as ‘constructive alignment’. First of all,
an objective of the U21Global MBA program is to develop strategic problem-solvers in the workplace. The approach
used in the existing final examination was considered inappropriate to further this goal. Second, the conditions under
which the existing examination was conducted were far too remote from those in the real world. The 3-hour exam
gave students little time to ponder, investigate and reflect on realistic business problems. Instead, the short time
fostered ‘quick fixes’ and memorisation rather than deep thinking and integration of learning. For these
reasons, U21Global turned instead to an approach founded on the principles of authentic assessment, introducing
what it calls OBOW exams.
Authentic assessment and OBOW exams
Defining characteristics of the OBOW examination instrument
The OBOW exams used at U21Global represent a significant departure from the conventional, closed book,
invigilated model for examinations in that they not only leverage the rich media resources available through the
World Wide Web, but are also situated in an authentic context. In keeping with the constructivist tradition, the
OBOW exam comprises a case-story that invites students to draw upon all they have learned throughout the subject,
and in assembling this knowledge, they demonstrate what they know rather than what they do not know. There is no
call for individuals to memorise and regurgitate facts and concepts in a controlled setting. Such case-stories are
recognised as powerful learning instruments as well as assessment instruments (Hung et al. 2004).
Students can complete the OBOW exam at any physical location of their choosing within a 75-hour window over a
designated weekend (usually the end of week 14). Once the OBOW exam paper has been downloaded from the
U21Global LMS, the students have 24 hours to submit their response (via the LMS). A wide range of resources such
as textbooks, the electronic library and the World Wide Web are at the students’ disposal throughout the duration of
the examination.
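To make the timing rule concrete, the following is a minimal illustrative sketch (it is not drawn from the U21Global LMS) that checks whether a chosen 24-hour completion period falls entirely within the 75-hour weekend window; the dates and the Singapore time zone are taken from the example examination reproduced in the Appendix.

```python
# Illustrative sketch only: checks that a chosen 24-hour exam period falls inside
# the 75-hour weekend window described in the paper. The dates are those quoted
# in the Appendix example; Singapore time is UTC+8.
from datetime import datetime, timedelta, timezone

SGT = timezone(timedelta(hours=8))
WINDOW_OPEN = datetime(2005, 8, 19, 12, 0, tzinfo=SGT)   # 12 noon, Friday 19 August 2005
WINDOW_CLOSE = datetime(2005, 8, 22, 15, 0, tzinfo=SGT)  # 3pm, Monday 22 August 2005
EXAM_DURATION = timedelta(hours=24)


def submission_deadline(download_time):
    """Return the latest valid submission time for a paper downloaded at download_time."""
    if not WINDOW_OPEN <= download_time <= WINDOW_CLOSE - EXAM_DURATION:
        raise ValueError("The 24-hour exam period must fall entirely within the 75-hour window.")
    return download_time + EXAM_DURATION


if __name__ == "__main__":
    assert WINDOW_CLOSE - WINDOW_OPEN == timedelta(hours=75)  # the window really is 75 hours
    print(submission_deadline(datetime(2005, 8, 20, 9, 0, tzinfo=SGT)))
```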
The construction of the OBOW exam at U21Global follows a six-step process (Williams, 2004). The first step is to
generate some preliminary ideas for the examination. Rich sources of ideas include local newspapers, current affairs
publications or professional journals. Compared to text books and academic papers in general, news items or articles
written for the general reader have the capacity to engage the student more readily. Since interesting ideas usually
take time to incubate, it is wise to maintain vigilance in amassing relevant materials and shaping the theme rather
than hastily developing the questions just before the examination time.
The second step involves creating a context that frames the story in a non-academic manner. The context could be
situated in a government department, a company, or a situation facing an individual. As the subject matter expert, the
author of an OBOW exam has the capacity to read a newspaper article or watch a television news report through the
lens of their academic discipline. In constructing an authentic assessment item such as this, the objective is to create
an opportunity for students to tackle an issue quite differently than if they had not had the benefit of formal learning
in the discipline in question.
The third step is to enrich the story with various media such as photographs, audio clips and streaming video that add a
human dimension to the task and effectively bring the case to life (Herrington & Herrington 1998). Hyperlinks to
company web sites and news portals can also be provided to attest to the genuineness of the case. The text and
media are selected on the basis of their relevance in describing a situation that currently confronts the central
character in the case.
The fourth step is to define the assessment task. Presenting students with the task in context and then setting them up
as the key decision maker, the expert advisor, or the auditor is an effective mechanism for validating their learning. It is
important to note that assessment tasks are crafted in conjunction with the stated learning outcomes for the subject.
The purpose of the assessment tasks is therefore to afford students maximum opportunity to demonstrate that they
have achieved these learning outcomes.
The fifth step is to provide a task guide that offers some broad plan as to how the students might approach the task
without being overly prescriptive. The objective here is to maintain students’ focus on tackling the assessment
task in a way that is aligned to the learning outcomes.
The sixth and final step in the process is essentially administrative, but quite critical in terms of the overall design of
the examination instrument. There are statements about the importance of critical analysis and the rejection of exam
scripts submitted late, but also advice specifically designed to combat plagiarism and cheating. In particular, it is a
requirement that students draw on the concepts and analytical tools referred to in the U21Global subject
they have studied and that they demonstrate this through direct reference to course materials. This condition, together
with the fact that the assessment task is heavily contextualised, makes it extremely difficult for students to cheat.
Table 2 summarises the six steps in the construction of an OBOW examination.
Table 2: Process in constructing an OBOW exam

Step | Process | Description
1 | Generate the idea | Generate the idea from a variety of sources such as newspapers, current affairs publications or professional journals.
2 | Set the context | Set the context of the case in a government department, a company, or a situation facing an individual.
3 | Enrich the story | Bring together various media such as photographs, audio clips and streaming video to enrich the story.
4 | Define the task | Place students in the role of an expert witness and define the assessment task in conjunction with the stated learning outcomes for the subject.
5 | Provide a guide | Offer broad guidelines on how the task could be approached.
6 | Issue administrative instructions | Specify the necessary administrative instructions to maintain rigour and integrity in the examination system.
An example OBOW exam
An example of an OBOW exam used in the subject ‘IT Systems for Business’, an introductory course in IT, is given
in the Appendix. It is unique and will not be used again for examination purposes. An OBOW
exam paper is deliberately kept relatively short and succinct, and typically, the exam comprises three components;
namely, ‘The Context’, ‘The Task’ and the ‘Guide to the Task’. The Context introduces the case and provides some
background information. The Task specifies what students are required to do while the Guide to the Task outlines the
approaches students may take in response to The Task. Table 3 summarises the main components of the OBOW
exam.
Table 3: The components of an OBOW exam

Component | Description | Concise example
The Context | Describes a real-world problem | “Company XYZ wants to improve the efficiency of its logistics operation…”
The Task | Describes the role the student is playing and what needs to be done | “You are a consultant in a major consulting firm who has been asked to develop recommendations…”
Guide to the Task | Provides suggestions about how the student might go about addressing the problem without being overly prescriptive | “Your colleagues suggest that you examine the logistic processes used at Company XYZ…”
Although the template for the OBOW exam (The Context, The Task and the Guide to the Task) remains unchanged,
the content within this template is deliberately quite varied and unstructured. The reason for this, quite simply, is that
in the real world, information about a business problem is rarely straightforward and neatly structured. Thus the
content provided in The Context section of an OBOW exam is incomplete and loose, so as to simulate a real-world
setting. Similarly, in the Guide to the Task, it is important to avoid being too prescriptive as to how the student might
go about solving the problem. The idea here is to provide no more information than a consultant would ordinarily
receive in a brief from a prospective client. The onus is on the students to manage the fuzziness, make realistic
assumptions where needed, interpret the core issues in a problem, and piece together a convincing and cogent
solution to the problem.
According to Wiggins (1990), for assessment to be authentic it will display the following characteristics:
i. The assessment is realistic and reflects the way the information or skills would be used in the real world;
ii. The assessment requires judgment and innovation and is based on solving unstructured problems that could
easily have more than one right answer and, as such, requires the learner to make an informed choice;
iii. The assessment asks the student to do the subject; that is, to go through the procedures that are typical to the
discipline under study;
iv. The assessment is done under situations as similar to the contexts in which the related skills are performed
as possible;
v. The assessment requires the student to demonstrate a wide range of skills that are related to the complex
problem, including some that involve judgment; and
vi. The assessment allows for feedback, practice, and second chances to solve the problem being addressed.
The OBOW examination instrument would appear to exhibit the first five of these characteristics, the summative
nature of the final examination precluding any ‘second chances’. However, students are quite at liberty to seek
feedback on their performance from their professors, and while there might not be an opportunity to ‘practice’
solving a similar problem within the confines of the subject they have just completed, the knowledge acquired (specifically
the generic skills of sound critical analysis and synthesis) is transferable to other subjects and, indeed,
to their professional lives.
Evaluating the effectiveness of OBOW exams
Researching the effectiveness of the OBOW exam instrument is a complex task, and a definitive analysis is still
some way off. To date, a major source of quantitative data has been the surveys (mandatory for all students
completing a subject) collected from students at the end of every class. Other sources of data include the qualitative
feedback obtained from the full-time faculty at U21Global who are largely responsible for the implementation of the
OBOW exam approach at U21Global, and the adjunct faculty who supervise the online classes and author the
OBOW exam papers. A synthesis of this research data is presented below, together with preliminary analysis of the
findings to date. Such a research strategy can be justified for an exploratory investigation aimed at evaluating the
general utility of the OBOW approach. A major longitudinal study is currently in process that compares learning
outcomes from the OBOW instrument with those derived from more traditional examination instruments.
Preliminary Findings and Lessons Learnt
Student approval
In late 2004, a survey of students completing both the original and OBOW formats of examination showed the
student body to be extremely happy with the OBOW model. Questions focused on the relative depth of learning, real
world relevance, the consistency of the examinations with the pedagogy, the time allowed for the examinations, the
opportunities for plagiarism and cheating, and overall preferences regarding examination format. The questionnaires
were submitted voluntarily and there was a response rate of 45% from a population of 120. The most significant
statistic was that all students either agreed (27%) or strongly agreed (73%) that, overall, OBOW examinations were
preferable to a closed book, invigilated examination format. Other similarly resounding results were that 96% either
agreed or strongly agreed that a 24 hour period for the OBOW examination was about right; 98% either agreed or
strongly agreed that it was more convenient; and a similar proportion believed the format to have greater relevance to
their business education. From an educational perspective, 96% either agreed or strongly agreed that the OBOW
examination format was more closely aligned with the U21Global pedagogy than the closed book, invigilated
format; 88% either agreed or strongly agreed that, by comparison, it produced higher quality outcomes; 84% either
agreed or strongly agreed that the OBOW format was more intellectually challenging; with a similar number finding
the interactive nature of the examination more engaging (Williams 2006).
Combating plagiarism and cheating
As an assessment instrument, the OBOW exam is supposed to be completed solely as an individual piece of work.
Students are at liberty to discuss various approaches to a problem beforehand in the same way as they would discuss
a problem with colleagues in the workplace because this, after all, constitutes learning. The final exam remains,
however, an assessment of the individual student’s abilities, and there can be no collaboration in its completion. With
a more traditional exam model, ensuring there is no collaboration in a non-proctored or ‘take home’ exam can be
difficult. With the OBOW model, unethical practice is much easier to detect. One advantage of using an authentic
assessment approach is that, presented with a very open and unstructured problem, it is unlikely that any two exam
candidates will present similar responses. Students may refer to the same broad concepts, but the highly
contextualised way in which they are required to articulate and present the concepts makes cheating difficult
(Williams 2002). It is impossible, for example, for someone to buy a ‘ready-made’ essay from one of the numerous
online ‘paper mills’ because in an authentic assessment setting, where the application of theory in a real-world
context is the quintessential factor, one will never see, for example, the likes of “Define eBusiness. What are key
characteristics of a sound eBusiness strategy?” This type of assessment task is quite antithetical to constructivist
pedagogy and clearly at odds with a commitment to authentic assessment (Herrington & Standen 2000). To date,
where students have presented OBOW exam answers that are very similar, these are easily detected by U21Global
professors and the individual students have been called to account.
In OBOW exams, as the example in the Appendix demonstrates, students are encouraged to make use of the course
materials, the Web, and other available resources in preparing their answer. This is consistent with the philosophy of
authentic assessment, where students would have access to similar resources in a real-world setting. However,
making exams open in such a manner brings its own set of problems. Not least of these is the vexed issue of
plagiarism.
Plagiarism is a phenomenon that is general to education, of course, but it warrants further attention in relation to
authentic assessment. At U21Global, like other institutions, the policy to deal with plagiarism is quite unambiguous
in that the inclusion of any external material must be appropriately referenced. Therefore, in the case of a student
who has done a ‘cut and paste’ from a website into their OBOW exam response without attributing the source
(something easily detected with the assistance of a search engine like Google or Dogpile), there is little room for
debate. However, one has to be mindful of the somewhat blurred line between plagiarism and what might be
considered ‘knowledge reuse’ where, for example, a student has identified solutions from elsewhere and tailored
them to solve the OBOW problem at hand. This case is much ‘greyer’ than the straightforward cut and paste, and it
could be argued that a student has reused knowledge from elsewhere to solve a problem in a new context. This is not
inconsistent with an authentic assessment philosophy and, ultimately, it may boil down to the professor’s judgement.
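The kind of screening described above can be illustrated with a minimal sketch; the code below is not U21Global’s actual procedure, and the 5-gram size is an arbitrary choice. It simply reports the proportion of word 5-grams that a submission shares with a suspected source (or with another submission), the sort of verbatim overlap a marker might then follow up with a search engine, while the ‘greyer’ knowledge-reuse cases remain a matter for the professor’s judgement.

```python
# Illustrative sketch only (not U21Global's actual procedure): flags verbatim
# overlap between two texts by comparing their word 5-gram "shingles". A high
# overlap ratio suggests cut-and-paste worth checking with a search engine; a
# low ratio is what one would expect from highly contextualised OBOW answers.
import re


def shingles(text, n=5):
    """Return the set of lower-cased word n-grams in the text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_ratio(submission, source, n=5):
    """Proportion of the submission's n-grams that also appear in the source."""
    submission_ngrams = shingles(submission, n)
    if not submission_ngrams:
        return 0.0
    return len(submission_ngrams & shingles(source, n)) / len(submission_ngrams)


if __name__ == "__main__":
    answer = "Farley Laserlab should realign its information systems with its offshore strategy."
    web_page = "The company should realign its information systems with its offshore strategy."
    print(f"overlap: {overlap_ratio(answer, web_page):.2f}")  # high overlap, worth a closer look
```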
When quizzed about the opportunities for plagiarism and cheating in the 2004 survey on OBOW (referred to above),
many U21Global students elected to take a neutral stance. When asked the question whether the format of the
OBOW exam meant students can cheat, around half disagreed (30%) or strongly disagreed (20%). Meanwhile, 27%
remained neutral and 23% agreed (but did not strongly agree) that students can cheat. Interestingly, when asked the
question whether the format of a closed book, invigilated exam meant students cannot cheat, a broadly similar
picture emerges. This time, slightly fewer remained neutral (20%), with the balance split fairly evenly among those
that disagreed (22%) or strongly disagreed (18%) that students cannot cheat in a closed book, invigilated exam, and
those that agreed (27%) or strongly agreed (13%).
A point often overlooked is that there is a tendency for people to implicitly assume that the on-campus model is the
perfect system. If one were to ask the Registrar on every campus of every university world-wide whether they caught
anyone cheating this semester they would, of course, answer in the affirmative. The U21Global position is that, in
the absence of a perfect system, it is better to concentrate one’s efforts on developing an assessment instrument that
caters for the vast majority of students who are motivated by the quality and depth of learning, rather than go for a
pedagogically inferior option that may (or may not!) thwart the cheats.
A steep learning curve
U21Global is heavily reliant upon adjunct professors drawn from many business schools from around the world.
Many of U21Global’s adjunct professors (around one half) experience significant difficulties in writing OBOW
exams. This difficulty stems, in part, from an unfamiliarity with an authentic assessment approach and the fact that
they are used to more traditional methods of exam question setting. A dedicated Authentic Assessment Website has been
set up to assist with professional development and some professors are very keen to learn. Unfortunately, however,
such a resource tends to be of little benefit in the case of those professors who have been using instructivist forms of
assessment their entire academic career, and who are generally very resistant to any form of change. A further
difficulty encountered is that writing OBOW exam questions requires professors to have an understanding of real-
world problems in relation to the subjects that they deliver. This can be a testing experience for professors who have
had little or no experience of solving real-world problems, either through research or consultancy.
The grading of OBOW exam responses can represent a challenge for the uninitiated. Given their open-ended and
unstructured nature, OBOW exams do not lend themselves to any pre-defined ‘model answer’. Rather, students may
provide a multitude of very different answers, all of which are equally valid responses to the problem presented in
the OBOW exam. As in real life, one student may propose a solution to a problem, while another may think along
different lines to propose a radically different solution. Hence, it is difficult for the professor to develop a detailed
marking scheme beforehand, something that might be unsettling for some professors. To further complicate matters,
there may be no clearly-defined basis for saying that one solution is superior to the other, and should therefore
receive a higher mark. Consistent grading is therefore more difficult to achieve given that solutions may not be
meaningfully compared (Svinicki 2005). While U21Global has developed an assessment cover sheet with generic
assessment criteria that focus on a student’s powers of analysis and synthesis, and an associated grade descriptions
document, the professor must still use his or her judgement in evaluating the utility and soundness of a solution and
the cogency with which it is being described and presented by the student. As mentioned earlier, if a professor has
little experience of solving real-world problems themselves, they may find it difficult to evaluate OBOW exam
responses, causing them to take longer to complete the job.
While the OBOW exam is a summative piece of assessment, meaning that professors are not normally expected to
provide feedback to students, students do have the right to request such feedback from their professors. Although this
does not happen too often, it can be a challenge for a professor who is accustomed to having the ‘safety blanket’ of a
model answer or detailed marking scheme that is heavily content-oriented. Instead, the professor must ‘get to grips
with’ the response provided by each and every student, probing the strengths and weaknesses of each answer. The
feedback given to a student is therefore of a highly personalised nature which is a positive point, of course, but it
may require greater effort on the part of the professor.
The transferability of the model
Since their introduction, OBOW examinations have been used in 130 separate class sections within 30 individual
subjects in U21Global’s MBA program, across a variety of disciplines, including subjects of a qualitative nature (e.g.,
organisational behaviour, marketing management, and human resource management) as well as subjects of a largely
quantitative nature (e.g. accounting, finance, and data analysis). It can be stated with some degree of confidence,
therefore, that the OBOW examination instrument has general applicability in management-type subjects at the
graduate level. Experience has shown, however, that some subjects are more naturally amenable to OBOW
examinations than others. Devising OBOW examinations for qualitative subjects has, on the whole, been an exercise
that most professors have adapted to quite easily. On the other hand, it has been a greater challenge for professors of
quantitative subjects. In part, this can be explained by the fact that professors in this domain have become
accustomed to setting examinations in a particular way. An authentic and constructivist pedagogy is not as common
in these subjects (Fitzsimmons & Williams 2005), so setting an OBOW examination which requires students to
demonstrate how they interpret and apply the results of quantitative analysis to solve an unstructured problem can be
a ‘counter-cultural’ experience; these professors typically being used to setting questions that require students to
perform a calculation that leads to a ‘right’ answer. As such, it has been necessary for U21Global full-time faculty to
‘coach’ such professors to think differently about how they examine quantitative subjects.
Summary and conclusions
The use of OBOW exams has certainly been adjudged a success at U21Global by staff and students alike. The
experience has also caused U21Global to rethink certain aspects of the learning design within the MBA and other
programs. Some subjects were developed long before OBOW exams were introduced, and were designed without
such an examination instrument being considered. One specific area of course revision following the introduction of
OBOW exams has been to revisit the learning objectives associated with each subject. Some of the learning
objectives were formulated in a descriptive fashion; e.g. “identify and explain the main concepts in IT planning in
large organisations”. However, OBOW exams, and authentic assessment more generally, are less concerned with
recall (declarative knowledge) and more concerned with reasoning (procedural knowledge). In the information age, in an
online graduate school of all places, it is appropriate that a lot more energy can now be devoted to what students can
accomplish in terms of real-world problem-solving. In an age when information is literally (and metaphorically) at
our fingertips, time is better spent making sense of this information rather than trying to memorise it. Thus, many of
the descriptive learning objectives have been re-cast as prescriptive learning objectives, incorporating higher-level
cognitive tasks (Bloom 1956); e.g. “develop an IT plan for a large organisation”.
In summary, the authors of this paper believe that the introduction of authentic assessment in the form of OBOW
exams has been a positive step from a pedagogical standpoint particularly given the applied business disciplines in
the MBA program. Significantly, the OBOW learning design is not something exclusive to online education, and it is
clear that campus-based institutions could also implement a similar form of authentic assessment either as formative
assessment or, as U21Global has done, in the form of a summative assessment instrument. We would recommend
professors introduce one assignment as a formative piece of assessment as a pilot in order to develop a level of
comfort with authentic assessment. We do remain concerned about the high proportion of adjunct professors at
U21Global who experience difficulties in writing OBOW exam cases, clearly indicating that some form of training
or faculty development program is needed, particularly to support those who are unfamiliar with authentic
assessment.
One of the early reservations when U21Global was deliberating over the introduction of authentic assessment was
that it would develop students who could solve problems but not be able to master the ‘basics’ pertaining to a
particular domain. So far, there is little evidence to suggest that this is the case at U21Global, particularly as
U21Global adopts a pluralist approach to its pedagogy through the use of other assessment instruments such as self-
assessment exercises, discussion board assignments and business case analyses. Hence, a broad mix of assessment
instruments may be the ideal.
The authors acknowledge that research into the efficacy of the OBOW instrument is still at a relatively formative
stage and have noted several avenues for further work. One of these, quite simply, is to gather additional data aimed
at answering more specific research questions related to the OBOW examination approach. For example, comparing
the feedback on OBOW examinations in qualitative versus quantitative subjects would throw further light on the
issue of the transferability of the model. Another interesting question would be to see if there is any correlation
between the feedback on OBOW examinations and examination results; i.e. are students more likely to give positive
feedback on OBOW examinations if they perform well? Another avenue for research, in conjunction with the first,
is to improve the richness of the data collected. To this end, a project is under way to collect qualitative data from
both student and faculty focus groups to enable research results to date to be better corroborated. Finally, there is the
complex issue of whether OBOW examinations do, in fact, contribute to real-world problem solving skills in the way
they have been purposely designed. For this, U21Global is in the process of creating survey instruments for its MBA
graduates aimed at establishing whether or not the skills amassed through taking OBOW examinations have proved
useful post-graduation.
References
Biggs, J. (1999). Teaching for Quality Learning at University, Oxford: Oxford University Press.
Bloom, B. S. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I,
Cognitive Domain, London: Longman Group.
Cole, V. (2006). Hot Coffee is a lesson for MBA students, Retrieved June 1, 2007, from http://www.joystiq.com/
2006/03/01/hot-coffee-is-a-lesson-for-mba-students/.
Fitzsimmons, J., & Williams, J. B. (2005). Creating authentic learning environments: a strategy for student
engagement and deeper learning in introductory statistics. OLT-2005: Beyond Delivery Conference, Retrieved June
1, 2007, from https://olt.qut.edu.au/udf/OLT2005/index.cfm?fa=getFile&rNum=2420394.
Hanna, N. R. (2002). Effective use of a range of authentic assessments in a web assisted pharmacology course.
Educational Technology & Society, 5(3), 123-137.
Herrington, J., & Herrington, A. (1998). Authentic assessment and multimedia: how university students respond to a
model of authentic assessment. Higher Education Research and Development, 17(3), 305-322.
Herrington, J., & Standen, P. (2000). Moving from an instructivist to a constructivist multimedia learning
environment. Journal of Educational Multimedia and Hypermedia, 9(3), 195-205.
Hung, D., Tan, S. C., Cheung, W. S., & Hu, C. (2004). Supporting problem solving with case-stories learning
scenario and video-based collaborative learning technology. Educational Technology & Society, 7(2), 120-128.
Laurillard, D. (2002). Rethinking University Teaching: A Conversational Framework for the Effective Use of
Learning Technologies (2nd Ed.), London and New York: Routledge.
MacKinnon, G. R. (2000). The dilemma of evaluating electronic discussion groups. Journal of Research on
Computing in Education, 33(2), 125-131.
Mueller, J. (2005). Authentic Assessment Toolbox, Retrieved June 2, 2007, from
http://jonathan.mueller.faculty.noctrl.edu/toolbox/index.htm.
Savery, J. R., & Duffy, T. (1995). Problem-based learning: an instructional model and its constructivist framework.
Educational Technology, 35(5), 31-38.
Svinicki, M. D. (2005). Authentic assessment: testing in reality. New Directions for Teaching and Learning, 100, 23-
29.
Wiggins, G. (1998). Educative Assessment: Designing Assessments to Inform and Improve Student Performance, San
Francisco: Jossey-Bass.
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research & Evaluation, 2(2),
Retrieved June 2, 2007, from http://pareonline.net/getvn.asp?v=2&n=2.
Williams, J. B. (2002). The plagiarism problem: are students entirely to blame? Proceedings of the 19th ASCILITE
Conference, 721-730, Retrieved June 2, 2007, from http://www.ascilite.org.au/conferences/auckland02/proceedings/
papers/189.pdf.
Williams, J. B. (2004). Creating authentic assessments: a method for the authoring of open book open web
examinations. Proceedings of the 21st ASCILITE Conference, 934-937, Retrieved June 2, 2007, from
http://www.ascilite.org.au/conferences/perth04/procs/williams.html.
Williams, J. B. (2006). The place of the closed book, invigilated final examination in a knowledge economy.
Educational Media International, 43(2), 107–119.
Appendix
MBA 770 IT Systems for Business
Final Examination for Sections MBA650-0501-3
August 2005
PLEASE READ THESE INSTRUCTIONS CAREFULLY
This is an open-book, ‘open-web’, essay-type examination that you can complete at a location of your own choice.
The maximum time period allowed for this exam is 24 hours. Importantly, you get to pick which 24-hour period you
want to utilise over the weekend. For our purposes ‘the weekend’ is defined here as the 75-hour period between 12
noon (Singapore time), Friday 19 August 2005, and 3pm (Singapore time), Monday 22 August 2005.
You must select a 24-hour period that falls WITHIN these 75 hours. No exam submissions will be accepted
after 3pm (Singapore time), Monday 22 August 2005.
When you have completed this assessment item, upload your work via the Final Exam option.
This examination material is purely confidential and remains the property of Universitas 21 Global.
By taking this examination, you acknowledge this and agree not to disclose, publish or disseminate the examination
materials or make infringing copies, either in whole or in part.
THE CONTEXT
FARLEY LASERLAB specialises in the design, manufacture and installation of computer-controlled plate
processing machines. The machines employ the most advanced cutting and drilling technologies. For almost 20
years now, the name FARLEY LASERLAB has been synonymous with high performance cutting and drilling
systems, and the company has developed a well-established reputation for both innovation and reliability.
Image source: http://www.farleylaserlab.com.au/
The competitive advantage of FARLEY LASERLAB specifically lies in its deployment of advanced technologies
that provide added value to clients in a way that competitors find hard to replicate. Clients tend to seek
improvements in overall productivity, which is dependent not only on computer-controlled machinery, but also the
job scheduling and materials planning systems servicing the client. While many competitors provide a limited
selection of these services, FARLEY LASERLAB is the only company in Australia that provides the whole range of
services required by most clients in its industry.
With headquarters in Melbourne, Australia, the company now employs over 400 people. FARLEY LASERLAB
has close to 600 installations across Australia, and has an annual turnover close to AUS$50 million. FARLEY
LASERLAB’s strategic performance has so far been sound. Though it is still a small player in global terms, its
market share in Australia is over 70%. The company’s sales and profits have been growing steadily in recent years.
Sales revenue grew at an average 10% per annum in the last 5 years, and average operating profit after tax was also
in a healthy upward trend in the last 3 years.
FARLEY LASERLAB’s main customers are in the domestic Australian industry. It also supports a few overseas
agents for limited exporting. Though it keeps an eye on the world market, it has acted prudently to avoid
over-commitment.
Strategic Business Review
At a recent strategic business review meeting between Bernard Ragon,
CEO of FARLEY LASERLAB, and the senior management team, the key
issue of offshore markets was discussed.
Like many companies, FARLEY LASERLAB faces critical issues of how
to sustain its growth and survive in an increasingly competitive
environment. Though the company is currently well positioned, Australia
is almost a saturated market. There is little room left for further
expansion. While maintaining the market leader’s position in Australia,
CEO Bernard Regan recently announced a strategy to expand more
actively in offshore markets.
Bernard Ragon, CEO FARLEY
LASERLAB
Image source: http://www.farleylaserlab.com.au/
Though FARLEY LASERLAB has a wide range of products covering most of the market segments in Australia,
Bernard and the management team have determined to penetrate overseas markets with its high-end products, which
demonstrate FARLEY LASERLAB’s core competency of technological advancement. It plans to adopt a focus-
differentiation strategy to avoid direct competition with other industry giants such as ESAB, which has a century of
history in cutting machines. The focus is on high-technology components that provide new features no other
supplier can offer.
IT Architecture
To support the new offshore markets strategy, Bernard has asked the Chief Technology Officer (CTO) at FARLEY
LASERLAB to move forward with the upgrade of the company’s current IT/IS infrastructure and architecture which
has been overdue for some time. Technology plays a key role in the industry and the rapidly changing nature of
information technology often changes the playing field of competition.
At present, FARLEY LASERLAB has a number of IT systems servicing various functional departments. Most of
the systems were implemented many years ago. Though these systems still meet the basic needs of individual
business units in a discrete way, the interfaces between the systems have not only been described as obsolete, but even
as dangerously inadequate, jeopardising the strategic mission of the company. To support the new strategic mission
of the company, FARLEY LASERLAB will need to set up international offices outside of Australia. The issue of
systems integration therefore becomes critical. The company is also keen to explore the use of web technology to
help penetrate global markets.
A few years ago, FARLEY LASERLAB developed a web-based remote operations, support, diagnosis and
maintenance system (ROSDAM), endeavouring to revolutionise customer service support in manufacturing industries.
ROSDAM uses the Internet to capture a wide range of information from the end user's machine installation, and
feeds this data back into software, design, and service improvements. It then creates process and service databases,
and establishes an expert system to help remote users with problem diagnosis and process improvement. However,
the full potential of the system was yet to be realised due to FARLEY LASERLAB’s limited global market presence.
Bernard is also concerned that with offshore expansion and the establishment of international offices, better ways are
needed for sharing information within the company. For example, the senior engineers in the Australian offices need
to impart technical know-how and advice to individuals working in offices outside of Australia. Teams comprising
individuals from different offices would also need to work together. He recently heard an online talk given by
Marc Eisenstadt, from the Knowledge Media Institute of the Open University in the UK, about how knowledge
management and online collaboration tools could facilitate information sharing and wondered if such tools would
also be useful at FARLEY LASERLAB as part of their IT/IS strategy.
YOUR TASK
It is in this context that you have been approached by Bernard Regan, CEO of FARLEY LASERLAB, to provide
your consultancy services as he is aware that you have recently completed the MBA 770 - IT Systems for Business
subject in your U21Global MBA course. Your task is to produce a draft discussion paper on how FARLEY
LASERLAB might move forward.
After reflecting upon what you have studied in the IT Systems for Business subject, you have decided, in your paper,
to evaluate, critically, the current situation and recommend a strategic plan and implementation approaches for an IT/IS
infrastructure upgrade for FARLEY LASERLAB.
GUIDE TO THE TASK
To help guide your thinking, you have discussed the matter with your classmates and, amongst other things, they
suggest that you contemplate the following:
- Critically analyse the business environment and, using strategic tools such as the SISP alignment process, re-align
the company’s information systems with its business strategy, and identify specific and critical
leverage points where FARLEY LASERLAB can use information technology most effectively to enhance
its competitive position.
- Identify major implementation risks and, based on your risk assessment, select the kinds of organisational
change that maximise the chances of success, and list other organisational factors that can
potentially affect implementation success.
IMPORTANT INFORMATION REGARDING THE PREPARATION OF YOUR WORK
1) In completing this task, be sure to draw on the concepts and analytical tools you have learnt about during
MBA 770 IT Systems for Business, making direct references to the subject materials (i.e., the prescribed text, courseware
and other resources). Students who fail to comply with this directive will not receive a passing grade.
2) You must upload a written response of 2,000 words (+/- 10%, excluding references) within 24 hours via the
‘Final Exam’ option on the left-hand side of your eClasses page. You are allowed to upload only ONE file. If
you need to upload more than one document, use WinZip to zip up your documents as a single file.
3) The piece of writing you submit should be referenced in the normal way, using an internationally
recognised referencing system. Students who fail to comply with this directive will not receive a passing
grade.
4) This is a broad question that invites a variety of ‘equally correct’ answers.
5) High marks will be awarded for good, critical analysis, rather than content cut and pasted from websites and
other electronic sources.
6) The expectation is that you will not have the time to submit an answer of the quality of a term-time
assignment. However, you should try, as much as possible, to submit an answer of similar quality.
END OF PAPER
... A different approach is to design assessment elements where plagiarism is less likely to occur. Universitas 21 Global [12] for example uses simulated business problems, requiring answers to cite course material, and exhibit critical thinking, ensuring the uniqueness of responses. Another study [13] concurs with these findings, and suggests that open-book, open-web examinations better reflect conditions in business and commerce practice; its users also considered that cheating in such customised exams is harder. ...
... Τα τελευταία χρόνια υπάρχει αυξανόμενο ενδιαφέρον για την αυθεντική και την εναλλακτική αξιολόγηση των εκπαιδευτικών διαδικασιών (Lam & Chua, 2007). Αλλάξανε οι παραδοσιακοί τρόποι διδασκαλίας και μάθησης, και, ταυτόχρονα, αλλάξανε και οι παραδοσιακοί ρόλοι του εκπαιδευτικού και του μαθητή. ...
Conference Paper
Full-text available
Τα τελευταία χρόνια υπάρχει αυξανόμενο ενδιαφέρον για την αυθεντική και την εναλλακτική αξιολόγηση των εκπαιδευτικών διαδικασιών (Lam & Chua, 2007). Αλλάξανε οι παραδοσιακοί τρόποι διδασκαλίας και μάθησης, και, ταυτόχρονα, αλλάξανε και οι παραδοσιακοί ρόλοι του εκπαιδευτικού και του μαθητή. Αυτές οι αλλαγές οφείλονται στην είσοδο και στη χρήση των Τεχνολογιών της Πληροφορίας και των Επικοινωνιών (ΤΠΕ) και ειδικότερα του διαδικτύου στην εκπαιδευτική πραγματικότητα. Στην πιο βασική της μορφή, η αξιολόγηση αφορά τις διαδικασίες απόδειξης των γνώσεων του μαθητή και την κριτική ερμηνεία στο «πώς» συντελέστηκε και επιτεύχθηκε αυτή η μάθηση. Ακόμη και σήμερα, στην κοινωνία της πληροφορίας, ο τρόπος αξιολόγησης της μάθησης δεν άλλαξε πραγματικά, αφού η μόνη αλλαγή που σημειώθηκε είναι η μετατροπή των διαδικασιών από την έντυπη μορφή σε ηλεκτρονική (Elliot, 2007). Όμως, οι τεχνολογικές εξελίξεις και ειδικότερα στον τομέα της επικοινωνίας, έχουν συμβάλει στη δημιουργία νέων μορφών μάθησης μέσα από τα καινοτόμα περιβάλλοντά τους, επιφέροντας την ανάγκη χρήσης εναλλακτικών μορφών αξιολόγησης, με μετρήσιμα στοιχεία που αποδίδουν περισσότερο την πραγματικότητα και μέσα από διαδικασίες άμεσης ανάδρασης, τόσο όσο αφορά τον εκπαιδευτικό όσο και τον ίδιο το μαθητή.
... This is an anachronism in itself, but more importantly, as an assessment instrument a closed book, invigilated exam -still the most commonly administered in universities today -is at odds with modern learning theory. An 'open book-open web' (OBOW) exam can be a superior assessment instrument on a number of dimensions (Lam et al, 2007). Significantly, opportunities for cheating are deemed to be roughly equal. ...
Conference Paper
Full-text available
This paper evaluates the effectiveness of 'open book-open web'(OBOW)examinations in comparison to invigilated closed book-pen and paper exams. An OBOW exam was conducted, wherein 127 students participated in it. The result obtained in this exam was compared with the invigilated exam taken by the same students previously. The percentage of marks obtained by the students were graded as "A" for 90-100% marks, "B", "C", "D" and "F" for 80-89%, 70-79%, 60-69% and 0-60% respectively. Some students were placed under ungraded category ("U" grade), as they faced some technical problems during the exam. Cheating was assessed based upon the time at which the student started taking the exam, the total time taken to complete it and the marks they scored. The results indicated that there was no notable difference in the results between the two types of exams. The number of students scoring "A" grade was almost the same in both the type of exams viz. 36% of all the students scored "A" grade in OBOW as against 38% of the students in the invigilated exam. However the number of students scoring lower grades i.e. "B" to "F" was more in OBOW exams then the invigilated exams. A few cheating cases were observed in the OBOW exam and also in the invigilated exam, which is unavoidable in any circumstance. About 10 students faced technical problems like loss of internet connection, slowing of the internet connection due to traffic congestion in the network, hanging of the user's computer system. It can be concluded that OBOW exams are better in accessing the student's ability to understand the subject and reproduce it.
... A paper-and-pencil test, on the other hand, would provide a much less valid form of assessment (cf. Lam, Williams, & Chua, 2007). ...
... Here, the idea had met with considerable resistance and it ultimately failed to receive the endorsement of the school teaching and learning committee as a legitimate summative assessment instrument. Conservative forces also prevailed at U21Global in early 2004, and it was only after a lengthy evaluation period that the OBOW model was officially sanctioned as the modus operandi for final exams (see Lam, Williams & Chua, 2007; Williams, 2006). ...
Article
Full-text available
Educators have long debated the usefulness (or otherwise) of final examinations; a debate that has typically revolved around the relative merits of closed-book exams, open-book exams, take-home exams or their substitution by some other assessment format (e.g., project work). This paper adds a new dimension to the debate by considering how the final examination assessment instrument might be enhanced through harnessing the power of technology, more specifically, how the learner experience of the final examination might be made more authentic and, in the process, more constructively aligned with stated learning outcomes. The authors report on the latest findings of an ongoing research project evaluating the effectiveness of 'open-book, open-web' (OBOW) examinations delivered by an online university, vis-à-vis a closed-book, invigilated alternative. Earlier research had indicated that the OBOW model receives the strong endorsement of students in a number of respects, most particularly the quality of the learning outcomes.
... This changed in May 2004, when it became clear through feedback from faculty and students alike that this examination model was failing as a summative assessment instrument because it was not consonant with the constructivist pedagogy pervading U21Global courses and the attendant case-based, formative assessment regime. The OBOW model was trialled, and adjudged a success following evaluation (see Williams, 2006; Lam et al., 2007), and has been in use ever since. ...
Article
Full-text available
This paper reports on the latest findings of an on-going research project evaluating the effectiveness of 'open book, open web' (OBOW) examinations. An assessment instrument used in a growing number of higher education institutions around the world, the OBOW examination model under consideration in this project is distinguishable by its firm commitment to the notion of authentic assessment, and the harnessing of the information and communication technologies to bring the 'examination paper' to life. The results of previous research undertaken have indicated that the OBOW approach receives the strong endorsement of students on a number of fronts, not least the quality of the learning outcomes. Scepticism remains, however, on the part of some traditionalists within educational circles who argue for the retention of invigilated examinations as this is the only means of ensuring that a student's work is their own and theirs alone. This paper opposes this position, presenting the case for an examination instrument that is more in keeping with modern learning theory.
... nal task in a database available to both assessors and researchers. Most importantly, however, the use of technology makes it possible to make valid assessments of student competences in a way not possible without this technological support. For instance, the authenticity of the examination could not be brought about by a paper-and-pencil test (cf. Lam, Williams, & Chua, 2007), nor could the same effectiveness be achieved if the students were assessed while actually performing in practice; this is especially true for teacher education with such a large number of students. A conclusion is thus that training and valid assessment of self-assessment skills can be facilitated through the Interactive Examinatio ...
Article
Full-text available
To assess one's own actions and define individual learning needs is fundamental for professional development. The development of self-assessment skills requires practice and feedback during the course of studies. The "Interactive Examination" is a methodology aiming to assist students in developing their self-assessment skills. The present study describes the methodology and presents the results from a multicentre evaluation study at the Faculty of Odontology (OD) and the School of Teacher Education (LUT) at Malmö University, Sweden. During the examination, students assessed their own competence and their self-assessments were matched to the judgement of their instructors (OD) or to their examination results (LUT). Students then received a personal task, to which they had to respond in written text. After submitting their response, the students received a document representing the way an "expert" in the field chose to deal with the same task. They then had to prepare a "comparison document" in which they identified differences between their own answer and the "expert" answer. Results showed that students appreciated the examination in both institutions. There was a somewhat different pattern of self-assessment in the two centres, and the qualitative analysis of students' comparison documents also revealed some interesting institutional differences.
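The matching of self-assessments against instructor judgements that sits at the heart of the Interactive Examination can be illustrated with a small sketch. The rating scale, field names and data structure below are assumptions made purely for illustration; the study does not specify a data model.

```python
from dataclasses import dataclass

@dataclass
class SelfAssessment:
    student_id: str
    self_rating: int        # student's own competence rating (hypothetical 1-5 scale)
    instructor_rating: int  # instructor's judgement on the same hypothetical scale

def assessment_gap(record: SelfAssessment) -> int:
    """Positive values suggest over-estimation of own competence,
    negative values suggest under-estimation."""
    return record.self_rating - record.instructor_rating

# Two invented records, showing one over- and one under-estimating student
records = [SelfAssessment("s01", 4, 3), SelfAssessment("s02", 2, 4)]
for r in records:
    print(r.student_id, assessment_gap(r))
```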
Article
Full-text available
Authentic assessment aligns higher education with the practices of students' future professions, which are increasingly digitally mediated. However, previous frameworks for authentic assessment appear not to explicitly address how authenticity intersects with a broader digital world. This critical scoping review describes how the digital has been designed into authentic assessment in the higher education literature. Our findings imply that the digital was most often used to enhance assessment design and to develop students' digital skills. Other purposes for designing the digital into assessment were less present. Only eight studies situated the students within the wider context of digital societies, and none of the studies addressed students' critical digital literacies. Thus, while there are pockets of good practice found within the literature, the vast majority of the studies employed the digital as an instrumental tool for garnering efficiencies. We suggest that in order to fit its purpose of preparing students for the digital world, the digital needs to be designed into authentic assessment in meaningful ways.
Article
Full-text available
A problem for educators and the developers of interactive multimedia is the apparent incongruity between the demands of authentic assessment and the deliverables of computer-based assessment. Lecturers wishing to use interactive multimedia are commonly limited to assessment using multiple-choice tests which are easily marked by the computer. This article describes seven defining characteristics of authentic assessment which have been operationalized in a learning environment employing interactive multimedia. The article describes the multimedia program and its implementation with a class of pre-service teachers. The implication of these findings for educational practice is that authentic assessment can be used within interactive multimedia learning environments, albeit not totally contained within the software itself. The qualitative study reported here showed that students responded favourably to the elements of authentic assessment; that they had a good understanding of the content of the interactive multimedia program; and that the assessment was corroborated by observation of teaching strategies used by the students in their teaching practice.
Article
Discussion of electronic discussion groups as an instructional technique addresses the dilemma of whether instructors should associate assessment schemes with the electronic discussion forum. Presents a coding technique as an example of how assessment can potentially promote substantive electronic discussions. (Author/LRW)
Article
The new learning paradigm brings with it the need for a change in assessment practices as well. In this chapter, one of the assessment practices that is most consistent with this paradigm, authentic assessment, is discussed.