Towards an Integrated Assessment Model for
Complex Learning Resources
Findings from an Expert Validation
Gudrun Wesiak and Mohammad Al-Smadi
Graz University of Technology
Graz, Austria
gudrun.wesiak@uni-graz.at, msmadi@iicm.edu
Christian Gütl
Graz University of Technology, Graz, Austria.
Curtin University of Technology, Perth, WA.
cguetl@iicm.edu
Abstract—Today’s e-learning systems face the challenge of providing interactive, personalized environments that support self-regulated learning as well as social collaboration and simulation.
At the same time assessment procedures have to be adapted to
the new learning environments by moving from isolated
summative assessments to integrated assessment forms. In this
paper an integrated model for assessment (IMA) is outlined,
which incorporates complex learning resources and assessment
forms as main components for the development of an enriched
learning experience. For a validation the IMA was presented to a
round of experts from the fields of cognitive science, pedagogy,
and e-learning. The findings from the validation led to several refinements of the model, which mainly concern the component
forms of assessment and the integration of social aspects. Both
aspects are accounted for in the revised model, the former by
providing a detailed sub-model for assessment forms.
Keywords—e-Assessment, Assessment Model, Expert Validation, Complex Learning Resources
I. INTRODUCTION
With the continuous development of information and
communication technology (ICT) in the context of learning, the
adjustment of educational goals, settings, and assessment
methods becomes a major challenge. Today’s e-learning
activities are expected to be interactive, challenging, and
personalized. Learners should be in control of their learning
experience, but simultaneously experience a supportive,
collaborative, and simulative learning environment. Thus, self-regulated learning combined with social aspects and high levels of motivation is called for. These changing e-learning activities also entail the need for changed assessment activities [1]. E-assessment, i.e., assessment in the context of e-learning activities, is a challenging field of research for Computational
Science, Pedagogy, and Psychology. Within the EC-funded
project “Adaptive Learning via Intuitive/Interactive Collaborative and Emotional System” (ALICE), our research
group at Graz University of Technology designed an integrated
framework for e-assessment that is based on the requirements
of different complex learning resources, such as collaborative
learning, storytelling, and serious games [2]. The resulting
integrated Model for e-Assessment (short IMA) describes the
components involved in an enriched learning experience,
including not only the learning objectives, resources, and
assessment methods, but also inputs to the learning experience
and interactions with other models [3]. In order to meet the
needs of different learning environments and resources, the
proposed model was evaluated and improved in two steps.
After a first round of experimentation and a model-validation
by an expert from the field of cognitive science, the model was
extended by means of a sub-model dealing with the different
forms of assessment. Then, the extended IMA was presented to
a round of experts from the fields of cognitive science, e-
learning, and pedagogy, who evaluated the model with regard
to its relevance and applicability in the field of e-assessment.
In Chapter II of this paper, the IMA and its sub-model on assessment forms are presented in detail. Chapter III outlines an
example application in a collaborative learning environment.
Chapter IV gives an overview on the methodology used for the
expert validation and the derived results. Finally, in Chapter V,
we discuss our findings and give a short outlook for future
research.
II. INTEGRATED MODEL FOR E-ASSESSMENT (IMA)
A. Integrated Model for Enriched Learning Experiences
The general IMA addresses the requirements of an
enriched learning experience as it is defined in the ALICE
project [2], namely as an experience that is based on complex
learning resources (e.g. collaborative and social learning,
storytelling, simulation and serious games) and integrated
assessment methods (e.g. cognitive and affective assessment).
This combination is expected to yield effective learning
processes such as reflective and experiential learning [4] as
well as socio-cognitive learning [5]. Fig. 1 depicts the abstract
level of IMA with its core-methodology, inputs to the learning
environment and adaptivity components interacting with the
learning resources and assessment. IMA’s core methodology
consists of the following four main components: (1) the
learning objectives, which usually refer to the goals defined by
the instructor of a course but also to related didactical
objectives such as gaining social competence or meta-
cognitive skills due to collaborative work or self-regulated
learning. Learning objectives influence the type of learning
resource as well as the assessment forms appropriate in a
given learning experience. For instance, if the learning
978-1-4673-2427-4/12/$31.00 ©2012 IEEE
objective is to apply knowledge (see [6] and [7] for a
taxonomy of educational objectives), the provision of text
material and a simple knowledge test will not suffice. In this
case a more complex learning resource and an assessment
including the application of knowledge are required (a very
simple example would be the application of a previously
learned formula). (2) Complex learning resources (CLR)
should be provided to support learners in achieving the
learning objectives by means of an active involvement in the
learning process. According to constructivist theories (see e.g.
[8] for a review) we build explanations of ourselves and our
environment to actively create knowledge. To meet the needs
of an active learner, enriched learning experiences are made
up of CLR including collaboration, simulation and serious
games, as well as storytelling. (3) New forms of assessment
should meet the high demands arising from the CLR by
considering different levels of educational objectives and
effective kinds of learning. See Section B of this Chapter for
more details. (4) Evaluation and validation processes should
be included on a regular basis to ensure a high quality learning
experience. Evaluation refers to the assessment of the used
methods and procedures, whereas validation means that the
measures provide a valid conclusion about the status of a
learner. Results from the evaluation and validation process can
again influence the first three components. Thus, the development of efficient learning environments should be seen as a cyclic process open to improvements.
Besides the core methodology, several components
influencing the learning experience have to be considered (red
arrows in Fig.1). These include educational aspects (e.g.
different learning styles or social learning), psychological
aspects (emotion or motivation), technical issues (e.g. adaptive
learning or tool selection) and existing standards and
specifications (e.g. best practices or ethical aspects).
To ensure a high quality standard of all activities in this complex learning environment, quality criteria should be defined. Therefore, quality assurance, which addresses all components of the enriched learning experience, is also considered in the model. Aspects to be considered include best
practices and standards in the field in general, guidelines for
delivering assessment, scoring and interpreting, e.g. [9], or
ethical aspects (plagiarism, cheating, but also data protection,
voluntariness, and transparency of assessment activities). A
comprehensive framework for e-learning quality, which
includes criteria for infrastructure, technical standards, content
development, pedagogic practices, and institutional
development is given by [10].
The quality assurance is also relevant with respect to
indicators that are expected to result from the enriched
learning experience: indicators for its educational efficiency
and effectiveness. For instance, the theory of constructive
alignment [11] describes the compatibility between
instruction, learning, and assessment. According to this theory,
teaching is more effective when there are alignments between
what teachers want to teach, how they teach, and how they
assess students’ performance. Thus, when selecting an
assessment tool, both CLR and didactical objectives have to
be considered. For instance, did learning occur during a
collaborative activity or not? Should there be an individual, a
group, or a peer assessment? Should the assessment activity be
formative or summative? What exactly should be assessed?
The knowledge of the learner, or whether he or she can apply the knowledge or even create new applications based on the knowledge acquired?
Finally, in order to ensure that the learning experience
allows adaptivity, the model also interacts with three other
important models: the learner model, the knowledge model,
and the didactic model. In co-operation with the learner model, the cognitive status of the learner in terms of knowledge and skills is updated; with the knowledge model, the ontology of learning is retrieved; and with the didactic model, individual sequences of learning activities are built and, where necessary, alternative models are retrieved.
B. New forms of assessment
Modern forms of assessment have to cover several aspects
based on cognitive and educational findings, as well as
technological standards. Thus, based on already existing ways
of assessment, the ALICE assessment model combines these assessment forms in order to provide a comprehensive assessment of knowledge and skills as well as behavioral, motivational, and emotional aspects for complex learning resources. Fig. 2
depicts the different assessment forms as eight questions that
should be answered when planning an assessment. For each
question the respective specifications are listed. Depending on
the learning objectives and the respective learning scenario,
adequate assessment forms can be found by going through the
specified aspects of assessment and selecting all the relevant
ones. Thus, by answering each of the eight questions, a full
assessment plan can be developed. Thereby, it has to be
considered that the different forms cannot be seen as
independent aspects, but influence each other. Hence, the
representation does not imply a linear order of the relevant
assessment forms. Nevertheless, it can be seen as a suggested
way of proceeding. The listed options are a summary of the
most relevant assessment forms, but the selection is of course
open to change and/or extensions. In practice, before starting
the assessment, the learning objectives should be mapped into
a set or dictionary of competencies, which are then used to
build assessment rubrics that give a detailed overview of the
learning goals. Furthermore, each goal should be connected to
a criterion that specifies how and when a goal is achieved.
Figure 2. Assessment model
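A full assessment plan, as described above, answers each of the eight questions at least once. A minimal data-structure sketch of this idea follows; the field names, option comments, and example values are ours (illustrating the scenario of Section III), not taken verbatim from Fig. 2:

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentPlan:
    """One field per question of the eight-question sub-model."""
    area: set = field(default_factory=set)             # what is assessed (cognitive, affective, ...)
    referencing: str = ""                               # norm- or criterion-related, ...
    assessors: set = field(default_factory=set)        # who assesses (self, peer, instructor, system)
    assessees: set = field(default_factory=set)        # who is assessed (individual, group)
    assessment_type: set = field(default_factory=set)  # diagnostic, formative, summative
    adaptivity: str = ""                                # level of adaptation
    feedback: str = ""                                  # feedback mode and timing
    methods: set = field(default_factory=set)          # quantitative/qualitative instruments

    def is_complete(self) -> bool:
        # A full plan has answered every one of the eight questions.
        return all([self.area, self.referencing, self.assessors, self.assessees,
                    self.assessment_type, self.adaptivity, self.feedback, self.methods])

# Example: a plan resembling the collaborative-writing scenario of Section III
plan = AssessmentPlan(
    area={"cognitive", "affective"},
    referencing="norm-related",
    assessors={"self", "peer", "instructor", "system"},
    assessees={"individual", "group"},
    assessment_type={"formative", "summative", "diagnostic"},
    adaptivity="low (peer reviews steer next steps)",
    feedback="summative (instructor) and continuous (peers)",
    methods={"rating scales", "AQC tests", "rubric reviews"},
)
```

Going through the eight questions in any order and filling the corresponding fields yields a plan whose completeness can be checked mechanically.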
III. MODEL APPLICATION IN A SELECTED COMPLEX
LEARNING SCENARIO
To give an example of how the components of the IMA are
reflected in a real learning scenario, we chose a self-directed
learning course with a collaborative writing assignment. In an
online-course on scientific writing students had to study
articles from provided course material, to collaboratively write
an essay about these articles, and to plan a study. For the
writing assignment the co-Wiki developed by [12] was used
which provides integrated self- and peer-assessments (see [13]
for an evaluation of the tool). For automatic assessments
during the reading task participants could use the automatic
questions creator AQC before, during, and after reading the
articles [14]. To investigate whether students could benefit
from the learning environment, questionnaires covering task
awareness, motivational and emotional aspects, and usability
were sent to the students at three points during the study. A
detailed description of the study can be found in [15].
A. Core Methodology
The main learning objective was to create a learning
environment that supports students in self-regulated learning
and working collaboratively. These goals are related to further
objectives such as gaining social competences (due to
collaborative work) or meta-cognitive skills (due to self-
regulated learning activities).
The complex learning resource is a self-directed learning
course integrated with a collaborative writing assignment. The
provided co-Wiki ensures that students work collaboratively; its visualization functions support task and social awareness as well as group well-being. Additionally, it provides self-, peer-,
and instructor assessments including the use of an assessment
rubric designed for scientific writing. The rubric was also used
for the group-assessments, in which students had to assess the
work of one other group. The AQC creates tests automatically
by extracting concepts and generating questions (true/false,
single choice, fill-in-the-blank, open-ended) based on a
selected content, in this case the provided articles.
Additionally, the generation of questions based on self-
extracted concepts is possible. Testing oneself with questions
should stimulate the learning process and support students in
self-regulated learning.
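The concept-based generation of fill-in-the-blank items can be illustrated with a heavily simplified sketch; the real AQC pipeline [14] (concept extraction, multiple item formats, quality filtering) is far more elaborate than this assumption-laden toy:

```python
import re

def generate_fill_in_blank(text, concepts):
    """Toy AQC-style item generation: for each extracted concept,
    blank it out in the first sentence that contains it."""
    items = []
    # naive sentence splitting on terminal punctuation
    sentences = re.split(r"(?<=[.!?])\s+", text)
    for concept in concepts:
        for sentence in sentences:
            if concept in sentence:
                stem = sentence.replace(concept, "_____")
                items.append({"stem": stem, "answer": concept,
                              "format": "fill-in-the-blank"})
                break  # one item per concept
    return items

material = ("Constructive alignment describes the compatibility between "
            "instruction, learning, and assessment. Formative assessment "
            "monitors the learning process.")
items = generate_fill_in_blank(material,
                               ["Constructive alignment", "Formative assessment"])
```

Generating items from learner-selected ("self-extracted") concepts would simply mean letting the learner supply the `concepts` list.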
Multiple forms of assessment were used. As far as the
criteria for mastering a learning objective are concerned,
assessment rubrics (using the categories literature, content,
and style with several subcategories each) were provided for
the group- and instructor assessment to ensure a fair and
consistent assessment over all learning groups. The eight
aspects of assessment outlined in Fig. 2 were covered as
follows:
- Assessment area: cognitive competencies were tested on the knowledge level with automatically created questions, affective dispositions by collecting data on students’ motivation and emotional status. The collaborative assignment covered the cognitive levels comprehension (e.g. identify important steps for planning a study), application (e.g. apply steps to own research questions), and synthesis (e.g. plan and formulate a research design for a given research question). The level of evaluation is required by the group-assessments.
- Assessment referencing: norm-related, since students compared their product with the work of other peers.
- Assessment strategy I (assessor): short self- and peer-assessments after each change of the collaborative writing assignment; detailed instructor and group (peer) assessments of the final group products; voluntary and required assessment by the system (AQC) for the reading task.
- Assessment strategy II (who is assessed): regarding the reading task (AQC-tests), individuals were assessed; for the writing assignment, individual and group contributions were assessed (self/peer and instructor/group assessments respectively).
- Assessment type: formative assessment to monitor and improve students’ learning process (self- and peer-assessments, voluntary AQC knowledge tests); summative assessment after the reading task (required AQC test) and the writing assignment (instructor and group-assessment); diagnostic assessment to check students’ learning progress (questions regarding students’ knowledge concerning scientific working before and after the course).
- Adaptivity: only on a very low level, namely regarding the process of collaboratively creating a document, because each review given by a peer influences the next steps taken within the learning process. Personalized adaptation of learning content or test-items (e.g. based on students’ current knowledge, motivational, or emotional status) was not yet embedded.
- Feedback: summative from the instructor, i.e. at the end of the course two tutors gave detailed individual feedback on the writing assignments; continuous from peers by means of comments integrated in the short peer assessments after each change of the contribution.
- Assessment methods: quantitative and qualitative methods concerning the cognitive as well as affective domains. Quantitative: rating scales in the questionnaires, number of correct questions achieved in the AQC tests, and ratings given in self-, peer-, and group reviews. Test-items included ratings, single-choice questions, and fill-in-the-blank items as fixed formats, and open-response items by the AQC and the essays as open-response formats. Qualitative: open answers in the questionnaires (e.g. regarding improvements of the tool), comments in the instructor-, group-, self-, and peer-assessments, and review of essays by the tutors. Cognitive: knowledge tests (AQC) on essays and assessments of the writing assignment. Affective: ratings regarding motivational aspects and emotional status during the collaborative assignment.
Regarding evaluation and validation, the quality of the
automatically created questions was evaluated and the impact
of the whole tool was validated by investigating students’
extrinsic and intrinsic motivation, emotional aspects, learning
styles and whether these components had an influence on the
learning process.
B. Inputs to the enriched learning experience
As far as educational aspects are concerned, we
investigated students’ learning styles by differentiating
between the elaborating and the repeating learning style [16]
and their relationship to intrinsic and extrinsic motivation.
Psychological aspects were covered by measuring motivation
during the self- and peer-assessments [17], as well as emotions
while using the tools [18]. Technological aspects in this study
concern the co-Wiki and the AQC. For the co-Wiki,
ScrewTurn wiki (an open source wiki using C# and ASP.Net
for the front-end presentation layer) has been selected to be
enhanced with features to maintain task and social-awareness
and group well-being. For assessment in self-directed learning,
the AQC was developed to automatically create assessment
items based on textual material. Regarding standards and specifications, the co-Wiki combines collaborative learning and assessment activities, following the guidelines by [19]. For the AQC, IMS QTI assessment content specifications have been used to represent the created items [20].
C. Efficiency, effectiveness and quality assurance
To evaluate the CLR (co-Wiki and AQC), students rated the
usability of the tools by means of the system usability scale
(SUS) [21] and made suggestions for improving the tools.
Regarding quality assurance, we planned the study under
consideration of the psychological quality criteria objectivity,
reliability, and validity.
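The SUS [21] yields a usability score between 0 and 100 from ten 5-pt items. A minimal implementation of its standard scoring rule:

```python
def sus_score(responses):
    """System Usability Scale scoring: ten 5-pt items (1..5).
    Odd-numbered items are positively worded (contribution = score - 1),
    even-numbered items negatively worded (contribution = 5 - score);
    the summed contributions are scaled by 2.5 to the range 0..100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A uniformly neutral response pattern lands in the middle of the scale.
print(sus_score([3] * 10))  # 50.0
```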
D. Adaptivity components
The described study aimed at investigating the developed tools, which were therefore used stand-alone. However, to provide adaptivity in the sense of a learner, knowledge, and didactic model, both tools have in the meantime been integrated into the Intelligent Web Teacher (IWT) [22], a learning management system allowing the definition and execution of personalized e-learning experiences tailored to learners’ cognitive status and learning preferences.
IV. EXPERT VALIDATION
As mentioned above, the proposed model was developed in
several steps. The original model was validated by an expert
from the field of cognitive science, whose main suggestion was
to focus more on the assessment part of the IMA. Thus the
model was extended by the assessment sub-model as it is
depicted in Fig. 2. For a second round of validation, nine e-
learning experts from different European universities were
asked to validate the model concerning the importance of its
components, the accuracy of the relations among the
components, and its application and relevance in the field of e-
assessment. Additionally they were asked to test and evaluate
the two developed tools co-Wiki and AQC.
Five experts, two men and three women, from the fields of
cognitive science, e-learning, and pedagogy, participated in
the study. For the validation, the experts received a detailed
description of the model and sub-model, access and guidelines
to the tools, and a questionnaire with the 11 items listed in
Table I. Levels of agreement were generally stated on a 5-pt.
rating scale ranging from (1) “I strongly disagree” to (5) “I strongly agree”. For question 9 a 7-pt. scale was used
TABLE I. MODEL VALIDATION BY EXPERTS

1. The model provides an accurate representation of the real world.
   Mean (SD): 2.80 (0.84)
   Comments: too abstract; need of including mobile technologies or multimedia; no linear order in reality; model focuses on learning of individuals, learning processes of social entities are missing; lots of important elements are considered

2. The model provides a substantially complete representation of the real world.
   Mean (SD): 2.20 (1.10)
   Comments: missing aspects: social context, group dynamics, working/learning context, problem-based or project-based learning as complex learning resources; assessment of relational factors

3. There is an obvious error in the model.
   Mean (SD): 2.20 (1.14)
   Comments: learner/user/student model instead of learning model; 4 experts found no error

4. The components of the model are easy to comprehend.
   Mean (SD): 2.80 (1.30)
   Comments: interplay of components; illustration by a concrete example; adaptation part is not clear; some components require reading the details

5. All of the included components are relevant and priorities are set appropriately.
   Mean (SD): 3.80 (0.45)

6. The relations between the components make sense.
   Mean (SD): 2.80 (1.30)
   Comments: add relation between educational/psychological aspects and learning goals and technology; inside (single learning episode) vs. outside (whole educational design) the box

7. The flows are correct.
   Mean (SD): 2.80 (0.84)
   Comments: no linear order; different order (text vs. model)

8. The model fits the requirements/objectives to “specify and design a functional innovative framework to evaluate didactic experiences in adaptive learning systems”.
   Mean (SD): 3.75 (0.96)
   Comments: clearer guidelines on how to evaluate didactic experiences

9. All in all, how would you rate the integrated model regarding its relevance in the field of e-assessment?
   Mean (SD): 4.69* (0.89)
   Comments: emphasize benefit/advantage of this model; add more components; adaptive to underlying system; elaborate and well justified assessment part

10. What would you especially improve regarding the model?
    Comments: priorities of the model more visible; skip red arrows background/context; integration of relational factors

11. Do you have any further comments?
    Comments: focus on individual learning experience, although talking about social interaction and collaboration

a. 5-pt. rating scales from (1) I strongly disagree to (5) I strongly agree
which ranged from (1) “not relevant” to (7) “very relevant”. Additionally, experts were prompted to comment on their ratings and to give suggestions for improvements. Table I summarizes
the results.
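The Mean (SD) values reported in Table I are plain descriptive statistics over the five experts’ ratings. With hypothetical ratings (the individual responses were not published, so the values below are only an illustration), a row such as 2.80 (0.84) is obtained as follows:

```python
from statistics import mean, stdev

# Hypothetical ratings from five experts on one statement,
# on the 5-pt scale (1) "strongly disagree" .. (5) "strongly agree".
ratings = [2, 3, 2, 4, 3]

# mean and sample standard deviation, as reported in Table I
print(f"{mean(ratings):.2f} ({stdev(ratings):.2f})")  # 2.80 (0.84)
```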
Overall the experts gave medium ratings on the different
features of the model. A closer look at the comments shows
that two aspects stand out as they were mentioned by several
experts independent of their individual background. One point
is that social interaction and collaboration were not explicitly
considered in the original model. Thus, the social aspect of
learning was integrated more thoroughly by adding social
learning to the educational inputs of the IMA model and by
differentiating between assessing individuals and groups in the
assessment sub-model. The second main concern was the lack
of a concrete example, i.e. the abstractness of the model. To
meet this concern, we applied the model in different areas
including collaborative and self-regulated learning (Section III
in this paper) and storytelling. Furthermore, minor changes
were performed, such as the separation of educational and
psychological aspects, or some rewording of the model
description. Fig. 1 depicts the revised version of IMA.
With regard to the validation of the tools, four experts
filled out the questionnaire for the AQC and three the one for
the co-Wiki. Because this paper focuses on the theoretical
model, results are only summarized briefly. The experts considered the co-Wiki for the most part as supportive for students as well as for instructors; especially the visualization tools and the assessment rubric were found to be very helpful components. In general, the experts saw the fields of application as very broad, but would improve its design and add some components, such as a search function and more information about the contributors. As far as the AQC is concerned, the experts confirmed that the tool is a valuable instrument to test knowledge on a lower level and to get a first impression of what the students have learned. However, it is not suitable to test students’ deeper understanding of a subject.
V. DISCUSSION AND OUTLOOK
The aim of this research was to develop an integrated model for e-assessment (IMA), which meets the challenges of the adaptive e-learning environment built within the ALICE project. The latter combines personalization, collaboration, and simulation aspects within an affective/emotional-based approach. The final goal is to provide an interactive, challenging, and context-aware environment that fosters learners’ demand for empowerment, social identity, and an authentic learning experience. The IMA discussed in this paper describes an enriched learning experience on an abstract level. It is made up of didactical objectives, different learning resources, and assessment activities. It also considers influences arising from the viewpoints of pedagogy and psychology as well as from the viewpoint of technology. Furthermore, the relationship to other models (didactic model, knowledge model, and learner model) is emphasized. Finally, to assure a high quality standard of the model, efficiency and effectiveness as well as evaluation and validation processes are mentioned as indicators arising from the model.
The purpose of the IMA is to identify all components that
need to be considered whenever an enriched learning
experience is developed. However, its core is the aspect of e-
assessment, which is no longer a simple task of testing a
student’s knowledge, but has to consider a wide range of
assessment forms in order to give a comprehensive picture of
a student’s learning process, including cognitive and
emotional aspects, individual and social learning, adaptivity,
and so on. To give an example of how IMA can be used in practice, a case study from the ALICE project has been presented to show how each component of the model was
considered in a self-directed learning course (using two tools
developed within this context). This first version and
application of the model together with the tools developed in
this context was validated by a sample of five experts. This
expert validation resulted in a few changes of the model,
especially regarding the integration of social aspects. The
results of the application study and the expert validation show
the usefulness of the model regarding the development of e-
learning environments with comprehensive assessment
procedures.
Future research should include an extension of the IMA to
other areas of application and CLR and especially focus on the
further development of the assessment sub-model. At the
moment, the sub-model gives a comprehensive overview of
important aspects that need to be considered when planning e-
assessments for CLR. However, to increase the usability of the
model, relationships and dependencies between different
forms of assessment should be considered. For example, a
scenario in which learners do not collaborate does not need a
group assessment as strategy. For more convenience of the
user (instructor or course developer), automatic suggestions of
adequate assessments methods depending on the previously
chosen assessment area, referencing, strategy, type, etc. are
also conceivable.
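The envisioned automatic suggestions could start as a small set of dependency rules over the previously chosen assessment forms. The sketch below is purely illustrative: the scenario flags, rule set, and suggestion names are our assumptions, not part of the published model:

```python
def suggest_assessment_forms(scenario):
    """Illustrative rule-based sketch: derive applicable assessment
    forms from simple properties of the learning scenario."""
    suggestions = {
        "assessees": {"individual"},
        "assessors": {"instructor", "system"},
    }
    if scenario.get("collaborative"):
        # Collaboration makes group and peer assessment applicable.
        suggestions["assessees"].add("group")
        suggestions["assessors"].add("peer")
    if scenario.get("self_regulated"):
        # Self-regulated learning calls for self-assessment
        # and formative feedback along the way.
        suggestions["assessors"].add("self")
        suggestions["type"] = {"formative", "summative"}
    else:
        suggestions["type"] = {"summative"}
    return suggestions

# A non-collaborative, self-regulated scenario: no group assessment needed.
s = suggest_assessment_forms({"collaborative": False, "self_regulated": True})
```

Such rules would encode exactly the kind of dependency named above, e.g. that a scenario without collaboration does not need a group assessment as strategy.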
ACKNOWLEDGMENTS
This research is supported by the European Commission under the Collaborative Project ALICE (Adaptive Learning via Intuitive/Interactive, Collaborative and Emotional Systems), 7th Framework Programme, Theme ICT-2009.4.2 (Technology-Enhanced Learning), Grant Agreement n. 257639. We are
grateful to Margit Höfler and Isabella Pichlmair for their
support during the model development and validation study.
REFERENCES
[1] Bennett, R. E. (2002). Inexorable and inevitable: The continuing story of
technology and assessment. Journal of Technology, Learning, and
Assessment, 1(1).
[2] ALICE (Adaptive Learning Via Intuitive/Interactive, Collaborative And
Emotional Systems) project (2011), Deliverable D5.1.2 Integrated
Model for e-Assessment v2 (revision), project co-funded by the
European Commission within the 7th Framework Programme (2007-
2013), n. 257639 (2010).
[3] AL-Smadi, M., Höfler, M., & Gütl, C. (2011). An integrated model for
e-assessment of learning experiences enriched with complex learning
resources. Proceedings of the International Workshop on Adaptive Learning via Interactive, Collaborative and Emotional Approaches (ALICE 2011), 3rd IEEE INCoS-2011 conference: Third International Conference on Intelligent Networking and Collaborative Systems, November 30 to December 2, 2011, Fukuoka, Japan.
[4] Kolb, A. Y. (1984). Experiential Learning: Experience as the source of
learning and development. New Jersey: Prentice Hall.
[5] Bandura, A. (1977). Social learning theory. New York: General
Learning Press.
[6] Bloom, B.S. (Ed.), Engelhart, M.D., Furst, E.J., Hill, W.H. &
Krathwohl, D.R. (1956). Taxonomy of educational objectives: The
classification of educational goals. Handbook 1: Cognitive domain.
New York: David McKay.
[7] Krathwohl, D.R., Bloom, B.S. & Bertram, B.M. (1973). Taxonomy of
educational objectives, the classification of educational goals.
Handbook II: Affective domain. New York: David McKay.
[8] Anderson, O. R. (2009). Neurocognitive theory and constructivism in
science education: A review of neurobiological, cognitive and cultural
perspectives. Brunei International Journal of Sciences & Mathematical
Education, 1, 1-32.
[9] BPS (2002). Guidelines for the Development and Use of Computer-
based Assessments. Leicester: British Psychological Society.
[10] Anderson, J. & McCormick, R. (2006). A common framework for e-
learning quality. In A. McCluskey (ed.). Policy and Innovation in
Education. Quality Criteria. Brussels: European Schoolnet, pp.4-9.
[11] Biggs, J. B. (1996). Enhancing teaching through constructive alignment.
Higher Education, 32, 1-18.
[12] AL-Smadi, M., Höfler, M., & Gütl, C. (2011). Enhancing Wikis with
Visualization Tools to Support Groups Production Function and to
Maintain Task and Social Awareness. Proceedings of 4th International
Conference on Interactive Computer-aided Blended Learning, Nov.
2011, Antigua, Guatemala.
[13] Wesiak, G., Al-Smadi, M., & Gütl, C. (2012). Alternative Forms of
Assessment in Collaborative Writing - Investigating the relationships
between motivation, usability, and behavioural data. Proceedings of the
2012 International Computer Assisted Assessment Conference (CAA),
Southampton, UK. (13p).
[14] Gütl, C., Lankmayr, K., Weinhofer, J., & Höfler, M. (2011). Enhanced
approach of automatic creation of test items to foster modern learning
setting. Electronic Journal of e-Learning, 9(1), 23-38.
[15] AL-Smadi, M., Wesiak, G., Guetl, C., & Holzinger, A. (2012).
Assessment for/as Learning: Integrated Automatic Assessment in
Complex Learning Resources. Proceedings of the Sixth International
Conference on Complex, Intelligent, and Software Intensive Systems
(CISIS 2012). Palermo, Italy. (6p).
[16] Wild, K.-P. (2000). Learning strategies in academic studies. Structures
and conditions. [Lernstrategien im Studium. Strukturen und
Bedingungen]. Münster: Waxmann.
[17] Tseng, S.-C., & Tsai, C.-C. (2010). Taiwan college students’ self-
efficacy and motivation of learning in online peer-assessment
environments. Internet and Higher Education, 13, 164-169.
[18] Kay, R.H., & Loverock, S. (2008). Assessing emotions related to
learning new software: The computer emotion scale. Computers in
Human Behavior, 24, 1605-1623.
[19] Macdonald, J. (2003). Assessing Online Collaborative Learning: Process
and Product. Computers & Education, 40(4), 377-391.
[20] IMS QTI. IMS Question & Test Interoperability Specification, Version
2.0 - Final Specification. Last retrieved March 3rd, 2012 from
http://www.imsglobal.org/question/index.html.
[21] Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In Usability
evaluation in industry. London: Taylor & Francis.
[22] Capuano, N., Miranda, S., & Orciuoli, F. (2009). IWT: A Semantic Web-
based Educational System. IV Workshop of the Working Group on “AI
& E-Learning”, pp. 11-16.