International Journal of Computer Science Research and Application
2013, Vol. 03, Issue. 03, pp. 27-36
ISSN 2012-9564 (Print)
ISSN 2012-9572 (Online)
© Author Names. Authors retain all rights.
IJCSRA has been granted the right to publish and share, Creative Commons 3.0
www.ijcsra.org
How to evaluate the quality of digital learning resources?
Abderrahim El Mhouti1, Azeddine Nasseh2, Mohamed Erradi2
1Laboratory of Computer Science, Operational Research and Applied Statistics, Abdelmalek Essaadi
University, Faculty of Sciences, Tetouan, Morocco. elmhouti@hotmail.com
2 Laboratory of Computer Science, Operational Research and Applied Statistics, Abdelmalek Essaadi
University, Higher Normal School, Tetouan, Morocco. azedinenasseh@yahoo.fr
Author Correspondence: B.P. 7, M’diq, 93200, Morocco, GSM: +212 672 35 70 30,
elmhouti@hotmail.com
Abstract
Digital learning resources are widely used in educational activities, from schools to universities and other higher education institutions. Evaluating the quality of these resources plays a significant role in designing and implementing attractive educational content. In this paper, we present the structure and theoretical foundations of an instrument for examining and evaluating the quality of digital learning resources. To invite pedagogues and computer scientists to think more about evaluating the quality of learning resources used face-to-face or online (e-learning), the study presents and discusses evaluation criteria that can guide the design of evaluation instruments intended for easy use. These criteria cover the four dimensions of digital learning resource quality: academic quality, pedagogical quality, didactic quality and technical quality. The paper presents practices relevant to the evaluation of each of these four dimensions and describes how each can be interpreted to evaluate digital learning resources. On the basis of this description, the article also presents an evaluation instrument implemented as a computer application. The findings of this study are expected to help education practitioners develop evaluation instruments, since these practitioners are concerned not only with designing their products but also with evaluating them.
Keywords: Digital learning resources, Evaluation, Evaluation instrument, Quality, Criteria.
1. Introduction
Thanks to the possibilities offered by Information and Communication Technology (ICT) in education today,
the number of digital learning resources available is rapidly increasing. Many educational products, such as multimedia digital learning resources and educational websites, are developed within the framework of campus-based or online education (e-learning).
However, the question now facing pedagogues and computer scientists is not whether to integrate digital pedagogy (its usefulness is evident and needs no further justification), but rather whether these products fulfil their pedagogical mission: do they really carry out their intended tasks (teaching and developing skills), or are they merely electronic versions of traditional courses that bring nothing special to the learner, who is sometimes disturbed by inadequate use of technology (choice of colours, number of links, complexity of the interface)? What techno-pedagogical criteria should be taken into consideration in the development of digital learning resources for education?
We propose in this work to contribute to this area by presenting the structure of an instrument for evaluating the quality of digital learning resources used in campus-based or online education (e-learning), and by setting out the different aspects and evaluation criteria to integrate into this structure. In this context, we propose an evaluation tool designed in the form of a computer application. The study aims to assist those in the field of education in evaluating the academic, pedagogical, didactic and technical quality of the digital pedagogical and educational resources they use or intend to use.
The first section of this paper presents the context of the study and explains the approach adopted to extract the different aspects and evaluation criteria for digital learning resources. In the following sections, we outline how the structure of the evaluation instrument can be conceived to easily evaluate digital learning resources, and we describe more precisely the different sections and criteria on which this structure is based. We also present the evaluation instrument we designed.
Finally, we conclude by discussing some of the larger implications of evaluation of digital learning
resources and the benefits of developing an evaluation instrument to meet the changing demands of quality
assurance and quality improvement for digital learning resources.
2. Context
The production of digital learning resources occurs in a variety of settings, many of which lack quality control procedures or guidelines. A brief survey of these resources offers abundant evidence that authors frequently fail to apply design principles established in the fields of instructional design, instructional psychology and the learning sciences. Further, many resources appear never to have been learner-tested or subjected to other evaluation processes. In our view, there is a quality problem that demands a multifaceted solution, involving both better education of digital learning resource designers and the design and development of models that incorporate quality assessment.
Literature in educational multimedia offers many criteria and recommendations that can guide and direct the design of digital educational resources in campus-based or online education (e-learning). We point out, in a non-exhaustive way, the work of several researchers: Romiszowski, 1986; Reigeluth, 1989; Flagg, 1990; Reeves, 1993; Carrier, 1996; Bibeau, 1999; Bazin, 1999; Gras, 2000; Gilbert, 2001; Hû, 2001; Benar & Sloim, 2001; Trigano, 2004. These works specify the conditions, methods, criteria and evaluation tools adapted to digital learning resources.
However, evaluating digital educational resources remains an arduous task. Effective evaluation is challenging because the processes and tools must account for both the pedagogical support and the graphical aspect. Although exhaustive and detailed evaluation instruments are used by some school systems to evaluate educational software products (e.g., Small, 1997; Squires & Preece, 1999), these instruments may not be suitable for evaluating all digital learning resources, because their criteria are not always easy to implement or are difficult to adapt to customized products.
3. Methodology
Our study aims to present aspects and evaluation criteria for evaluating the quality of multimedia digital learning resources and to propose an example of an evaluation tool. To carry it out, our approach is to consult a number of digital educational resources and educational websites, all dealing with the same subject, and then to identify elements that enable us to compare and evaluate them. We consider quality criteria specifically for multimedia learning resources, which we define as digital learning resources that combine text, images and other media.
The proposed approach identifies the main aspects and evaluation criteria and describes each separately in its own context: the academic, pedagogical, didactic and technical aspects. From this description and an exploration of the research conducted in this field, we collect the data and combine them into an evaluation instrument designed in the form of a computer application.
4. Structure of the evaluation instrument
The evaluation instrument is a tool to support the evaluation of multimedia learning resources. It is designed for eliciting ratings and can be made available both as a digital form (web form, software) and as a printable document (grid, questionnaire, etc.).
The structure we propose for evaluating the quality of digital learning resources is designed for easy use. It is developed using specific vocabulary to avoid multiple interpretations.
The specific criteria included in this structure are grouped under four main headings: the pedagogical, didactic, technical and academic quality aspects. The criteria are intended to encourage evaluators to think critically about the resource and evaluate some of its more detailed aspects. They are not listed in order of importance, which will vary according to the resource and its intended use.
In addition to the identification part and overview, used to identify and present each evaluated product, the evaluation tool should be built around the four aspects most relevant to evaluating both the content and the form of digital learning resources. Each section (academic quality, pedagogical quality, didactic quality and technical quality) must be associated with a set of criteria, and each criterion is in turn associated with one or more questions verifying the suitability of the examined product against that criterion. The whole forms a three-level tree structure, shown in Figure 1.
Figure 1: Tree structure of the evaluation instrument
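The three-level tree (sections, criteria, questions) described above can be sketched as a simple data model. The class names, attribute names and sample question below are our own illustration, not part of the instrument itself:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Question:
    text: str
    rating: Optional[int] = None  # 0-5 Likert rating, unset until evaluated

@dataclass
class Criterion:
    name: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class Section:
    name: str
    criteria: List[Criterion] = field(default_factory=list)

# The four top-level quality aspects of the instrument, with one
# criterion of the academic aspect filled in as an example
instrument = [
    Section("Academic quality", [
        Criterion("Information reliability", [
            Question("Is the information reliable, accurate and error-free?"),
            Question("Is this accuracy sustainable over time?"),
        ]),
    ]),
    Section("Pedagogical quality"),
    Section("Didactic quality"),
    Section("Technical quality"),
]
```

An evaluation then amounts to filling in the `rating` of every question, which is why the structure lends itself equally to a printable grid or a software form.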
In the next part, we describe more precisely the different sections and criteria on which the evaluation
instrument must be based.
5. Description of the different sections and criteria
5.1 Product identification
This section identifies and presents a digital educational resource. In this part, we provide general information
about the product. We indicate the name or title of the product and we identify the name of the authors or those
responsible for the production. This section also identifies the target audience and specifies whether the
objectives or targeted skills are shown.
5.2 Academic quality aspect
The objective of this section is to evaluate the quality of the information presented in the digital learning resource. Indeed, the quality of the information presented is an essential component of the experience learners will have when using the product. Two essential criteria define the concept of quality applied to information.
5.2.1 Information reliability
Information reliability lies in credibility and accuracy. To evaluate this criterion, we first ask whether the information is reliable, accurate and error-free. Is this accuracy sustainable over time? Is information security guaranteed? Does the perceived reliability correspond to the actual reliability of the information?
5.2.2 Information relevance
This criterion is related to the effectiveness of information. We ask whether the information transmitted will trigger desirable behaviours in the learner. Is the information workable and usable? These two elements of academic quality (reliability and relevance) are highly interdependent: the mechanisms implemented to ensure information reliability will obviously affect its relevance if the perceived reliability is good.
5.3 Pedagogical quality aspect
The evaluation of pedagogical quality is of paramount importance. To enhance learning and enable learners to construct their own knowledge, a digital learning resource must draw on a differentiated, active and learner-centred pedagogy that promotes the development of skills.
The evaluation of the instructional design of the resource involves an examination of its goals, objectives, teaching strategies, and assessment provisions. This section therefore examines the various facets of the educational dimension brought by the digital learning resource.
The main criteria against which each product is evaluated are:
5.3.1 Pedagogical formulation
Pedagogical formulation reflects a concern for comprehension by the learners who use digital educational resources. It is characterized by the quality of content simplification, the explanation of acronyms, the presence of a glossary, summaries or abstracts, and the use of diagrams, figures and illustrations.
5.3.2 Pedagogical construction
Pedagogical construction evaluates whether the structure of the digital learning resource promotes its use in a pedagogical context, through appropriate interactivity, logical organization, ease of orientation (e.g. summary, site map), ease of browsing (back/forward, return to home page, scroll box) and page readability (internal summary, back buttons).
5.3.3 Pedagogical strategies
This criterion evaluates the teaching strategies adopted. Developing an appropriate instructional strategy lies in
designing and organizing learning activities based on techniques, methods, approaches and diverse educational
models to handle different learning styles.
Teaching strategies should be based on active teaching approaches (constructivism, socio-constructivism)
to build meaningful and motivating situations for learners and engage them actively in learning.
The main sub-criteria applied when evaluating pedagogical strategies are:
Instructional goals and learner objectives are clearly stated:
Is the overall purpose of the resource concisely stated, if appropriate, with specific objectives stated for specific
components? Based on their experience, evaluators must judge whether the resource would fulfil its intended
purpose and meet the learning objectives.
The resource is suitable for a wide range of learning/teaching styles:
The resource uses a variety of approaches (behaviourism, cognitivism, constructivism, socio-constructivism)
and is flexible in its application (e.g., encourages teacher intervention, student contributions, co-operative
learning, discovery learning, collaborative teaching). Materials and suggested activities encourage the use of a
variety of learning styles and strategies (e.g., concrete, abstract, oral, written, multi-sensory, opportunities for
extension, inclusion of explicit aids for retention).
The resource promotes student engagement:
Focusing techniques and cueing devices, such as variations in typeface, boxes, underlining, and spacing are
included. The resource incorporates aids to accessibility (advance organizers, summaries). Questions should
encourage reflection. Questions and activities within the resource should attract attention and increase
understanding.
The methodology promotes active learning:
The methodology promotes critical thinking, research skills, problem solving, group decision making, etc.
Students assume increased responsibility for learning. For the decision-making actions, the number of decision
options should vary according to student needs.
The resource encourages group interaction:
The resource uses group-based learning methods such as cross-ability groups and co-operative learning.
The resource encourages student creativity:
Use of the resource encourages students to develop unique interpretations or solutions.
Pedagogy is innovative:
The resource demonstrates a fresh approach. Imagery, layout, presentation, pace, topics, suggested activities,
and instructional design all serve to promote student interest in the content.
5.3.4 Assessment methods
The assessment methods are the tools implemented for evaluation, monitoring of teaching and learner support, such as exercises and tests. This criterion aims to evaluate the assessment practices used and helps determine whether the assessment promotes or hinders the emergence of learning.
5.4 Didactic quality aspect
Didactics focuses on the central role of learning activities, disciplinary content and epistemology (the nature of
knowledge to be taught).
This section examines the didactic quality of digital pedagogical resources in education. Two key criteria define this quality:
5.4.1 Veracity of learning activities
To enable learners to manipulate the digital learning resource, the activities proposed in the product must be appropriate. These activities must refer to real problems that the learner could plausibly face outside the classroom.
5.4.2 Content of the educational tool
The single most salient aspect of quality in many discussions of educational materials is the quality of content. Sanger, M.J. and Greenbowe, T.J. (1999) demonstrated that biases and errors can easily slip into educational materials and cause problems for students. The content quality criterion asks reviewers to consider the veracity and accuracy of the learning resource, in addition to assessing whether the product provides a balanced presentation of ideas and contains an appropriate level of detail.
Further, to achieve the learning objectives, the content of a digital learning resource must be in line with its objectives and target audience. The knowledge conveyed must be transposed without being minimized, distorted or conceptually altered.
5.5 Technical quality aspect
The technical quality of a digital learning resource is paramount. In fact, it is not acceptable for a learner to be unable to complete an educational activity because of usability problems.
As with the educational criteria, we must also evaluate the relevance of the technical requirements and recommendations with regard to the usage area and the resource's distinctive features. All requirements must be weighed in light of the resource in question, as not all requirements and recommendations are relevant for all resources.
Technical quality measures the elaboration of the resource from the following perspectives:
5.5.1 Design
The visual content and organization of the product should show appropriate use of colours, interactivity, graphic quality, and pleasing aesthetics in the selected images and illustrations.
The visual appearance and sounds presented by digital learning resources, particularly as they relate to
information design, affect the resource’s aesthetic and pedagogical impact. Decisions about presentation design
should be informed by instructional and cognitive psychology, especially the theories and principles of
cognitive load, multimedia learning and information visualization.
5.5.2 Browsing
The product design must facilitate browsing. While manipulating the resource, the learner should be able to
find a plan, an index or a detailed table of contents. The suggested choices should be clear and the groupings
within the menus should be consistent.
5.5.3 Technological ingenuity
Multimedia techniques aim to combine and exploit the capacities of new technologies in education to enhance
knowledge transfer and assimilation of knowledge by learners.
During product development, the designer should use multimedia techniques in favour of information and
education such as animations, flashing text, animated images and multiple windows.
6. Description of the evaluation tool
We have implemented an assessment tool based on the four sections described previously. These sections are divided into criteria, and each criterion is associated with a set of questions, forming a three-level tree structure. The tool comprises 4 sections, 11 criteria and 40 questions.
6.1 Home interface
The home interface of the tool includes, in addition to the title, a main menu comprising the identification part and the sections and criteria to be evaluated (Figure 2).
Figure 2: Home interface of evaluation tool
This interface also includes a description of the tool to guide the evaluator.
6.2 Identification of product
In the main menu, the "Identification of product" button opens an interface containing the fields required to identify the product to be evaluated (Figure 3). After completing the fields, the evaluator must save the information entered.
Figure 3: Interface of product identification
6.3 Questions
When the evaluator selects a criterion, an interface appears (Figure 4) containing:
A bar with the name of the criterion under evaluation;
A field of questions characterizing the criterion under evaluation. The evaluator must assign each question a score ranging from 0 to 5 on a Likert scale¹.
Figure 4: Interface containing the questions
After evaluating a criterion, the evaluator must move on to the next one by clicking the "Next" button, until all criteria have been evaluated (all questions answered).
¹ The Likert scale is a rating scale on which the respondent expresses his or her degree of agreement or disagreement with a statement. The scale contains five or seven answer choices expressing the degree of agreement.
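As a sketch of how such a 0-to-5 rating might be captured and validated before being stored (the function name and error message are illustrative, not taken from the tool itself):

```python
def record_rating(value: int) -> int:
    """Validate one Likert rating before storing it.

    On the scale described above, 0 expresses strong disagreement
    with the statement and 5 strong agreement.
    """
    if not isinstance(value, int) or not 0 <= value <= 5:
        raise ValueError(f"Likert rating must be an integer from 0 to 5, got {value!r}")
    return value

# Example: a valid rating is returned unchanged; an out-of-range
# value raises ValueError instead of silently corrupting the scores.
print(record_rating(4))  # 4
```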
6.4 Scores
Once all criteria have been evaluated (all questions answered), the evaluator obtains quantitative results about the quality of the evaluated product (Figure 5).
Figure 5: Interface for obtaining results
Two types of results can be obtained (Figure 6):
A global score, which gives a general idea of the quality of the evaluated product;
The scores of each section: academic quality, pedagogical quality, didactic quality and technical quality. This type of result helps focus the analysis on the impact each section can have on the quality of the evaluated product.
Figure 6: Interface displaying scores
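The paper does not specify how the question ratings are aggregated into these scores; one plausible reading, assumed here purely for illustration, is an unweighted average at each level of the tree: question ratings average into a criterion score, criterion scores into a section score, and section scores into the global score.

```python
def criterion_score(ratings):
    """Mean of the 0-5 Likert ratings given to one criterion's questions."""
    return sum(ratings) / len(ratings)

def section_score(criteria):
    """Mean of the criterion scores; `criteria` maps criterion name -> ratings."""
    return sum(criterion_score(r) for r in criteria.values()) / len(criteria)

def global_score(sections):
    """Mean of the section scores; `sections` maps section name -> criteria."""
    return sum(section_score(c) for c in sections.values()) / len(sections)

# Hypothetical ratings for two of the four sections, invented for this sketch
sections = {
    "Academic quality": {
        "Information reliability": [4, 5, 3],
        "Information relevance": [4, 4],
    },
    "Pedagogical quality": {
        "Pedagogical formulation": [3, 4],
    },
}
print(global_score(sections))  # 3.75 with these hypothetical ratings
```

Averaging per level (rather than over all 40 questions at once) keeps a section with many questions from dominating the global score; a real instrument might instead weight sections explicitly.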
7. Conclusion
Learning with digital learning resources takes place in a context very different from traditional learning, one in which human interactions become mediated. In this new environment, where learners find themselves alone in front of the machine, careful attention to the quality of the digital content presented is particularly important.
However, this quality is not always assured. Hill and Hannafin (2001) observed that digital learning resources often suffer from a lack of regulation of content validity, reliability, and credibility. In this context, we anticipate that the design and use of evaluation instruments will help potential users identify resources of high academic, pedagogical, didactic and technical quality.
We believe evaluation instruments designed for the digital learning resources are needed for three reasons.
First, the design of multimedia learning materials is frequently not informed by relevant research in
psychology and education (Nesbit, Li, & Leacock, 2006; Shavinina & Loarer, 1999). This has resulted in easy
access to many digital learning resources of varying quality. Second, to mitigate this search problem, some
resource repositories use quality metrics to order search results (Vargo, Nesbit, Belfer, & Archambault, 2003).
The efficacy of this technique is directly dependent on the validity of the evaluation tool used to generate the
quality ratings. Third, quality criteria for summative evaluations have the potential to drive improvements in
design practice (Nesbit, Belfer, & Vargo, 2002).
To this end, we have identified in this article the sections and criteria for evaluating digital learning resources. Awareness of these criteria, which can affect resource quality, is an essential step towards elaborating, according to the tree structure presented in this paper, an evaluation instrument such as an evaluation grid (El Mhouti, Nasseh & Erradi, 2013) or the software tool presented here.
From this perspective, to create products that meet most teaching and learning criteria, evaluation should be conducted before these products are placed at the disposal of learners, in order to identify irregularities and make the necessary adjustments during the development process.
References
Crozat, S., Trigano, P. and Hû, O., 1999, EMPI: une méthode informatisée pour l'évaluation des didacticiels multimédias, RIHM, Revue d'Interaction Homme-Machine (ed. Europia), Volume 1, Issue 2.
Ecaterina G-P., 2003, Conception et Evaluation des Environnements pédagogiques sur le Web. Université de Technologie
de Compiègne (UTC).
El Mhouti A., Nasseh A., Erradi M., 2013, Development of a tool for quality assessment of digital learning resources,
International Journal of Computers Applications, Volume 64, No. 14.
Ezzahri, S., Talbi, M., Erradi, M., Khaldi, M., Jilali, A., 2008, Élaboration d'un outil d'évaluation de cours de formation continue à distance, Information, Savoirs, Décisions & Médiation (ISDM), n°39.
Flagg, B.N., 1990, Formative Evaluation for Educational Technologies. Hillsdale, NJ : Lawrence Erlbaum.
Hill, J. R. & Hannafin, M. J., 2001, Teaching and learning in digital environments: The resurgence of resource-based learning, Educational Technology Research and Development, Volume 49, Issue 3, pp. 37-52.
Hû O., 2001, Contribution à l’évaluation des logiciels multimédias pédagogiques, PhD Thesis, University of Technology of
Compiegne, France.
Nesbit, J. C., Li, J., & Leacock, T. L., Web-Based Tools for Collaborative Evaluation of Learning Resources, Journal of Systemics, Cybernetics and Informatics, Volume 3, Issue 5.
Leacock, T. L., & Nesbit, J. C., 2007, A Framework for Evaluating the Quality of Multimedia Learning Resources.
Educational Technology & Society, Volume 10, Issue 2, pp. 44-59.
Losby, J. and Wetmore, A., 2012, Using Likert Scales in Evaluation Survey Work.
Nesbit, J. C., Belfer, K., & Vargo, J., 2002, A convergent participation model for evaluation of learning objects. Canadian Journal of Learning and Technology, Volume 28, Issue 3, pp. 105-120.
Nesbit, J. C., Li, J., & Leacock, T. L., 2006, Web-based tools for collaborative evaluation of learning resources.
Park, I., Hannafin, M. J., 1993, Empirically-based guidelines for the design of interactive media, Educational Technology Research and Development, Volume 41, Issue 3.
Pierre M., Renata J., 2001, Outils pour l'analyse de sites Web éducatifs. Module n°3, version 4.
Prince Edward Island, 2008, Evaluation and Selection of Learning Resources: A Guide.
Reeves, T.C., 1993, Evaluating interactive multimedia, in Multimedia for Learning: Development, Application, Evaluation, Gayeski, D.M. (ed.), Englewood Cliffs, NJ: Educational Technology Publications, pp. 97-112.
Reigeluth, C.M., Schwartz, E., 1989, An instructional theory for the design of computer-based simulations. Journal of Computer-Based Instruction, Volume 16, pp. 1-10.
Romiszowski, A.J., 1986, Developing Auto-Instructional Materials: from Programmed Texts to CAL and Interactive
Video. New York, Nichols Publishing Company.
Sanger, M.J., & Greenbowe, T.J., 1999, An Analysis of College Chemistry Textbooks as Sources of Misconceptions and Errors in Electrochemistry, Journal of Chemical Education, Volume 76, Issue 6, pp. 853-860.
Scapin, D. and Bastien, C., 1997, Ergonomic criteria for evaluating the ergonomic quality of interactive systems. Behaviour & Information Technology.
Shavinina, L. V. & Loarer, E., 1999, Psychological evaluation of educational multimedia applications. European Psychologist, Volume 4, Issue 1, pp. 33-44.
Small, R., 1997, Assessing the motivational qualities of world wide web sites. US Department of Education Report.
Southern Regional Educational Board (2005). Principles of effective learning objects: Guidelines for development and use
of learning objects for the SCORE initiative of the Southern Regional Educational Board.
Squires, D. & Preece, J., 1999, Predicting quality in educational software: Evaluating for learning, usability and the synergy between them. Interacting with Computers, Volume 11, pp. 467-483.
Vargo, Nesbit, Belfer & Archambault, 2003, Learning object evaluation: Computer mediated collaboration and inter-rater reliability. International Journal of Computers and Applications, Volume 25, Issue 3, pp. 198-205.
A Brief Author Biography
Abderrahim EL MHOUTI is an e-learning and pedagogy researcher at the Laboratory of Computer Science, Operational Research and Applied Statistics, Abdelmalek Essaadi University, Faculty of Sciences, Tetouan, Morocco. E-mail: elmhouti@hotmail.com
Azeddine NASSEH is a professor and computer science researcher at the Laboratory of Computer Science, Operational Research and Applied Statistics, Abdelmalek Essaadi University, Higher Normal School, Tetouan, Morocco. E-mail: azedinenasseh@yahoo.fr
Mohamed ERRADI is a professor and e-learning and pedagogy researcher at the Laboratory of Computer Science, Operational Research and Applied Statistics, Abdelmalek Essaadi University, Higher Normal School, Tetouan, Morocco. E-mail: erradi-mo@yahoo.fr
Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By the
appearance in this open access journal, articles are free to use with the required attribution. Users must contact the corresponding authors for
any potential use of the article or its content that affects the authors’ copyright.
One of the most significant challenges of telepresence distance education is to bring the professor and the students closer together in a synchronistic educational experience where the professor is perceived as anatomically proportionate. Telepresence, an educational technology ecosystem using holograms, offers a way to solve this technological challenge. Our mixed exploratory research investigating this methodology had two purposes: (1) propose the key elements to teach distance courses synchronously in an educational technology ecosystem, and (2) demonstrate the technological, didactic practices that result in positive student learning outcomes in several specified courses. This methodology included applying a student questionnaire to collect their perceptions of the educational experience. The scores and written comments from the questionnaire were analyzed using Grounded Theory. On a Likert scale from 1 to 5, the students scored their educational experience, attaining a mean of 4.05. The positive perception affirmed that they valued: (a) recreating the natural dynamics of face-to-face classes, where the students perceived their professors as being physically present in the classroom; (b) professors renowned in their disciplines; (c) professor–student and campus and intercampus learning community interactions, and, finally, (d) class design and content. The main conclusions of this research were that students positively perceived the “wow” effect of the technology, feeling comfort, amazement, interest, and engagement. In addition, we found that professors and keynote speakers with excellent pedagogical skills and experts in their disciplines were well appreciated. Key elements for the success of the experience were professor–student, campus, and intercampus interactions and the quality of the technological and communication infrastructure.
... El Mhouti, Nasseh & Erradi (2013) proposed an instrument for the evaluation of the quality of digital learning resources. Their study presented evaluation criteria that were based on 'four dimensions of digital learning resources quality: academic quality, pedagogical quality, didactic quality and technical quality.' (El Mhouti et. ...
Article
The usage of emerging social and digital applications is growing rapidly among the current generation of students and academics, and researchers started exploring their effectiveness in enhancing students’ creativity. However, examining the most effective criteria for evaluating digital-media-enhanced creativity in art and design still needs further exploration. This pilot study seeks to develop a framework to assess students’ creativity and another framework to assess the effectiveness of multimedia-based teaching approaches in art and design educational contexts. Sixteen design instructors participated in a survey, which aimed to identify their experiences with multimedia-based pedagogy as a potentially effective approach in fostering students’ creativity as well as educators’ innovation in teaching. The paper identifies and ranks the criteria, which they thought are effective in assessing digitally-stimulated creativity in each field of graphic design. The ultimate goal is to provide educators with a set of evaluation criteria to guide the appropriate teaching and assessment of digital creativity. Keywords: Creativity, multimedia, pedagogy, innovation, digital creativity.
... It was developed using specific vocabulary to avoid multiple interpretations. To identify the main criteria to be evaluated, the proposed approach consists of consulting a number of digital educational resources and visiting educational websites all dealing with the same subject, and then identifying elements that make it possible to compare and evaluate them [6]. In addition to the identification and overview part, used to identify and present each evaluated product, the evaluation grid is built around four main sections relevant to the evaluation of both the content and the form of digital learning resources. ...
Article
Full-text available
Abstract—Today, the use of digital educational resources in teaching and learning is expanding considerably. Such expansion calls on educators and computer scientists to reflect more on the design of such products. However, this reflection exposes a number of criteria and recommendations that can guide and direct any teaching tool design, be it campus-based or online (e-learning). Our work is at the heart of this issue. We suggest, through this article, examining academic, pedagogical, didactic and technical criteria to conduct this study, which aims to evaluate the quality of digital educational resources. Our approach consists of addressing the specific and relevant factors of each evaluation criterion. We then explain the detailed structure of the evaluation instrument used: the "evaluation grid". Finally, we show the evaluation outcomes based on the conceived grid and then establish an analytical evaluation of the state of the art of digital educational resources.
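The four-section grid described in this abstract lends itself to a simple data-structure sketch. The dimension names follow the abstract, but the individual criteria, the 1–5 rating scale, and the equal weighting below are illustrative assumptions, not the authors' actual grid:

```python
# Hypothetical sketch of a four-section evaluation grid for a digital
# learning resource. Criterion names and the 1-5 scale are assumptions;
# the real grid's items are defined in the cited paper.
GRID = {
    "academic":    ["accuracy of content", "currency of information"],
    "pedagogical": ["clarity of learning goals", "feedback to the learner"],
    "didactic":    ["suitability of examples", "progression of difficulty"],
    "technical":   ["navigation and usability", "media quality"],
}

def score_resource(ratings):
    """Average the 1-5 ratings per dimension, then overall.

    `ratings` maps each criterion name to an integer rating from 1 to 5.
    """
    per_dimension = {}
    for dimension, criteria in GRID.items():
        values = [ratings[c] for c in criteria]
        per_dimension[dimension] = sum(values) / len(values)
    overall = sum(per_dimension.values()) / len(per_dimension)
    return per_dimension, overall

# Example: rate every criterion 4, except one weak technical item.
example = {c: 4 for cs in GRID.values() for c in cs}
example["media quality"] = 2
dims, total = score_resource(example)
```

Averaging per dimension before averaging overall keeps a weakness in one section (here, technical) visible rather than diluted across all items.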
Article
Full-text available
Within the context of lifelong learning, it is necessary for teachers to improve their competencies, including the competencies in the use of digital media. The paper presents partial results of research carried out within the VEGA 1/0913/15 project on Media Literacy of Young School-Age Children in the Context of Family and School Cooperation, while it also analyses the need to develop digital literacy, which is part of the VEGA 1/0748/20 project on Diagnosing Digital Literacy of Primary School Teachers in the Context of Undergraduate Training and Educational Reality. The empirical research had a diagnostic as well as quantitative and qualitative character. The subject of the research was media education of younger school-age pupils implemented in both formal and informal ways in Slovakia. The research involved 28 schools from all over Slovakia. The paper focuses mainly on the findings obtained from the questionnaires filled out by primary school teachers, interviews conducted with school management and content analysis of school educational programs.
Article
Full-text available
The results of our own research indicate the heterogeneity of hearing children of deaf parents (koda) in language development in the context of special educational needs. Koda acquire language and speech in an unusual communication environment. The aim of the research is to analyse the linguistic development of koda in terms of active and passive vocabulary, comprehension and use of grammatical forms, and comprehension of a longer text. The children's results on the normalised Linguistic Development Test were analysed. Koda may have difficulty mastering speech in its various planes and aspects and may develop language competences and skills unevenly.
Article
Full-text available
This paper summarizes research work conducted on the design and assessment of a set of usability dimensions called "ergonomic criteria". It also provides a detailed description of each of the individual criteria. The paper then mentions the inherent limitations of the method, discusses the notion of ergonomic quality, the differences in perspective compared to empirical testing, and identifies the potential users of the method. Finally, the paper stresses the limitations in the current state of development of the method and identifies research issues for further improving the method.
Article
Full-text available
Today, the use of computers in education is booming, and the use of digital learning resources and educational websites invites educators and computer scientists to reflect more on the design of such tools. This paper outlines a number of criteria and recommendations that can guide and direct any teaching tool design, be it campus-based or online (e-learning). This work is at the heart of this issue. It suggests examining academic, pedagogical, didactic and technological criteria to conduct this study, which aims to develop an assessment and analysis grid for the quality of educational programs and applications. The approach adopted consists of addressing the specific and relevant factors of each assessment criterion. Then, the article explains the detailed structure of the grid. Finally, on the basis of the description given, all data are collected in the evaluation grid and its methods of use are discussed.
Article
Full-text available
Focuses on prescriptions for designing the instructional overlay of computer-based simulations, which serves to optimize learning and motivation. The instructional functions of simulations and the instructional features that should be used to achieve these functions are described. The design of computer-based simulations is presented in the form of a general model that offers prescriptions for the design of the introduction, acquisition, application, and assessment stages of simulations and for dealing with the issue of control (system or learner). Variations on the general model are based on the nature of the behavior (using procedures, process principles, or causal principles), complexity of the content, form of learner participation, form of changes being simulated (physical or nonphysical), and motivational requirements.
Article
The properties that distinguish learning objects from other forms of educational software - global accessibility, metadata standards, finer granularity and reusability - have implications for evaluation. This article proposes a convergent participation model for learning object evaluation in which representatives from stakeholder groups (e.g., students, instructors, subject matter experts, instructional designers, and media developers) converge toward more similar descriptions and ratings through a two-stage process supported by online collaboration tools. The article reviews evaluation models that have been applied to educational software and media, considers models for gathering and meta-evaluating individual user reviews that have recently emerged on the Web, and describes the peer review model adopted for the MERLOT repository. The convergent participation model is assessed in relation to other models and with respect to its support for eight goals of learning object evaluation: (1) aid for searching and selecting, (2) guidance for use, (3) formative evaluation, (4) influence on design practices, (5) professional development and student learning, (6) community building, (7) social recognition, and (8) economic exchange.
Article
This article presents the structure and theoretical foundations of the Learning Object Review Instrument (LORI), an evaluation aid available through the E-Learning Research and Assessment Network at http://www.elera.net. A primary goal of LORI is to balance assessment validity with efficiency of the evaluation process. The instrument enables learning object users to create reviews consisting of ratings and comments on nine dimensions of quality: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, accessibility, reusability, and standards compliance. The article presents research and practices relevant to these dimensions and describes how each dimension can be interpreted to evaluate multimedia learning resources.
Article
As the number of World Wide Web sites continues to grow at an explosive rate, the need for design guidelines also increases. Although there are a number of resources that provide guidance on structure and content, few address the motivational aspects of Web sites. The Website Motivational Analysis Checklist (WebMAC) was developed to help diagnose the motivational quality of sites on the World Wide Web. WebMAC is based, in part, on Keller's ARCS Model of Motivational Design, and also incorporates many of the attributes that define the information science concept of "relevance," a critical concept in information retrieval. WebMAC specifies four general motivational criteria of Web sites: (1) Engaging--offers eye-catching visuals, attractive screen layout, humor, varied activities, novelty, and diverse and well-written content; (2) Meaningful--offers a statement of the purpose and importance of the site, accurate and updated information, meaningful examples and analogies, and quick and easy links to other relevant sites; (3) Organized--offers a site overview, summaries of key points, a help interface, and definitions of terms; and (4) Enjoyable for both the extrinsically and intrinsically motivated user--positive feedback on progress, user-controlled external rewards (such as animation), and quick response speed. WebMAC allows the designer or evaluator to assess the motivational quality of each of the four categories and plot a score on a graph, providing quick visual assessment of the site's strengths and weaknesses. The current version of the instrument is included.
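WebMAC's category-wise scoring, as the abstract describes it, can be sketched in a few lines. The item counts and rating scale below are assumptions for demonstration; the real instrument uses a fixed item set per category:

```python
# Illustrative sketch of WebMAC-style category scoring. The 0-3 item
# ratings and three items per category are assumptions; the actual
# checklist defines its own items and scale.
CATEGORIES = ("Engaging", "Meaningful", "Organized", "Enjoyable")

def category_scores(item_ratings):
    """Sum per-category item ratings so strengths and weaknesses stand out.

    `item_ratings` maps each category name to a list of item ratings.
    """
    return {cat: sum(item_ratings[cat]) for cat in CATEGORIES}

# Hypothetical ratings for one site.
ratings = {
    "Engaging":   [3, 2, 3],
    "Meaningful": [1, 1, 2],
    "Organized":  [3, 3, 3],
    "Enjoyable":  [2, 2, 1],
}
scores = category_scores(ratings)
weakest = min(scores, key=scores.get)  # category most in need of attention
```

Plotting the four totals side by side, as WebMAC prescribes, gives the quick visual profile of strengths and weaknesses the abstract mentions.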
Article
Describes the psychological evaluation of educational multimedia applications, which has the potential to be a new direction for applied psychology, arising as it does at the intersection of multimedia technology, education, and psychology, including general, cognitive, developmental, educational, and personality psychology. The paper analyzes the current situation in the field of educational multimedia, and proposes a framework for psychological evaluation of educational multimedia applications. The proposed approach to psychological evaluation of the quality of educational multimedia products involves the following 5 dimensions: (1) the individual; (2) the learning approach or model; (3) specific characteristics of multimedia technology; (4) the environment; and (5) the relationship among the previous 4 dimensions.
Article
The oxidation–reduction and electrochemistry chapters of 10 introductory college chemistry textbooks were reviewed for misleading or erroneous statements, using a list of student misconceptions. These misconceptions include the notions that the identity of the anode and cathode depends on the physical placement of the half-cell; half-cell potentials are independent of each other, meaningful, and measurable; electrons can flow through electrolyte solutions and the salt bridge; cation movement does not constitute an electrical current; electrodes have large net positive or negative charges that can be used to explain ion and electron flow; and electrolysis products cannot be predicted using standard reduction potentials. As a result of this analysis, we provide suggestions for chemistry instructors and textbook authors: simplifications such as always drawing the anode as the left-hand half-cell or only describing the flow of anions in electrolyte solutions and the salt bridge should be avoided; vague or misleading statements should be avoided; cell potentials should be calculated using the difference method instead of the additive method; simple electrostatic arguments should not be used to predict ion and electron flow in electrochemical cells; and all possible oxidation and reduction half-reactions should be considered when predicting the products of electrolysis. Keywords (Audience): High School / Introductory Chemistry
Article
The digital age has not simply changed the nature of resources and information; it has transformed several basic social and economic enterprises. Contemporary society—the settings where we live, work, and learn—has likewise changed dramatically. Both the amount of information and access to it have grown exponentially; a significant potential for using varied resources in numerous ways for instruction and learning has emerged. However, several issues related to the educational uses of varied resources (e.g., people, place, things, ideas) must be addressed if we are successfully to implement resource-based learning environments. In this paper, we trace the changing nature of resources and perspectives in their use for learning in the digital age, describe the overarching structures of resource-based learning environments, and identify key challenges to be addressed.