International Journal of Computer Science Research and Application
2013, Vol. 03, Issue. 03, pp. 27-36
ISSN 2012-9564 (Print)
ISSN 2012-9572 (Online)
© Author Names. Authors retain all rights.
IJCSRA has been granted the right to publish and share, Creative Commons 3.0
How to evaluate the quality of digital learning resources?
Abderrahim El Mhouti1, Azeddine Nasseh2, Mohamed Erradi2
1Laboratory of Computer Science, Operational Research and Applied Statistics, Abdelmalek Essaadi
University, Faculty of Sciences, Tetouan, Morocco. firstname.lastname@example.org
2 Laboratory of Computer Science, Operational Research and Applied Statistics, Abdelmalek Essaadi
University, Higher Normal School, Tetouan, Morocco. email@example.com
Author Correspondence: B.P. 7, M’diq, 93200, Morocco, GSM: +212 672 35 70 30,
Abstract
Digital learning resources have been widely used in educational activities ranging from schools to universities and higher
educational institutions. The evaluation of the quality of these resources plays a significant role in designing and
implementing attractive educational content. In this paper, we present the structure and the theoretical foundations to
elaborate an instrument for examination and evaluation of the quality of digital learning resources. In order to invite
pedagogues and computer scientists to think more about the evaluation of the quality of educational learning resources used
in face-to-face or online (e-learning) settings, the study presents and discusses the evaluation criteria that can guide
and direct the design of evaluation instruments intended for easy use. These criteria are specific to the four dimensions of digital learning resource
quality: academic quality, pedagogical quality, didactic quality and technical quality. The paper presents the practices
relevant to the evaluation of these four dimensions of quality and describes how each dimension can be interpreted to
evaluate digital learning resources. On the basis of the description given, the article also presents an evaluation instrument
designed in the form of a computer application. The findings of this study are expected to help educational practitioners
develop evaluation instruments, since such practitioners are concerned not only with designing their products but also with evaluating them.
Keywords: Digital learning resources, Evaluation, Evaluation instrument, Quality, Criteria.
1. Introduction
Thanks to the possibilities offered by Information and Communication Technology (ICT) in education today,
the number of digital learning resources available is rapidly increasing. Many educational products, such as
multimedia digital learning resources and educational websites, are developed within the framework of
campus-based or online education (e-learning).
However, the question now facing pedagogues and computer scientists is not whether to integrate digital
pedagogy or not (its usefulness is quite evident and needs no reflection), but it is rather about the pedagogical
mission of these products: Do these products really fulfil their intended tasks (teaching and developing skills),
or are they only electronic versions of traditional courses that will not bring anything special to the learner,
who is sometimes disturbed by inadequate use of technology (choice of colours, number of links, the
complexity of the interface)? What are the techno-pedagogical criteria to be taken into consideration for the
development of digital learning resources in education?
We propose in this work to make a contribution in this area by presenting the structure of an evaluation
instrument of the quality of digital learning resources, used in a campus-based or online education (e-learning),
and exposing different aspects and evaluation criteria to integrate into this structure. In this context, we
propose an evaluation tool designed in the form of a computer application. The study aims to assist those in the
field of education to evaluate the academic, pedagogical, didactic and technical quality of digital pedagogical
and educational resources they use or intend to use.
The first section of this paper presents the context of the study and explains the adopted approach to
extract the different aspects and evaluation criteria for digital learning resources. In the following sections, we
outline how the structure of the evaluation instrument can be conceived to easily evaluate digital learning
resources, and we describe more precisely the different sections and criteria on which this structure is based.
We also present the evaluation instrument we designed.
Finally, we conclude by discussing some of the larger implications of evaluation of digital learning
resources and the benefits of developing an evaluation instrument to meet the changing demands of quality
assurance and quality improvement for digital learning resources.
2. Context of the study
The production of digital learning resources occurs in a variety of settings, many of which lack quality control
procedures or guidelines. A brief survey on these resources offers abundant evidence that authors frequently
fail to apply design principles that have been established in the fields of instructional design, instructional
psychology and the learning sciences. Further, many resources appear never to have been learner-tested or
subjected to other processes of evaluation. In our view, there is a quality problem that demands a multifaceted
solution involving better education of digital learning resource designers and the design and development of
models that incorporate quality assessment.
Literature in educational multimedia offers many criteria and recommendations that can guide and direct
any design of digital educational resources in campus-based or online education (e-learning). We point
out, in a non-exhaustive way, the work of several researchers: Romiszowski, 1986; Reigeluth, 1989; Flagg,
1990; Reeves, 1993; R. Carrier, 1996; R. Bibeau, 1999; L. Bazin, 1999; A. Gras, 2000; D. Gilbert, 2001; O.
Hû, 2001; V. Benar & E. Sloim, 2001; P. Trigano, 2004. These works have been undertaken to specify the
conditions, methods, criteria and evaluation tools adapted to digital learning resources.
However, the evaluation of digital educational resources remains an arduous and difficult task. There are
significant challenges to effective evaluation because the processes and the evaluation tools should maximize
the pedagogical support and the graphical aspect. Although exhaustive and detailed evaluation
instruments are used by some school systems to evaluate educational software products (e.g., Small, 1997; Squires
& Preece, 1999), these instruments may not be suitable for evaluating all digital learning resources because
the criteria are not always easy to implement or are very difficult to adapt in the case of customized products.
3. The adopted approach
To carry out our study, which aims to present aspects and evaluation criteria to evaluate the quality of
multimedia and digital learning resources and propose an example of an evaluation tool, our approach is to
consult a number of digital educational resources and educational websites all dealing with the same
subject, and then to identify elements that enable us to compare and evaluate them. We consider quality criteria
specifically for multimedia learning resources, which we define as digital learning resources that combine text,
images and other media.
The proposed approach identifies the main aspects and evaluation criteria and describes each separately in
its own context: the academic, pedagogical, didactic and technical aspects. From the description given and an
exploration of the research conducted in this field, we collect all the data and combine them into an
evaluation instrument designed in the form of a computer application.
4. Structure of the evaluation instrument
The evaluation instrument is a tool to support evaluation of multimedia learning resources. It is designed for
eliciting ratings, and it can be made available both as a digital form (web form, software) and as a printable
document (grid, questionnaire, etc.).
The structure of the instrument that we propose to use to evaluate the quality of digital learning resources is
designed for ease of use. It is developed using specific vocabulary to avoid multiple interpretations.
The specific criteria included in this structure are grouped under four main headings: pedagogical quality
aspect, didactic quality aspect, technical quality aspect and academic quality aspect. The criteria are intended to
encourage evaluators to think critically about the resource and evaluate some of its more detailed aspects. The
criteria are not listed in order of importance, which will vary according to the resource and its intended use.
In addition to the identification part and overview, used to identify and to present each evaluated product,
the evaluation tool should be built around the four main relevant aspects for the evaluation of both the content
and the form of digital learning resources. Each section (academic quality, pedagogical quality, didactic quality
and technical quality) must be associated with a set of additional criteria, and each criterion is then associated
with one or more questions to verify the suitability of the product examined with each reference criterion. The
whole forms a three-level tree structure, as shown in Figure 1.
Figure 1: Tree structure of the evaluation instrument
In the next part, we describe more precisely the different sections and criteria on which the evaluation
instrument must be based.
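The three-level tree described above (sections, criteria, questions) maps naturally onto a small data model. The sketch below is illustrative only: the class names are ours, not part of the instrument, and the sample questions are paraphrased from the academic-quality criteria discussed in the next section.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    """Level 3: a question verifying one reference criterion."""
    text: str

@dataclass
class Criterion:
    """Level 2: a criterion grouped under a quality section."""
    name: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class Section:
    """Level 1: one of the four quality aspects."""
    name: str
    criteria: List[Criterion] = field(default_factory=list)

# A fragment of the instrument: the academic-quality section with its
# two criteria (information reliability and information relevance).
academic = Section("Academic quality", [
    Criterion("Information reliability", [
        Question("Is the information reliable, accurate and error-free?"),
        Question("Is this accuracy sustainable over time?"),
    ]),
    Criterion("Information relevance", [
        Question("Is the information workable and usable?"),
    ]),
])

total_questions = sum(len(c.questions) for c in academic.criteria)
```

The full instrument would hold four such `Section` objects, giving the 4/11/40 breakdown of sections, criteria and questions described later in the paper.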
5. Description of the different sections and criteria
5.1 Product identification
This section identifies and presents a digital educational resource. In this part, we provide general information
about the product. We indicate the name or title of the product and we identify the name of the authors or those
responsible for the production. This section also identifies the target audience and specifies whether the
objectives or targeted skills are shown.
5.2 Academic quality aspect
The objective of this section is to evaluate the quality of information presented in the digital learning resource.
Indeed, the quality of the information presented is an essential component of the experience learners will have
when using the product. There are two essential criteria to define the concept of quality applied to this information:
5.2.1 Information reliability
Information reliability lies in credibility and accuracy. To evaluate this criterion, we start questioning whether
the information is reliable, accurate and error-free. Is this accuracy sustainable over time? Is information
security guaranteed? Is there any correspondence between the perceived reliability and the actual reliability of the information?
5.2.2 Information relevance
This criterion is related to the effectiveness of information. We ask whether the information transmitted will
trigger desirable behaviours in the learner, and whether the information is workable and usable. These two elements
(reliability and relevance) of academic quality are highly interdependent: the mechanisms implemented to
ensure information reliability will obviously affect its relevance if the perceived reliability is good.
5.3 Pedagogical quality aspect
The evaluation of pedagogical quality is of paramount importance. To enhance learning and enable the learner
to construct his or her knowledge, a digital learning resource must rely on a differentiated, active and
learner-centred pedagogy which promotes the development of skills.
The evaluation of the instructional design of the resource involves an examination of its goals, objectives,
teaching strategies, and assessment provisions. This section therefore examines the various facets of the
educational dimension brought by the digital learning resource.
The main criteria against which each product is evaluated are:
5.3.1 Pedagogical formulation
Pedagogical formulation reflects a concern for comprehension by the learners who use digital educational
resources for learning. This formulation is characterized by the quality of content simplification, the explanation
of acronyms, the glossary provided, the presence of summaries or abstracts, as well as the use of diagrams and figures.
5.3.2 Pedagogical construction
Pedagogical construction evaluates whether the structure of the digital learning resource promotes its use in a
pedagogical context through the presence of appropriate interactivity, logic of organization, ease of orientation
(e.g. summary, site plan), ease of browsing (back-forward, back to home page, scroll box) and readability of
pages (internal summary, back buttons).
5.3.3 Pedagogical strategies
This criterion evaluates the teaching strategies adopted. Developing an appropriate instructional strategy lies in
designing and organizing learning activities based on techniques, methods, approaches and diverse educational
models to handle different learning styles.
Teaching strategies should be based on active teaching approaches (constructivism, socio-constructivism)
to build meaningful and motivating situations for learners and engage them actively in learning.
The main sub-criteria against which each product is evaluated for pedagogical strategies are:
Instructional goals and learner objectives are clearly stated:
Is the overall purpose of the resource concisely stated, if appropriate, with specific objectives stated for specific
components? Based on their experience, evaluators must judge whether the resource would fulfil its intended
purpose and meet the learning objectives.
The resource is suitable for a wide range of learning/teaching styles:
The resource uses a variety of approaches (behaviourism, cognitivism, constructivism, socio-constructivism)
and is flexible in its application (e.g., encourages teacher intervention, student contributions, co-operative
learning, discovery learning, collaborative teaching). Materials and suggested activities encourage the use of a
variety of learning styles and strategies (e.g., concrete, abstract, oral, written, multi-sensory, opportunities for
extension, inclusion of explicit aids for retention).
The resource promotes student engagement:
Focusing techniques and cueing devices, such as variations in typeface, boxes, underlining, and spacing are
included. The resource incorporates aids to accessibility (advance organizers, summaries). Questions should
encourage reflection. Questions and activities within the resource should attract attention and increase student engagement.
The methodology promotes active learning:
The methodology promotes critical thinking, research skills, problem solving, group decision making, etc.
Students assume increased responsibility for learning. For the decision-making actions, the number of decision
options should vary according to student needs.
The resource encourages group interaction:
The resource uses group-based learning methods such as cross-ability groups and co-operative learning.
The resource encourages student creativity:
Use of the resource encourages students to develop unique interpretations or solutions.
Pedagogy is innovative:
The resource demonstrates a fresh approach. Imagery, layout, presentation, pace, topics, suggested activities,
and instructional design all serve to promote student interest in the content.
5.3.4 Assessment methods
The assessment methods are tools implemented for evaluation, teaching monitoring and learners support, such
as exercises and tests. This criterion aims to evaluate the assessment practices used. It also helps to determine
whether the assessment promotes or hinders the emergence of learning.
5.4 Didactic quality aspect
Didactics focuses on the central role of learning activities, disciplinary content and epistemology (the nature of
knowledge to be taught).
This section examines the didactic quality of digital pedagogical resources in education. Two key criteria
can be defined to evaluate this dimension of quality:
5.4.1 Veracity of learning activities
To enable the learner to manipulate the presented digital learning resource, the activities proposed in the
product must be appropriate. These activities must refer to real problems that the learner could face
outside the classroom.
5.4.2 Content of the educational tool
The single most salient aspect of quality in many discussions of educational materials is quality of content.
Sanger M.J. and Greenbowe T.J. (1999) demonstrated that biases and errors can easily slip into educational
materials and cause problems for students. The content quality criterion asks reviewers to consider the veracity
and accuracy of the learning resource, in addition to assessing whether the product provides a balanced
presentation of ideas and contains an appropriate level of detail.
Further, to achieve the learning objectives, the content of the digital learning resource must be in line with the
objectives and the target audience. The knowledge conveyed may be adapted for teaching, but without being
minimized or distorted and without affecting the underlying concepts.
5.5 Technical quality aspect
The technical quality of a digital learning resource is paramount. In fact, it is not acceptable for a learner
to be unable to complete an educational activity because of usability problems.
As was the case with the educational criteria, we must also evaluate the relevance of the technical
requirements and recommendations in regard to usage area and distinctiveness. All the requirements must be
evaluated in light of the resource used, as not all requirements and recommendations are relevant for all resources.
The technical quality measures the resource's elaboration from the following perspectives:
5.5.1 Presentation design
The content and organization of the visual product should promote appropriate use of colours, interactivity,
graphic quality and a pleasing aesthetic for the selected images and illustrations.
The visual appearance and sounds presented by digital learning resources, particularly as they relate to
information design, affect the resource’s aesthetic and pedagogical impact. Decisions about presentation design
should be informed by instructional and cognitive psychology, especially the theories and principles of
cognitive load, multimedia learning and information visualization.
5.5.2 Ease of browsing
The product design must facilitate browsing. While manipulating the resource, the learner should be able to
find a plan, an index or a detailed table of contents. The suggested choices should be clear and the groupings
within the menus should be consistent.
5.5.3 Technological ingenuity
Multimedia techniques aim to combine and exploit the capacities of new technologies in education to enhance
knowledge transfer and assimilation of knowledge by learners.
During product development, the designer should use multimedia techniques in the service of information and
education, such as animations, flashing text, animated images and multiple windows.
6. Description of the evaluation tool
We have implemented an assessment tool based on the four sections we have described previously. These four
sections are divided into criteria, and each criterion is then associated with a set of questions to form a tree
structure with three levels. The tool comprises 4 sections, 11 criteria and 40 questions.
6.1 Home interface
The home interface of the tool includes, in addition to the title, a main menu comprising the identification
part and the sections and criteria to be evaluated (Figure 2).
Figure 2: Home interface of evaluation tool
This interface also includes a description of the tool to guide the evaluator.
6.2 Identification of product
In the main menu, the "Identification of product" button provides an interface containing the required fields
for identifying the product to be evaluated (Figure 3). After completing the fields, the evaluator must save
the information entered.
Figure 3: Interface of product identification
When the evaluator selects a criterion, an interface appears containing (Figure 4):
A bar with the name of the criterion under evaluation;
A field of questions characterizing the criterion under evaluation. The evaluator must assign each
question a score ranging from 0 to 5 on a Likert scale.
Figure 4: Interface containing the questions
After evaluating a criterion, the evaluator moves on to the next criterion by clicking the "Next" button, until
all criteria have been evaluated (all questions answered).
A Likert scale is a rating scale on which the respondent expresses his or her degree of agreement or disagreement with a statement.
The scale typically contains five or seven answer choices to express the degree of agreement.
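The 0-to-5 rating step described above can be sketched as a small helper that validates and stores each answer. This is a hypothetical illustration, not the tool's actual code; the question identifiers are invented for the example.

```python
def record_rating(answers: dict, question_id: str, score: int) -> None:
    """Store one Likert-style rating, rejecting values outside the 0-5 range."""
    if not isinstance(score, int) or not 0 <= score <= 5:
        raise ValueError(f"score must be an integer between 0 and 5, got {score!r}")
    answers[question_id] = score

# Example: the evaluator rates two questions of the academic-quality section.
answers = {}
record_rating(answers, "academic/reliability/q1", 4)
record_rating(answers, "academic/relevance/q1", 5)
```

Validating at entry time ensures that every stored answer is a usable data point before the scores are aggregated.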
Once all criteria have been evaluated (all questions answered), the evaluator can obtain quantitative results about the
quality of the evaluated product (Figure 5).
Figure 5: Interface for obtaining results
Two types of results can be obtained (Figure 6):
A global score, which gives a general idea of the quality of the evaluated product;
The score of each section: academic quality, pedagogical quality, didactic quality and technical
quality. This type of result helps focus the analysis on the impact each section can have on the
quality of the evaluated product.
Figure 6: Interface displaying scores
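The paper does not give the aggregation formula behind these two types of results. Assuming each section score is the mean of its question ratings expressed as a percentage of the 5-point maximum, and the global score is the unweighted mean over the four sections, the computation might look like this (all names and ratings are illustrative):

```python
def section_score(ratings):
    """Mean of a section's question ratings, as a percentage of the 0-5 maximum."""
    return 100 * sum(ratings) / (5 * len(ratings))

def overall_results(sections):
    """Per-section scores plus an unweighted global mean across sections."""
    per_section = {name: section_score(r) for name, r in sections.items()}
    global_score = sum(per_section.values()) / len(per_section)
    return per_section, global_score

# Hypothetical 0-5 ratings collected for each section's questions.
ratings = {
    "academic": [4, 5, 3],
    "pedagogical": [5, 5, 4, 4],
    "didactic": [3, 4],
    "technical": [4, 4, 5],
}
per_section, global_score = overall_results(ratings)
# per_section["academic"] is 80.0; the global score is the mean of the four sections.
```

Other weightings are possible (e.g., weighting sections by their number of questions); the unweighted mean simply treats the four quality dimensions as equally important.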
7. Conclusion
Learning with digital learning resources takes place in a context very different from that of traditional learning,
where human interactions become mediated. In this new environment where the learner finds himself alone in
front of the machine, careful attention to the presented digital content quality is particularly important.
However, this quality is not always assured. Hill and Hannafin (2001) observed that digital learning
resources often suffer from lack of regulation of content validity, reliability, and credibility. In this context, we
anticipate that the design and use of evaluation instruments will help potential users identify resources that
achieve high quality (academic, pedagogical, didactic and technical).
We believe evaluation instruments designed for the digital learning resources are needed for three reasons.
First, the design of multimedia learning materials is frequently not informed by relevant research in
psychology and education (Nesbit, Li, & Leacock, 2006; Shavinina & Loarer, 1999). This has resulted in easy
access to many digital learning resources of varying quality. Second, to mitigate this search problem, some
resource repositories use quality metrics to order search results (Vargo, Nesbit, Belfer, & Archambault, 2003).
The efficacy of this technique is directly dependent on the validity of the evaluation tool used to generate the
quality ratings. Third, quality criteria for summative evaluations have the potential to drive improvements in
design practice (Nesbit, Belfer, & Vargo, 2002).
To this end, we identified in this article the sections and criteria used to evaluate digital learning resources.
Being aware of these criteria, which can affect resource quality, is an essential step towards elaborating, according
to the tree structure presented in this paper, an evaluation instrument such as the evaluation grid (El Mhouti,
A., Nasseh, A. & Erradi, M., 2013) or the software tool presented in this paper.
From this perspective, to be able to create products that meet most teaching and learning criteria,
evaluation should be conducted before placing these products at the disposal of learners, in order to identify
irregularities and make the necessary adjustments in the development process.
References
Crozat, S., Trigano, P. and Hû, O., 1999, EMPI : Une méthode informatisée pour l'évaluation des didacticiels multimédias,
RHIM, la Revue d'Interaction Homme Machine (ed Europia), Volume 1, Issue 2.
Ecaterina G-P., 2003, Conception et Evaluation des Environnements pédagogiques sur le Web. Université de Technologie
de Compiègne (UTC).
El Mhouti A., Nasseh A., Erradi M., 2013, Development of a tool for quality assessment of digital learning resources,
International Journal of Computers Applications, Volume 64, No. 14.
Ezzahri S., Talbi M., Erradi M., Khaldi M., Jilali A, 2008, Elaboration d’un outil d’évaluation de cours de formation
continue a distance, Information, Savoirs, Décisions & Médiation (ISDM), n°39.
Flagg, B.N., 1990, Formative Evaluation for Educational Technologies. Hillsdale, NJ : Lawrence Erlbaum.
Hill, J. R. & Hannafin, M. J., 2001, Teaching and learning in digital environments: The resurgence of resource-based
learning, Educational Technology Research and Development, Volume 49, Issue 3, pp. 37–52.
Hû, O., 2001, Contribution à l'évaluation des logiciels multimédias pédagogiques, PhD Thesis, University of Technology of Compiègne.
Leacock, T. L., & Nesbit, J. C., 2007, A Framework for Evaluating the Quality of Multimedia Learning Resources.
Educational Technology & Society, Volume 10, Issue 2, pp. 44-59.
Losby, J. and Wetmore, A., 2012, Using Likert Scales in Evaluation Survey Work.
Nesbit, J. C., Belfer, K., & Vargo, J., 2002, A convergent participation model for evaluation of learning objects. Canadian
Journal of Learning and Technology, Volume 28, Issue 3, pp. 105–120.
Nesbit, J. C., Li, J., & Leacock, T. L., 2006, Web-based tools for collaborative evaluation of learning resources, Journal of Systemics, Cybernetics and Informatics, Volume 3, Issue 5.
Park, I., Hannafin, M. J., 1993, Empirically-based guidelines for the design of interactive media, Educational Technology
Research and Development, Volume 41, Issue 3.
Pierre M., Renata J., 2001, Outils pour l'analyse de sites Web éducatifs. Module n°3, version 4.
Prince Edward Island, 2008, Evaluation and Selection of Learning Resources: A Guide.
Reeves, T.C., 1993, Evaluating interactive multimedia, in Multimedia for Learning: Development, Application, Evaluation, Gayeski, D.M. (ed.), Englewood Cliffs, New Jersey: Educational Technology Publications, pp. 97-112.
Reigeluth, C.M., Schwartz, E., 1989, An instructional theory for the design of computer-based simulations. Journal of
Computer-Based Instruction, Volume 16, pp. 1-10.
Romiszowski, A.J., 1986, Developing Auto-Instructional Materials: from Programmed Texts to CAL and Interactive
Video. New York, Nichols Publishing Company.
Sanger, M.J., & Greenbowe, T.J., 1999, An Analysis of College Chemistry Textbooks as Sources of Misconceptions and
Errors in Electrochemistry, Journal of Chemical Education, Volume 76, Issue 6, pp. 853-860.
Scapin, D. & Bastien, C., 1997, Ergonomic criteria for evaluating the ergonomic quality of interactive systems. Behaviour &
Information Technology, Volume 16, pp. 220-231.
Shavinina, L. V. & Loarer, E., 1999, Psychological evaluation of educational multimedia applications. European
Psychologist, Volume 4, Issue 1, pp. 33–44.
Small, R., 1997, Assessing the motivational qualities of world wide web sites. US Department of Education Report.
Southern Regional Educational Board (2005). Principles of effective learning objects: Guidelines for development and use
of learning objects for the SCORE initiative of the Southern Regional Educational Board.
Squires, D. & Preece, J., 1999, Predicting quality in educational software: Evaluating for learning, usability and the synergy
between them. Interacting with Computers, Volume 11, pp. 467–483.
Vargo, J., Nesbit, J. C., Belfer, K. & Archambault, A., 2003, Learning object evaluation: Computer mediated collaboration and inter-rater
reliability. International Journal of Computers and Applications, Volume 25, Issue 3, pp. 198–205.
A Brief Author Biography
Abderrahim EL MHOUTI – e-learning & Pedagogy researcher. Laboratory of Computer Science Operational Research
and Applied Statistics, Abdelmalek Essaadi University, Faculty of Science, Tetouan, Morocco. E-mail :
Azeddine NASSEH – Professor and computer science researcher. Laboratory of Computer Science Operational Research
and Applied Statistics. Abdelmalek Essaadi University, Higher Normal School, Tetouan, Morocco. E-mail:
Mohamed ERRADI – Professor and e-learning & Pedagogy researcher. Laboratory of Computer Science Operational
Research and Applied Statistics. Abdelmalek Essaadi University, Higher Normal School, Tetouan, Morocco. E-mail:
Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By the
appearance in this open access journal, articles are free to use with the required attribution. Users must contact the corresponding authors for
any potential use of the article or its content that affects the authors’ copyright.