Open Learning
Vol. 22, No. 1, February 2007, pp. 5–28
ISSN 0268-0513 (print)/ISSN 1469-9958 (online)/07/010005–24
© 2007 The Open University
DOI: 10.1080/02680510601100135
Evaluating the learning in learning
objects
Robin H. Kay* and Liesel Knaack
University of Ontario Institute of Technology, Canada
A comprehensive review of the literature on the evaluation of learning objects revealed a number of
problem areas, including emphasizing technology ahead of learning, an absence of reliability and
validity estimates, over-reliance on informal descriptive data, a tendency to embrace general impres-
sions of learning objects rather than focusing on specific design features, the use of formative or
summative evaluation, but not both, and testing on small, vaguely described sample populations
using a limited number of learning objects. This study explored a learning-based approach for eval-
uating learning objects using a large, diverse sample of secondary school students. The soundness
of this approach was supported by estimates of reliability and validity, using formal statistics where
applicable, incorporating both formative and summative evaluations, examining specific learning
object features based on instructional design research, and testing of a range of learning objects.
The learning-based evaluation tool produced useful and detailed information for educators, design-
ers and researchers about the impact of learning objects in the classroom.
Keywords: Evaluate; Assess; Quality; Secondary school; Learning object
Overview
According to Williams (2000), evaluation is essential for every aspect of designing
learning objects, including identifying learners and their needs, conceptualizing a
design, developing prototypes, implementing and delivering instruction, and improv-
ing the evaluation itself. It is interesting to note, however, that Williams (2000) does
not emphasize evaluating the impact of learning objects on ‘actual learning’. This
omission is representative of the larger body of research on learning objects. In a
recent review of 58 articles (see Kay & Knaack, submitted), 11 studies focused on the
evaluation of learning objects; however, only two papers examined the impact of
learning objects on learning.
*Corresponding author. University of Ontario Institute of Technology, Faculty of Education, 2000
Simcoe St. North, Oshawa, Ontario, Canada L1H 7L7. Email: robin.kay@uoit.ca
A number of authors note that the ‘learning object’ revolution will never take place
unless instructional use and pedagogy are explored and evaluated (for example,
Wiley, 2000; Muzio et al., 2002; Richards, 2002). Duval et al. (2004) add that while
many groups seem to be grappling with issues that are related to the pedagogy of
learning objects, few papers include a detailed analysis of specific learning object
features that affect learning. Clearly, there is a need for empirical research that
focuses on evaluating the learning-based features of learning objects. The purpose of
this study was to explore and test a learning-based approach for evaluating learning
objects.
Literature review
Definition of learning objects
It is important to establish a clear definition of a ‘learning object’ in order to develop
an effective metric. Unfortunately, consensus regarding a definition of learning objects
has yet to be attained (for example, Muzio et al., 2002; Littlejohn, 2003; Parrish, 2004;
Wiley et al., 2004; Bennett & McGee, 2005; Metros, 2005). Part of the problem rests
in the values and needs of learning object developers and designers. The majority of
researchers have emphasized technological issues such as accessibility, adaptability,
the effective use of metadata, reusability and standardization (for example, Muzio
et al., 2002; Downes, 2001; Littlejohn, 2003; Siqueira et al., 2004; Koppi et al., 2005).
However, a second ‘learning focused’ pathway to defining learning objects has
emerged as a reaction to the overemphasis of technological characteristics (Baruque
& Melo, 2004; Bradley & Boyle, 2004; Wiley et al., 2004; Cochrane, 2005).
While both technical and learning-based definitions offer important qualities that
can contribute to the success of learning objects, research on the latter, as indicated,
is noticeably absent (Kay & Knaack, submitted). Agostinho et al. (2004) note that we
are at risk of having digital libraries full of easy-to-find learning objects we do not
know how to use in the classroom.
In order to address a clear gap in the literature on evaluating learning objects, a
pedagogically focused definition of learning objects has been adopted for the current
study. Learning objects are defined as ‘interactive web-based tools that support the
learning of specific concepts by enhancing, amplifying, and guiding the cognitive
processes of learners’. Therefore, specific learning qualities are valued more highly
than the technical qualities in the evaluation model being developed and assessed.
Theoretical approaches to evaluating learning objects
Summative versus formative evaluation. Researchers have followed two distinct paths
for evaluating learning objects—summative and formative. The first and most
frequently used approach is to provide a summative or final evaluation of a learning
object (Kenny et al., 1999; Van Zele et al., 2003; Adams et al., 2004; Bradley & Boyle,
2004; Jaakkola & Nurmi, 2004; Krauss & Ally, 2005; MacDonald et al., 2005). Various
summative formats have been used, including general impressions gathered using
informal interviews or surveys, measuring frequency of use and assessing learning
outcomes. The ultimate goal of this kind of evaluation has been to get an overview of
whether participants valued the use of learning objects and whether their learning
performance was altered. The second approach involves the use of formative assess-
ment when learning objects are being developed, an approach that is strongly
supported by Williams (2000). Cochrane (2005) provides a good example of how this
kind of evaluation model works where feedback is solicited from small groups at regular
intervals during the development process. Overall, the formative approach to evalua-
tion is not well documented in the learning object literature.
Convergent participation. Nesbit et al. (2002) outline a convergent evaluation model
that involves multiple participants—learners, instructors, instructional designers and
media developers. Each of these groups offers feedback throughout the development
of a learning object. Ultimately a report is produced that represents multiple values
and needs. A number of studies evaluating learning objects gather information from
multiple sources (for example, Kenny et al., 1999; Bradley & Boyle, 2004; Krauss &
Ally, 2005; MacDonald et al. 2005), although the formal convergence of participant
values advocated by Nesbit et al. (2002) is not pursued. The convergent evaluation
model is somewhat limited by the typically small number of participants giving feed-
back. In other words, the final evaluation may not be representative of what a larger
population might observe or experience.
Instructional design. Pedagogically focused designers of learning objects have empha-
sized principles of instructional design, interactivity, clear instructions, formative
assessment and solid learning theory (for example, Baruque & Melo, 2004; Bradley &
Boyle, 2004; Cochrane, 2005; Krauss & Ally, 2005). However, there is relatively little
research on the design principles as they apply to learning objects (Williams, 2000;
Cochrane, 2005). Recommendations for specific design characteristics are proposed
but are rarely evaluated (Downes, 2001; Krauss & Ally, 2005). It is more typical to
collect open-ended, informal feedback on learning objects without reference to specific
instructional design characteristics that might enhance or reduce learning perfor-
mance (for example, Kenny et al., 1999; Bradley & Boyle, 2004; Krauss & Ally, 2005).
Methodological issues
At least six key observations are noteworthy with respect to methods used to evaluate
learning objects. First, most studies offer clear descriptions of the learning objects
used; however, considerable variation exists in content. Learning objects examined
included drill-and-practice assessment tools (Adams et al., 2004) or tutorials
(Jaakkola & Nurmi, 2004), video case studies or supports (Kenny et al., 1999;
MacDonald et al., 2005), general web-based multimedia resources (Van Zele et al.,
2003) and self-contained interactive tools in a specific content area (Bradley & Boyle,
2004; Cochrane, 2005). The content and design of a learning object needs to be
considered when examining quality and learning outcomes (Jaakkola & Nurmi, 2004;
Cochrane, 2005).
Second, a majority of researchers use multiple sources to evaluate learning objects,
including surveys, interviews or email feedback from students and faculty, tracking
the use of learning objects by students, think-aloud protocols and learning outcomes
(for example, Kenny et al., 1999; Bradley & Boyle, 2004; Krauss & Ally, 2005;
MacDonald et al. 2005).
Third, most evaluation papers focus on single learning objects (Kenny et al., 1999;
Adams et al., 2004; Bradley & Boyle, 2004; Krauss & Ally, 2005; MacDonald et al.,
2005); however, using an evaluation tool to compare a range of learning objects can
provide useful insights. For example, Cochrane (2005) compared a series of four
learning objects based on general impressions of reusability, interactivity and peda-
gogy, and found that different groups valued different areas. Also, Jaakkola and
Nurmi (2004) compared drill-and-practice versus interactive learning objects and
found the latter to be significantly more effective in improving overall performance.
Fourth, with the exception of Jaakkola and Nurmi’s (2004) unpublished but well-
designed report, the evaluation of learning objects has been done exclusively in higher
education. Furthermore, sample size has been typically small and/or poorly described
(Van Zele et al., 2003; Adams et al., 2004; Cochrane, 2005; Krauss & Ally, 2005;
MacDonald et al., 2005), making it difficult to extend any conclusions to a larger
population.
Fifth, while most evaluation studies reported that students benefited from using
learning objects, the evidence is based on loosely designed assessment tools with no
validity or reliability (Kenny et al., 1999; Van Zele et al., 2003; Bradley & Boyle,
2004; Krauss & Ally, 2005). Finally, only 2 of the 11 evaluation studies (Kenny et al.,
1999; Van Zele et al., 2003) examined in Kay and Knaack’s (submitted) review of the
learning object literature used formal statistics. The majority of studies relied on
descriptive data and anecdotal reports to assess the merits of learning objects. The
lack of reliability and validity of evaluation tools and an absence of statistical rigour
make it difficult to have full confidence in the results presented to date.
In summary, previous methods used to evaluate learning objects have incorporated
clear descriptions of the tools used and multiple sources of data collection, although
limitations are evident with respect to sample size, representative populations, reli-
ability and validity of data collection tools and the use of formal statistics.
Purpose
The purpose of this study was to explore a learning-based approach for evaluating
learning objects. Based on a detailed review of studies looking at the evaluation of
learning objects, the following practices were followed:
a large, diverse sample of secondary school students was used;
reliability and validity estimates were calculated;
formal statistics were used where applicable;
both formative and summative evaluations were employed;
specific learning object features based on instructional design research were
examined;
a range of learning objects was tested; and
evaluation criteria focused on the learner, not the technology.
Method
Sample
Students. The sample consisted of 221 secondary school students (104 males, 116
females, one with missing gender data), 13–17 years of age, in grade 9 (n = 85), grade 11 (n = 67)
and grade 12 (n = 69) from 12 different high schools and three boards of education.
The students were obtained through convenience sampling.
Teachers. A total of 30 teachers (nine experienced, 21 pre-service) participated in the
development of the learning objects. The team breakdown by subject area was eight
for biology (two experienced, six pre-service), five for chemistry (two experienced,
three pre-service), five for computer science (one experienced, four pre-service), five
for physics (one experienced, four pre-service) and seven for mathematics (three
experienced, four pre-service).
Learning objects. Five learning objects in five different subject areas were evaluated
by secondary school students. Seventy-eight students used the mathematics learning
object (grade 9), 40 used the physics learning object (grades 11 and 12), 37 used the
chemistry learning object (grade 12), 34 used the biology learning object (grades 9
and 11) and 32 used the computer science learning object (grades 11 and 12). All
learning objects can be accessed online (http://education.uoit.ca/learningobjects). All
five learning objects met the criteria established by the definition of a learning object
provided for this paper. They were interactive, web-based and enhanced concept
formation in a specific area through graphical supports and scaffolding. A brief
description for each is provided below.
Mathematics. This learning object (slope of a line) was designed to help grade
9 students explore the formula and calculations for the slope of a line. Students used
their knowledge of slope to navigate a spacecraft through four missions. As the
missions progressed from level one to level four, less scaffolding was provided to solve
the mathematical challenges.
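The calculation the object scaffolds is the standard slope formula, m = (y2 - y1) / (x2 - x1). The short Python sketch below illustrates it; the point values and the spacecraft framing are illustrative only and are not taken from the learning object itself.

def slope(x1, y1, x2, y2):
    """Slope of the line through (x1, y1) and (x2, y2): m = (y2 - y1) / (x2 - x1)."""
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

# Illustrative values only: a 'spacecraft' path from planet (1, 2) to planet (4, 8)
print(slope(1, 2, 4, 8))  # 2.0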
Physics. This learning object (relative velocity) helped grade 11 and grade
12 students explore the concept of relative velocity. Students completed two case-study
questions, and then actively manipulated the speed and direction of a boat, along with
the river speed, to see how these variables affect relative velocity.
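The idea students manipulate here is vector addition: the boat's velocity relative to the ground is the sum of its velocity relative to the water and the river current. The Python sketch below illustrates this under the assumption that the river flows along the x axis; all numeric values are hypothetical, not taken from the learning object.

import math

def resultant_velocity(boat_speed, boat_heading_deg, river_speed):
    # River assumed to flow along +x; heading measured counter-clockwise from +x.
    vx = boat_speed * math.cos(math.radians(boat_heading_deg)) + river_speed
    vy = boat_speed * math.sin(math.radians(boat_heading_deg))
    return math.hypot(vx, vy), math.degrees(math.atan2(vy, vx))

# Boat aimed straight across the river (90 degrees) at 3 m/s in a 4 m/s current
print(resultant_velocity(3.0, 90.0, 4.0))  # (5.0, ~36.9 degrees from the current direction)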
Biology. This learning object (Mendelian genetics) was designed to help grade
11 students investigate the basics of Mendelian genetics, relating the genotype (genetic
traits) to the phenotype (physical traits), including monohybrid and dihybrid crosses.
Students had visual scaffolding to predict and complete Punnett squares. Each
activity finished with an assessment.
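A monohybrid Punnett square of the kind the object scaffolds can be generated mechanically by crossing each allele of one parent with each allele of the other. The sketch below uses a hypothetical heterozygous cross (Aa x Aa); it is illustrative and not drawn from the learning object.

from itertools import product

def punnett_square(parent1, parent2):
    """All offspring genotypes for a monohybrid cross, dominant allele written first."""
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

offspring = punnett_square("Aa", "Aa")
print(offspring)                                   # ['AA', 'Aa', 'Aa', 'aa']
dominant = sum('A' in g for g in offspring)
print(f"{dominant}:{len(offspring) - dominant}")   # 3:1 dominant to recessive phenotypes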
Chemistry. This grade 12-oriented learning object (Le Chatelier’s Principle 1)
demonstrated the three stresses (concentration, temperature and pressure change)
that can be imposed on a system at chemical equilibrium. Students explored how
equilibrium shifts related to Le Chatelier’s Principle. Students assessed their learning
in a simulated laboratory environment by imposing changes to equilibrated systems
and predicting the correct outcome.
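The qualitative rule students apply is that a system at equilibrium shifts so as to counteract an imposed stress. The sketch below encodes that rule for a hypothetical exothermic gas-phase reaction A + B <-> C + heat; the reaction and the stress labels are illustrative and are not those used in the learning object.

def predict_shift(stress):
    """Predicted shift for the hypothetical exothermic reaction A(g) + B(g) <-> C(g) + heat."""
    rules = {
        "add reactant":      "shifts right, consuming the added reactant",
        "add product":       "shifts left, consuming the added product",
        "raise temperature": "shifts left, absorbing the added heat",
        "raise pressure":    "shifts right, toward fewer moles of gas",
    }
    return rules.get(stress, "no shift predicted")

print(predict_shift("raise temperature"))  # shifts left, absorbing the added heat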
Computer Science. This learning object (Boolean logic) was designed to teach
grade 10 or grade 11 students the six basic logic operations (gates)—AND, OR,
NOT, XOR (exclusive OR), NOR (NOT-OR) and NAND (NOT-AND)—through
a visual metaphor of water flowing through pipes. Students selected the least number
of inputs (water taps) needed to get a result in the single output (water holding tank)
to learn the logical function of each operation.
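The six operations themselves are standard; only the water-pipe presentation belongs to the learning object. A minimal Python sketch of their truth tables:

def AND(a, b):  return a and b
def OR(a, b):   return a or b
def NOT(a):     return not a
def XOR(a, b):  return a != b
def NOR(a, b):  return not (a or b)
def NAND(a, b): return not (a and b)

# Two-input truth table (True = water tap on, False = off)
for a in (False, True):
    for b in (False, True):
        print(a, b, '->', NOT(a), AND(a, b), OR(a, b), XOR(a, b), NOR(a, b), NAND(a, b))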
Developing the learning objects
The design of the learning objects was based on the following principles. First, the
learning objects were created at the grassroots level by pre-service and in-service
teachers. Wiley (2000) maintained that learning objects need to be sufficiently chal-
lenging, so in-service teachers in this study were asked to brainstorm about and
select areas where their students had the most difficulty. Second, the learning objects
were designed to be context rich; however, they focused on relatively specific topic
areas that could be shared by different grades. Reusability, while important, was
secondary to developing meaningful and motivating problems. This approach is
supported by a number of learning theorists (for example, Larkin, 1989; Sternberg,
1989; Lave & Wenger, 1991). Third, the learning objects were both interactive and
constructivist in nature. Students interacted with the computer, but not simply by
clicking ‘next, next, next’. They had to construct solutions to genuine problems.
Fourth, the ‘octopus’ or resource model proposed by Wiley et al. (2004) was used.
The learning objects were designed to support and reinforce understanding of
specific concepts. They were not designed as stand-alone modules that could teach
concepts. Finally, the learning objects went through many stages of development
and formative evaluation, including a pilot study involving secondary school
students (see Table 3).
Procedure
Pre-service teachers (guided by an experienced mentor) and in-service teachers
administered the survey to their classes after using one of the learning objects within
the context of a lesson. Students were told the purpose of the study and asked to give
written consent if they wished to volunteer to participate. Teachers and teacher
candidates were instructed to use the learning object as authentically as possible.
Often the learning object was used as another teaching tool within the context of a
unit. After one period of using the learning object (approximately 70 minutes),
students were asked to fill out a survey (see Appendix).
Data sources
The data for this study were gathered using four items based on a seven-point Likert
scale, and two open-ended questions (see Appendix). The questions yielded both
quantitative and qualitative data.
Quantitative data. The first construct consisted of items one to four (Appendix), and
was labelled ‘perceived benefit’ of the learning object. The internal reliability estimate
was 0.87 for this scale. Criterion-related validity for the perceived benefit score was
assessed by correlating the survey score with the qualitative ratings (item 9—see scor-
ing below). The correlation was significant (0.64; p < 0.001).
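The paper does not name the statistics used; a plausible reading is Cronbach's alpha for the internal reliability estimate and a Pearson correlation for the criterion-related validity estimate. The Python sketch below shows both computations on placeholder data (the arrays are hypothetical, not the study's responses).

import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Internal consistency of a (respondents x items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
survey_items = rng.integers(1, 8, size=(221, 4))     # placeholder 7-point responses
qualitative_rating = rng.integers(-2, 3, size=221)   # placeholder -2..2 ratings

print(cronbach_alpha(survey_items))
print(stats.pearsonr(survey_items.mean(axis=1), qualitative_rating))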
Qualitative data—perceived benefits of learning objects. Item six (Appendix) asked
students whether the learning object was beneficial. Two-hundred and twenty-five
comments were made and categorized according to nine post-hoc categories (Table 1).
Each comment was then rated on a five-point Likert scale (-2 = very negative, -1 =
negative, 0 = neutral, 1 = positive, 2 = very positive). Two raters assessed all comments
made by students and achieved inter-rater reliability of 0.72. They then met, discussed
all discrepancies and attained 100% agreement.
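The statistic behind the 0.72 figure is not specified; one reasonable choice for two raters assigning the same ordinal ratings is Cohen's kappa, sketched below on hypothetical ratings.

from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical -2..2 ratings of the same eight comments by two raters
print(cohens_kappa([2, 1, 0, -1, 2, 1, 1, 0], [2, 1, 0, -2, 2, 1, 0, 0]))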
Qualitative data—learning object quality. Item five (Appendix) asked students what
they liked and did not like about the learning object. A total of 757 comments were
written down by 221 students. Student comments were coded based on well-
established principles of instructional design. Thirteen categories are presented with
examples and references in Table 2. In addition, all comments were rated on a five-
point Likert scale (-2 = very negative, -1 = negative, 0 = neutral, 1 = positive, 2 =
very positive).
Table 1. Coding scheme for assessing perceived benefits of learning objects (item six, Appendix)
Reason category Criteria Sample student comments
Timing When the learning object was introduced in the
curriculum
‘I think I would have benefited more if I used this program while
studying the unit’
‘It didn’t benefit me because that particular unit was over. It would
have helped better when I was first learning the concepts’
Review of basics/
reinforcement
Refers to reviewing, reinforcing concept, practice ‘going over it more times is always good for memory’
‘it did help me to review the concept and gave me practise in finding
the equation of a line’
Interactive/hands on/
learner control
Refers to interactive nature of the process ‘I believe I did, cause I got to do my own pace … I prefer more hands
on things (like experiments)’
‘Yes, it helped because it was interactive’
Good for visual
learners
Refers to some visual aspect of the process ‘I was able to picture how logic gates function better through using the
learning object’
‘I found it interesting. I need to see it’
Computer based Refers more generally to liking to work with computers ‘I think that digital learning kind of made the game confusing’
‘I think I somewhat did because I find working on the computer is
easier than working on paper’
Fun/interesting Refers to process being fun, interesting, motivating ‘I think I learned the concepts better because it made them more
interesting’
‘I think I did. The learning object grasped my attention better than a
teacher talking non-stop’
Learning related Refers to some aspect of the learning process ‘I don’t think I learned the concept better’
‘It did help me teach the concept better’
Clarity Refers to the clarity of the program and/or the quality
of instruction
‘I think it was very confusing and hard to understand’
‘Yes, this helped me. It made it much clearer and was very educational’
Not good at subject Refers to personal difficulties in subject areas ‘No, to be honest it bothered me. In general I don’t enjoy math and this
did not help’
Compare with other
method
Compared with other teaching method/strategy ‘Yes, because it … is better than having the teacher tell you what to do’
‘Would rather learn from a book’
No reason given ‘I didn’t benefit from any of it’
‘Yes’
Table 2. Coding scheme for assessing learning object quality (item five, Appendix)
Category (references) Criteria Sample student comments
Organization/layout (for example, Madhumita,
1995; Koehler & Lehrer, 1998)
Refers to the location or overall layout
of items on the screen
‘Sometimes we didn’t know where/what to click’
‘I found that they were missing the next button’
‘Easy to see layout’
‘[Use a] full screen as opposed to small box’
Learner control over interface (for example,
Akpinar & Hartley, 1996; Kennedy & McNaught,
1997; Bagui, 1998; Hanna et al., 1999)
Refers to the control of the user over
specific features of the learning object
including pace of learning
‘[I liked] that it was step by step and I could go
at my own pace’
‘I liked being able to increase and decrease
volume, temperature and pressure on my own. It
made it easier to learn and understand’
‘It was too brief and it went too fast’
Animation (for example, Oren, 1990; Gadanidis
et al., 2003; Sedig & Liang, 2006)
Refers specifically to animation
features of the program
‘You don’t need all the animation. It’s good to
give something good to look at, but sometimes it
can hinder progress’
‘I liked the fun animations’
‘Like how it was linked with little movies …
demonstrating techniques’
‘I liked the moving spaceship’
Graphics (for example, Oren, 1990; Gadanidis
et al., 2003; Sedig & Liang, 2006)
Refers to graphics (non-animated) of
the program, colours, size of text
‘The pictures were immature for the age group’
‘I would correct several mistakes in the graphics’
‘The graphics and captions that explained the
steps were helpful’
‘Change the colours to be brighter’
Audio (for example, Oren, 1990; Gadanidis et al.,
2003; Sedig & Liang, 2006)
Refers to audio features ‘Needed a voice to tell you what to do’
‘Needs sound effects’
‘Unable to hear the character (no sound card on
computers)’
Clear instructions (for example, Jones et al., 1995;
Kennedy & McNaught, 1997; Macdonald et al.,
2005)
Refers to clarity of instructions before
feedback or help is given to the user
‘Some of the instruction were confusing’
‘I … found it helpful running it through first and
showing you how to do it’
‘[I needed] … more explanations/Clearer
instructions’
Help features (for example, Jones et al., 1995;
Kennedy & McNaught, 1997; Macdonald et al.,
2005)
Refers to help features of the program ‘The glossary was helpful’
‘Help function was really good’
‘Wasn’t very good in helping you when you were
having trouble … I got more help from the
teacher than it’
Interactivity (for example, Akpinar & Hartley,
1996; Kennedy & McNaught, 1997; Bagui, 1998;
Hanna et al., 1999)
Refers to general interactive nature of
the program
‘Using the computer helped me more for
genetics because it was interactive’
‘I like that it is on the computer and you were
able to type the answers’
‘I liked the interacting problems’
Incorrect content/errors Refers to incorrect content ‘There were a few errors on the sight’
‘In the dihybrid cross section, it showed some
blond girls who should have been brunette’
Difficulty/challenge levels (for example, Savery &
Duffy, 1995; Hanna et al., 1999)
Was the program challenging? Too
easy? Just the right difficulty level?
‘Make it a bit more basic’
‘For someone who didn’t know what they were
doing, the first few didn’t teach you anything but
to drag and drop’
‘I didn’t like how the last mission was too hard’
Useful/informative (for example, Sedig & Liang,
2006)
Refers to how useful or informative the
learning object was
‘I like how it helped me learn’
‘I found the simulations to be very useful’
‘[The object] has excellent review material and
interesting activities’
‘I don’t think I learned anything from it though’
Assessment (Atkins, 1993; Sedighian, 1998;
Zammit, 2000; Kramarski & Zeichner, 2001;
Wiest, 2001)
Refers to summative feedback/
evaluation given after a major task (as
opposed to a single action) is
completed
No specific comments offered by students
Theme/motivation (Akpinar & Hartley, 1996;
Harp & Mayer, 1998)
Refers to overall theme and/or
motivating aspects of the learning
object
‘Very boring. Confusing. Frustrating’
‘Better than paper or lecture—game is good!’
‘I liked it because I enjoy using computers, and I
learn better on them’
Two raters assessed the first 100 comments made by students and achieved inter-
rater reliability of 0.78. They then met, discussed all discrepancies and attained
100% agreement. Next the raters assessed the remaining 657 comments with an
inter-rater reliability of 0.66. All discrepancies were reviewed and 100% agreement
was again reached.
Key variables
The key variables used to evaluate learning objects in this study were the following:
perceived benefit (survey construct of four items; Appendix);
perceived benefit (content analysis of open-ended question based on post-hoc
structured categories; Table 1);
quality of learning objects (content analysis of open-ended response question
based on 13 principles of instruction design; Table 2); and
learning object type (biology, computer science, chemistry, physics, mathematics).
Results
Formative evaluation
Table 3 outlines nine key areas where formative analysis of the learning objects was
completed by pre-service and in-service teachers, students, a media expert or an
external learning object specialist. Eight of these evaluations occurred before the
summative evaluation of the learning object. Numerous revisions were made
throughout the development process. A final focus group offered systematic analysis
of where the learning objects could be improved.
The focus groups also reported a series of programming changes that would help
improve the consistency and quality of the learning objects (see Table 4). It is impor-
tant to note that the changes offered for chemistry, biology, and computer science
learning objects were cosmetic, whereas those noted for mathematics and physics
were substantive, focusing on key learning challenges. Mathematics and physics
comments included recommendations for clearer instructions, which is consistent
with student evaluations where chemistry and biology learning objects were rated
significantly better than mathematics and physics objects (see Table 4).
The positive impact of formative assessment in creating effective learning objects is
reflected by a majority of students reporting that the learning objects were useful (see
detailed results below).
The remaining results reported in this study are summative evaluations collected
from students who actually used the learning objects in a classroom.
Perceived benefit of learning object—survey construct
Based on the average perceived benefit rating from the survey (items one to four,
Appendix), students felt the learning object was more beneficial than not (mean =
Table 3. Timeline for formative analysis used in developing the learning objects
Step Time Description
Mock prototyping September 2004 (two hours) Subject team introduced to learning objects by creating paper-based
prototype; Member from each team circulated and gave feedback on
clarity and design
Prototyping and usability November 2004 (one and a
half days)
Subject teams produced detailed paper prototype of their learning objects.
Every two hours subject teams were asked to circulate around the room,
and give feedback on other group’s learning object designs. It is not
uncommon for 10–20 versions of the paper-prototype to emerge over the
span of this one-and-a-half-day workshop
Electronic prototype December 2004 One team member creates PowerPoint prototype of learning object.
Throughout this process, feedback was solicited from other team
members. Edits and modifications were made through an online
discussion board where various versions of the prototype were posted and
comments from team members were discussed
Programming learning object January 2005 A Flash programmer/multimedia designer sat down with each group and
observed their electronic prototype. He discussed what challenges they
would have and different strategies for getting started in Flash
Team formative evaluation February 2005 (half day) Subject teams evaluate each other’s Flash versions of learning objects.
Each team had approximately 15 minutes to go through their learning
object, describe interactivity components and highlight sections that each
member had done. The entire learning object group provided feedback
during and after each presentation
Pilot test February 2005 (one day) Learning objects pilot tested on 40 volunteer students
External formative evaluation February 2005 (half day) CLOE expert provided one-to-one guidance and feedback for
improving the learning objects
Revision plan (before summative
evaluation)
February 2005 (half day) Subject teams digest student and expert feedback and make plan for
further revisions
Revision plan (after summative
evaluation)
April 2005 Subject teams brought together to evaluate the implementation of the
learning objects and plan future revisions (see Table 4)
Table 4. Proposed programming changes for learning objects by subject area
Learning object Proposed changes
Biology Integrate dihybrid activity.
In dihybrid Punnett square #1:
Blue squares updated to question marks.
Prompt for next button removed.
In dihybrid analysis #1:
Click and drag too repetitive.
Eye colour difficult to see in dihybrid section.
Fix credits
Monohybrid—‘What is monohybrid’ tab:
Remove word ‘dominant’ from heterozygous.
Monohybrid Punnett #1:
‘The off spring…’—should be on one line.
Dihybrid Punnett #1:
Words in combination box on one line.
Chemistry Give example of catalyst/noble gas in year 3.
‘Correct’ change text colour to green or blue.
Update credits.
Link to activities in each year.
Simulation.
Remove title from certificate frame.
Size of screen—too small
Computer science Buttons are active and hidden—e.g., upper left and main choice buttons
hidden behind instructions and help screen
Level 2—should there be a short pipe under the second-last OR gate?
Level 2—no ‘unsuccessful’ message
Level 3—no ‘unsuccessful’ message
Level 5—incorrect message if choose right-most two taps (message also
appears twice)
Level 5—no ‘unsuccessful’ message
Level 6—incorrect message if choose either of the right-most taps
Level 6—no ‘unsuccessful’ message
On level 6, above NAND, there is no end to the pipes
General—change pipe colour (to silver–grey?) and the ‘on’ tap colour to
green (to match text instructions and feedback from users)
About screen—bump up the version number (v1.4?, v1.5?)
Teacher info/expectations → ‘Ontario Curriculum Unit Planner’ should be
changed to ‘the content and intentions of the published Ontario
curriculum’
Teacher info/prerequisite—same wording as above
The big grey box at the left with all the mouse-over help—put this on the
right and make it a water tank, and then feed the flow from this—it would
make it a useful part of the metaphor, rather than a big help box taking up
a large part of the screen
4.8, standard deviation = 1.5; scale ranged from 1 to 7). Fourteen per cent of all
students (n = 30) disagreed (average score of 3 or less) that the learning object was of
benefit, whereas 55% (n = 122) agreed (average score of 5 or more) that it was useful.
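The breakdown reported above can be reproduced directly from each student's mean of the four benefit items; the sketch below uses placeholder scores rather than the study's data.

import numpy as np

rng = np.random.default_rng(1)
avg = rng.uniform(1, 7, size=221)   # placeholder per-student means of items one to four

print("mean =", round(avg.mean(), 1), " SD =", round(avg.std(ddof=1), 1))
print("disagreed (average of 3 or less):", round(100 * (avg <= 3).mean()), "%")
print("agreed (average of 5 or more):  ", round(100 * (avg >= 5).mean()), "%")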
Perceived benefit of learning object—content analysis
The qualitative comments (item six, Appendix) based on the post-hoc categories
outlined in Table 1 supported the survey results. Twenty-four per cent of the students
(n = 55) felt the overall learning object was not beneficial; however, 66% (n = 146)
felt it did provide benefit.
A more detailed examination indicated that the motivational, interactive and visual
qualities were most important to students who benefited from the learning object.
Whether they learned something new was also cited frequently and rated highly as a
key component. Presenting the learning object after the topic had already been
learned and poor instructions were the top two reasons given by students who did not
benefit from the learning object (Table 5).
Quality of learning object—content analysis
Overview. Students were relatively negative with respect to their comments about
learning object quality (item five, Appendix). Fifty-seven per cent of all comments
Table 4. (continued)
Learning object Proposed changes
Mathematics Make help more obvious. Have a bubble? Bubbles for areas of the screen
(console and intro to the screen).
Press ‘Enter’ on the keyboard instead of ‘next’ on screen (when prompted
for text).
Mission 2—students didn’t know that they needed to do the calculations
on their own using pencil and paper. Instructions need to be more explicit.
‘Instruction’ font size and colour are too small and too dark.
Options to go back to other missions, and when they get to the end, more
clarity as to what or where they will go next (more missions or choices).
Variety of scenarios (missions).
Display the equation of the line drawn from ‘planet’ to ‘planet’.
Physics Program locked up at times.
Instructions are not obvious.
Screen resolution problems.
Labels inconsistent.
No defined learning objectives for case 3.
Boat can disappear!?
Better used as a demonstration tool or as a problem-solving simulation.
Cannot teach the concept in isolation from a teacher.
were either very negative (n = 42, 6%) or negative (n = 392, 52%), whereas only 42%
of the comments were positive (n = 258, 34%) or very positive (n = 57, 8%) statements
about learning object quality.
Categories. An analysis of categories evaluating learning object quality (see Table 2
for descriptions) identified animation, interactivity and usefulness as the highest rated
areas, and audio, correct information, difficulty, clarity of instructions and help func-
tions as the lowest rated areas. Table 6 presents the means and standard deviation for
all categories assessing the quality of learning objects.
A one-way analysis of variance (ANOVA) comparing categories of learning object
quality was significant (p < 0.001). Audio, correct information and difficulty were
Table 5. Mean ratings for reasons given for perceived benefits of learning objects (item nine)

Reason | n | Mean | Standard deviation
Fun/interesting | 17 | 1.35 | 0.74
Visual learners | 33 | 1.24 | 0.84
Interactive | 30 | 1.17 | 1.34
Learning related | 37 | 0.81 | 1.13
Good review | 60 | 0.80 | 1.04
Computer based | 5 | 0.20 | 1.40
Compare with another method | 24 | 0.00 | 1.18
Timing | 21 | -0.29 | 1.19
Clarity | 33 | -0.55 | 0.00
Not good at subject | 3 | -1.35 | 0.38
Table 6. Mean ratings for categories evaluating learning object quality

Category | n | Mean | Standard deviation
Animations | 27 | 0.81 | 0.74
Interactivity | 47 | 0.66 | 0.84
Useful | 39 | 0.51 | 1.34
Assessment | 9 | 0.44 | 1.13
Graphics | 84 | 0.25 | 1.04
Theme/motivation | 125 | 0.12 | 1.40
Organization | 34 | 0.06 | 1.18
Learner control | 75 | -0.12 | 1.19
Help functions | 42 | -0.43 | 1.02
Clear instructions | 138 | -0.61 | 0.95
Difficulty | 107 | -0.67 | 0.81
Information correct | 17 | -1.00 | 0.00
Audio | 13 | -1.15 | 0.38
rated significantly lower than animations, interactivity and usefulness (Scheffé post-
hoc analysis, p < 0.05).
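A minimal sketch of this kind of comparison, assuming a one-way ANOVA over the -2 to 2 comment ratings grouped by quality category; the rating arrays are placeholders, and the Scheffé post-hoc step is only noted in a comment because it is not part of scipy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ratings_by_category = {          # placeholder ratings, roughly matching the Table 6 means
    "animations":    rng.normal(0.8, 0.7, 27),
    "interactivity": rng.normal(0.7, 0.8, 47),
    "difficulty":    rng.normal(-0.7, 0.8, 107),
    "audio":         rng.normal(-1.2, 0.4, 13),
}

f, p = stats.f_oneway(*ratings_by_category.values())
print(f"F = {f:.2f}, p = {p:.5f}")
# A Scheffé post-hoc comparison of category pairs would follow here (not shown).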
Categories—likes only. One might assume that categories with mean ratings close to
zero are not particularly important with respect to evaluation. However, it is possible
that a mean of zero could indicate an even split between students who liked and
disliked a specific category. Therefore, it is worth looking at what students liked about
the learning objects, without dislikes, to identify polar ‘hot spots’. A comparison of
means for positive comments confirmed that usefulness (mean = 1.33) was still
important, but that theme and motivation (mean = 1.35), learner control (mean =
1.35) and organization of the layout (mean = 1.20) also received high ratings. These
areas had mean ratings that were close to zero when negative comments were
included (see Table 6). This indicates that students had relatively polar attitudes
about these categories.
Categories—dislikes only. A comparison of means for negative comments indicated
that usefulness (mean = -1.33) remained important; however, theme and motivation
(mean = -1.32) was also perceived as particularly negative. Students appeared to
either like or dislike the theme or motivating qualities of the learning object.
Correlation between quality and perceived benefit scores. Theme and motivation (r =
0.45, p < 0.01), the organization of the layout (r = 0.33, p < 0.01), clear instructions
(r = 0.33, p < 0.01) and usefulness (r = 0.33, p < 0.01) were significantly correlated
with the perceived benefit survey (items one to four, Appendix).
Learning object type
A multivariate ANOVA was used to examine differences among learning object types
(subject) with respect to perceived benefits and quality of learning object (Table 7).
Significant differences among learning objects were observed for perceived benefit
Table 7. Multivariate ANOVA for learning object quality, perceived benefits (survey), and
perceived benefits (content analysis) for learning object type

Source | Degrees of freedom | Sum of squares | Mean square | F value | Scheffé post-hoc analysis
Learning quality (item five) | 4 | 18.0 | 4.5 | 13.3* | Biology, computer science, chemistry > mathematics, physics**
Perceived benefits (survey) | 4 | 54.2 | 13.5 | 13.3* | Biology, computer science, chemistry > mathematics, physics**
Perceived benefits (content analysis) | 4 | 1942.5 | 485.6 | 18.3* | Biology, computer science, chemistry > mathematics, physics**

*p < 0.001, ** p < 0.05.
(survey and content analysis p < 0.001) and learning object quality (p < 0.001). An
analysis of contrasts revealed that the chemistry, biology and computer science learn-
ing objects were rated significantly higher with respect to perceived benefit and learn-
ing object quality (p < 0.05).
While the number of observations was too small to make comparisons among post-
hoc categories for perceived benefit (Table 1), a series of ANOVAs was run compar-
ing mean learning object ratings for the categories used to assess learning object quality.
A majority of the categories revealed no significant effect, although three areas
showed significant differences among learning objects: learner control, clear instruc-
tions, and theme/motivation. The chemistry learning object was rated significantly
higher than the mathematics and biology learning objects with respect to learner
control (p < 0.001). The chemistry and biology learning objects were rated signifi-
cantly higher than the mathematics and physics learning objects with respect to clear
instructions (p < 0.001). Finally, the computer science learning object was rated
significantly higher than the mathematics learning object with respect to theme/moti-
vation (p < 0.001). These results are partially compromised because not all learning
objects were experienced by students from each grade.
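A simplified sketch of this analysis, assuming separate univariate ANOVAs per outcome measure with learning object type as the factor (the study used a multivariate ANOVA; the data frame below holds placeholder data with hypothetical column names):

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
subjects = ["mathematics", "physics", "chemistry", "biology", "computer science"]
df = pd.DataFrame({
    "subject": rng.choice(subjects, size=221),
    "perceived_benefit": rng.normal(4.8, 1.5, 221),   # placeholder survey means
    "quality_rating": rng.normal(0.0, 1.0, 221),      # placeholder comment ratings
})

for measure in ["perceived_benefit", "quality_rating"]:
    groups = [g[measure].to_numpy() for _, g in df.groupby("subject")]
    f, p = stats.f_oneway(*groups)
    print(f"{measure}: F = {f:.2f}, p = {p:.4f}")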
Feedback from learning object teams
With respect to the positive qualities of learning objects, two key themes emerged in
the focus groups for all five learning objects: graphics and interactivity. These were
the two qualities that students liked best. Regarding areas for improvement, feedback
varied according to the specific learning object used. The biology group reported that
students wanted better audio and more challenges. The chemistry group noted that
teacher support was necessary for the learning objects and that some instructions
were unclear. The computer science group commented that students liked the learn-
ing object but wanted more difficult circuits. The mathematics group felt the success
of the learning object was tied closely to when the concept was taught and in what
format (group versus individual). Finally, the physics group observed that a number
of bugs and obscure instructions slowed students down.
Discussion
The purpose of this study was to explore a learning-based approach for evaluating
learning objects. Key issues emphasized were sample population, reliability and valid-
ity, using formal statistics where applicable, incorporating both formative and
summative evaluations, examining specific learning object features based on instruc-
tional design research, testing of a range of learning objects, and focusing on the
learner, not the technology.
Sample population
The population in this study is unique compared with previous evaluations of learn-
ing objects. A large, diverse sample of secondary school students was used to provide
feedback, instead of a relatively small, vaguely defined, higher education group.
Larger, more diverse populations are needed to ensure confidence in the final results.
The sample in this study permitted a more in-depth analysis of specific learning object
features that affected learning.
Reliability and validity
This study is unique in attempting to develop a reliable and valid metric to evaluate
the benefits and specific impact of a wide range of learning object qualities. The
perceived-benefits scale proved to be reliable and valid. The quality scale was also
reliable and partially validated by focus group data gathered from teachers after the
summative evaluation. The coding scheme (see Table 2) based on sound principles
of instructional design with 100% inter-rater reliability was particularly useful in
isolating and identifying salient qualities of learning objects. Overall, the evaluation
tool used in this study provided a reasonable foundation with which to assess the
impact of learning objects.
Data analysis
While descriptive analysis proved to be valuable in providing an overview of
perceived benefits and quality of the learning objects tested, inferential statistics
provided useful information on the relationship between perceived benefits and
learning quality, as well as the individual learning qualities deemed to be most
important by students. The combination of descriptive and inferential statistics,
not regularly seen in previous learning object research, is critical to establishing a
clear, reliable understanding of how learning objects can be used as effective
teaching tools.
Formative and summative feedback
Formative assessment was critical in this study for developing a set of five workable
learning objects in a relatively short time period (eight months). This approach is
consistent with Williams (2000) and Cochrane’s (2005) philosophy of design. In
addition, formative assessment helped to validate the summative evaluation results
reported by students. In the current study, summative evaluation based on theoreti-
cally grounded instructional design principles provided detailed pedagogical informa-
tion lacking in previous learning object research, where informal, general and
anecdotal results were the norm (for example, Kenny et al., 1999; Bradley & Boyle,
2004; Krauss & Ally, 2005). While most evaluation studies report that learning
objects are positively received by students and faculty, it is difficult to determine the
qualities in a learning object that contribute to or inhibit learning. The metric used
in the current study opens the door to a more informative discussion about what
learning object features are beneficial.
Comparing learning objects
The comparison of five different learning objects in this study demonstrated that even
when learning objects are designed to be similar in format, considerable variation
exists with respect to perceived benefit and quality. This result is consistent with
Cochrane (2005) and Jaakkola and Nurmi (2004). An effective evaluation tool must
account for the wide range of learning objects that have been and are currently being
produced. For the five objects in this study, learner control, clear instructions and
theme were the key distinguishing features. It is important to note that different
features might take precedence for other learning object designs. For example, inter-
activity might be more important when tutorial-based learning objects are compared
with tool-based learning objects.
Metric based on principles of instructional design
Perhaps the strongest element of the evaluation model presented in this study is the
use of well-established instructional design principles to code feedback from students.
As stated earlier, even though the developers of learning objects have emphasized
features such as interactivity, clear instructions and solid learning theory in the design
process (Bradley & Boyle, 2004; Cochrane, 2005; MacDonald et al., 2005; Sedig &
Liang, 2006), evaluation tools have opted for a more general perspective on whether
a learning object is effective. It is argued that a finer metric is needed to identify
features that have a significant impact on learning. This allows educators to supple-
ment problem areas when using learning objects and gives designers a plan for future
modifications. In the current study, organization of the layout, learner control, clear
instructions and theme were critical hotspots where the use of learning objects
enhanced or inhibited learning. If a learning object provides clear instructions, is well
organized, is easy to use and has a motivating theme, secondary school students are
more likely to feel they have benefited from the experience. These results match the
qualitative feedback reported by Cochrane (2005) and MacDonald et al. (2005) for
higher education students. Finally, formative feedback from focus groups helps trans-
late statistical differences observed into concrete suggestions for change.
Focusing on learner, not technology
The learning-based approach used in the current study permits designers and educa-
tors to answer the question of ‘what features of a learning object provide the most
educational benefit to secondary school students?’ The evidence suggests that
students will benefit more if the learning object has a well-organized layout, is inter-
active, provides visual representations that make abstract concepts more concrete,
offers clear instructions and has a fun or motivating theme.
Overall, secondary school students appear to be relatively receptive to using
learning objects. While almost 60% of the students were critical about one or more
learning object features, roughly two-thirds of all students perceived learning
objects as beneficial because they were fun, interactive, visual and helped them
learn. Students who did not benefit felt that learning objects were presented at the
wrong time (e.g. after they had already learned the concept) or that the instructions
were not clear enough. Interestingly, student feedback, both positive and negative,
emphasized learning. While reusability, accessibility and adaptability are given
heavy emphasis in the learning object literature, when it comes to the end user,
learning features appear to be more important.
Future research
This study was a first step in developing a pedagogically based evaluation model for
evaluating learning objects. While the study produced useful information for educa-
tors, designers and researchers, there are at least five key areas that could be
addressed in future research. First, a set of pre-test and post-test content questions is
important to assess whether any learning actually occurred. Second, a more system-
atic survey requiring students to rate all quality and benefit categories (Tables 1 and
2) would help to provide more comprehensive assessment data. Third, details about
how each learning object is used are necessary to open up a meaningful dialogue on
the kind of instructional wrap that is effective with learning objects. Fourth, use of
think-aloud protocols would be helpful to examine actual learning processes while
learning objects are being used. Finally, a detailed assessment of computer ability,
attitudes, experience and learning styles of students might provide insights about the
impact of individual differences on the use of learning objects.
Summary
Based on a review of the literature, it was argued that a learning-based approach for
evaluating learning objects was needed. Limitations in previous evaluation studies
were addressed using a large, diverse sample, providing reliability and validity esti-
mates, using formal statistics to strengthen any conclusions made, incorporating both
formative and summative evaluations, examining specific learning object features
based on principles of instructional design, and testing of a range of learning objects.
It was concluded that the learning-based approach produced useful and detailed
information for educators, designers and researchers about the impact of learning
objects in the classroom.
References
Adams, A., Lubega, J., Walmsley, S. & Williams, S. (2004) The effectiveness of assessment learn-
ing objects produced using pair programming, Electronic Journal of e-Learning, 2(2). Available
online at: http://www.ejel.org/volume-2/vol2-issue2/v2-i2-art1-adams.pdf (accessed 28 July
2005).
Agostinho, S., Bennett, S., Lockyear, L. & Harper, B. (2004) Developing a learning object meta-
data application profile based on LOM suitable for the Australian higher education market,
Australasian Journal of Educational Technology, 20(2), 191–208.
Akpinar, Y. & Hartley, J. R. (1996) Designing interactive learning environments, Journal of
Computer Assisted Learning, 12(1), 33–46.
Atkins, M. J. (1993) Theories of learning and multimedia applications: an overview, Research
Papers in Education, 8(2), 251–271.
Bagui, S. (1998) Reasons for increased learning using multimedia, Journal of Educational Multimedia
and Hypermedia, 7(1), 3–18.
Baruque, L. B. & Melo, R. N. (2004) Learning theory and instructional design using learning
objects, Journal of Educational Multimedia and Hypermedia, 13(4), 343–370.
Bennett, K. & McGee, P. (2005) Transformative power of the learning object debate, Open Learning,
20(1), 15–30.
Bradley, C. & Boyle, T. (2004) The design, development, and use of multimedia learning objects,
Journal of Educational Multimedia and Hypermedia, 13(4), 371–389.
Cochrane, T. (2005) Interactive QuickTime: developing and evaluating multimedia learning
objects to enhance both face-to-face and distance e-learning environments, Interdisciplinary
Journal of Knowledge and Learning Objects, 1, 33–54. Available online at: http://ijklo.org/
Volume1/v1p033-054Cochrane.pdf (accessed 3 August 2005).
Downes, S. (2001) Learning objects: resources for distance education worldwide, International
Review of Research in Open and Distance Learning, 2(1). Available online at: http://
www.irrodl.org/content/v2.1/downes.html (accessed 1 July 2005).
Duval, E., Hodgins, W., Rehak, D. & Robson, R. (2004) Learning objects symposium special
issue guest editorial, Journal of Educational Multimedia and Hypermedia, 13(4), 331–342.
Gadanidis, G., Gadanidis, J. & Schindler, K. (2003) Factors mediating the use of online applets in
the lesson planning of pre-service mathematics teachers, Journal of Computers in Mathematics
and Science Teaching, 22(4), 323–344.
Hanna, L., Risden, K., Czerwinski, M. & Alexander, K. J. (1999) The role of usability in design-
ing children’s computer products, in: A. Druin (Ed.) The design of children’s technology (San
Francisco, Morgan Kaufmann Publishers, Inc.).
Harp, S. F. & Mayer, R. E. (1998) How seductive details do their damage: a theory of cognitive
interest in science learning, Journal of Educational Psychology, 90(3), 414–434.
Jaakkola, T. & Nurmi, S. (2004) Learning objects—a lot of smoke but is there a fire? Academic impact of
using learning objects in different pedagogical settings (Turku, University of Turku). Available online
at: http://users.utu.fi/samnurm/Final_report_on_celebrate_experimental_studies.pdf (accessed
25 July 2005).
Jones, M. G., Farquhar, J. D. & Surry, D. W. (1995) Using metacognitive theories to design user
interfaces for computer-based learning, Educational Technology, 35(4), 12–22.
Kay, R. H. & Knaack, L. (submitted) A systematic evaluation of learning objects for secondary
school students, Journal of Educational Technology Systems.
Kennedy, D. M. & McNaught, C. (1997) Design elements for interactive multimedia, Australian
Journal of Educational Technology, 13(1), 1–22.
Kenny, R. F., Andrews, B. W., Vignola, M. V., Schilz, M. A. & Covert, J. (1999) Towards guide-
lines for the design of interactive multimedia instruction: Fostering the reflective decision-
making of pre-service teachers, Journal of Technology and Teacher Education, 7(1), 13–31.
Koehler, M. J. & Lehrer, R. (1998) Designing a hypermedia tool for learning about children’s
mathematical cognition, Journal of Educational Computing Research, 18(2), 123–145.
Koppi, T., Bogle, L. & Bogle, M. (2005) Learning objects, repositories, sharing and reusability,
Open Learning, 20(1), 83–91.
Kramarski, B. & Zeichner, O. (2001) Using technology to enhance mathematical reasoning:
effects of feedback and self-regulation learning, Educational Media International, 38(2/3).
Krauss, F. & Ally, M. (2005) A study of the design and evaluation of a learning object and
implications for content development, Interdisciplinary Journal of Knowledge and Learning
Objects, 1, 1–22. Available online at: http://ijklo.org/Volume1/v1p001-022Krauss.pdf
(accessed 4 August 2005).
Larkin, J. H. (1989) What kind of knowledge transfers?, in: L. B. Resnick (Ed.) Knowing, learning,
and instruction (Hillsdale, NJ, Erlbaum Associates), 283–305.
Lave, J. & Wenger, E. (1991) Situated learning: legitimate peripheral participation (New York,
Cambridge University Press).
Littlejohn, A. (2003) Issues in reusing online resources, Journal of Interactive Media in Education
(Special Issue on Reusing Online Resources), 1. Available online at: www-jime.open.ac.uk/2003/1/
(accessed 1 July 2005).
MacDonald, C. J., Stodel, E., Thompson, T. L., Muirhead, B., Hinton, C., Carson, B., et al.
(2005) Addressing the eLearning contradiction: a collaborative approach for developing a
conceptual framework learning object, Interdisciplinary Journal of Knowledge and Learning
Objects, 1, 79–98. Available online at: http://ijklo.org/Volume1/v1p079-098McDonald.pdf
(accessed 2 August 2005).
Madhumita & Kumar, K. L. (1995) Twenty-one guidelines for effective instructional design, Educational
Technology, 35(3), 58–61.
Metros, S. E. (2005) Visualizing knowledge in new educational environments: a course on learning
objects, Open Learning, 20(1), 93–102.
Muzio, J. A., Heins, T. & Mundell, R. (2002) Experiences with reusable e-learning objects: from
theory to practice, Internet and Higher Education, 5(1), 21–34.
Nesbit, J., Belfer, K. & Vargo, J. (2002) A convergent participation model for evaluation of learn-
ing objects, Canadian Journal of Learning and Technology, 28(3), 105–120. Available online at:
http://www.cjlt.ca/content/vol28.3/nesbit_etal.html (accessed 1 July 2005).
Oren, T. (1990) Cognitive load in hypermedia: designing for the exploratory learner, in: S.
Ambron & K. Hooper (Eds) Learning with interactive multimedia (Washington, DC, Microsoft
Press), 126–136.
Parrish, P. E. (2004) The trouble with learning objects, Educational Technology Research &
Development, 52(1), 49–67.
Richards, G. (2002) Editorial: the challenges of the learning object paradigm, Canadian Journal
of Learning and Technology, 28(3), 3–10. Available online at: http://www.cjlt.ca/content/
vol28.3/editorial.html (accessed 1 July 2005).
Savery, J. R. & Duffy, T. M. (1995) Problem-based learning: an instructional model and its
constructivist framework, Educational Technology, 35(5), 31–34.
Sedig, K. & Liang, H. (2006) Interactivity of visual mathematical representations: factors affecting
learning and cognitive processes, Journal of Interactive Learning Research, 17(2), 179–212.
Sedighian, K. (1998) Interface style, flow, and reflective cognition: issues in designing interactive
multimedia mathematics learning environments for children. Unpublished Doctor of Philosophy
dissertation, University of British Columbia, Vancouver.
Siqueira, S. W. M., Melo, R. N. & Braz, M. H. L. B. (2004) Increasing the semantics of learning
objects, International Journal of Computer Processing of Oriental Languages, 17(1), 27–39.
Sternberg, R. J. (1989) Domain-generality versus domain-specificity: the life and impending death
of a false dichotomy, Merrill-Palmer Quarterly, 35(1), 115–130.
Van Zele, E., Vandaele, P., Botteldooren, D. & Lenaerts, J. (2003) Implementation and evaluation
of a course concept based on reusable learning objects, Journal of Educational Computing
Research, 28(4), 355–372.
Wiest, L. R. (2001) The role of computers in mathematics teaching and learning, Computers in the
Schools, 17(1/2), 41–55.
Wiley, D. A. (2000) Connecting learning objects to instructional design theory: a definition, a
metaphor, and a taxonomy, in: D. A. Wiley (Ed.) The instructional use of learning objects:
online version. Available online at: http://reusability.org/read/chapters/wiley.doc (accessed 1
July 2005).
Wiley, D., Waters, S., Dawson, D., Lambert, B., Barclay, M. & Wade, D. (2004) Overcoming
the limitations of learning objects, Journal of Educational Multimedia and Hypermedia, 13(4),
507–521.
Williams, D. D. (2000) Evaluation of learning objects and instruction using learning objects, in:
D. A. Wiley (Ed.) The instructional use of learning objects: online version. Available online at:
http://reusability.org/read/chapters/williams.doc (accessed 1 July 2005).
Zammit, K. (2000) Computer icons: a picture says a thousand words. Or does it?, Journal of
Educational Computing Research, 23(2), 217–231.
Appendix. Learning object survey

Items 1–4 are rated on a seven-point scale:
1 = Strongly Disagree, 2 = Disagree, 3 = Slightly Disagree, 4 = Neutral, 5 = Slightly Agree, 6 = Agree, 7 = Strongly Agree.

1. The learning object has some benefit in terms of providing me with another learning strategy/another tool. (1 2 3 4 5 6 7)
2. I feel the learning object did benefit my understanding of the subject matter’s concept/principle. (1 2 3 4 5 6 7)
3. I did not benefit from using the learning object. (1 2 3 4 5 6 7)
4. I am interested in using the learning object again. (1 2 3 4 5 6 7)

5. You used a digital learning object on the computer. Tell me about this experience when you used the object.
a) What did you like? (found helpful, liked working with, what worked well for you)
b) What didn’t you like? (found confusing, or didn’t like, or didn’t understand)
6. Do you think you benefited from using this particular learning object? Do you think you learned the concept better? Do you think it helped you review a concept you just learned? Why? Why not?
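One common way to work with Likert items such as these is to aggregate them into a single scale score and to check the internal consistency of the scale. The Python lines below are a minimal sketch, not taken from the paper: the reverse-coding of the negatively worded item 3, the scoring rule, the function names and the sample responses are all illustrative assumptions rather than the authors' actual procedure.

from statistics import mean, pvariance

def reverse_code(score, scale_min=1, scale_max=7):
    # Reverse-code a single Likert response on a 1-7 scale (e.g. 2 -> 6).
    return scale_max + scale_min - score

def benefit_score(item1, item2, item3, item4):
    # Hypothetical scale score: mean of items 1-4 with the negatively
    # worded item 3 ("I did not benefit ...") reverse-coded first.
    return mean([item1, item2, reverse_code(item3), item4])

def cronbach_alpha(responses):
    # Cronbach's alpha for a respondents-by-items matrix in which any
    # negatively worded items have already been reverse-coded.
    k = len(responses[0])
    item_variances = [pvariance(item) for item in zip(*responses)]
    total_variance = pvariance([sum(person) for person in responses])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Three hypothetical respondents (raw item 3 values of 2, 3 and 1).
recoded = [[6, 7, reverse_code(2), 6],
           [5, 5, reverse_code(3), 4],
           [7, 6, reverse_code(1), 7]]
print(benefit_score(6, 7, 2, 6))          # single-respondent score: 6.25
print(round(cronbach_alpha(recoded), 2))  # internal-consistency estimate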
In 4 experiments, students who read expository passages with seductive details (i.e., interesting but irrelevant adjuncts) recalled significantly fewer main ideas and generated significantly fewer problem-solving transfer solutions than those who read passages without seductive details. In Experiments 1, 2, and 3, revising the passage to include either highlighting of the main ideas, a statement of learning objectives, or signaling, respectively, did not reduce the seductive details effect. In Experiment 4, presenting the seductive details at the beginning of the passage exacerbated the seductive details effect, whereas presenting the seductive details at the end of the passage reduced the seductive details effect. The results suggest that seductive details interfere with learning by priming inappropriate schemas around which readers organize the material, rather than by distracting the reader or by disrupting the coherence of the passage.