RESEARCH ARTICLE
Assessing learning, quality and engagement in learning
objects: the Learning Object Evaluation Scale
for Students (LOES-S)
Robin H. Kay · Liesel Knaack
Faculty of Education, University of Ontario Institute of Technology, 2000 Simcoe St. North, Oshawa, ON, Canada L1H 7L7
e-mail: robin.kay@uoit.ca
Education Tech Research Dev (2009) 57:147–168
DOI 10.1007/s11423-008-9094-5
Published online: 25 April 2008
© Association for Educational Communications and Technology 2008
Abstract Learning objects are interactive web-based tools that support the learning of
specific concepts by enhancing, amplifying, and/or guiding the cognitive processes of
learners. Research on the impact, effectiveness, and usefulness of learning objects is
limited, partially because comprehensive, theoretically based, reliable, and valid evaluation
tools are scarce, particularly in the K-12 environment. The purpose of the following study
was to investigate a Learning Object Evaluation Scale for Students (LOES-S) based on
three key constructs gleaned from 10 years of learning object research: learning, quality or
instructional design, and engagement. Tested on over 1100 middle and secondary school
students, the data generated using the LOES-S showed acceptable internal reliability, face
validity, construct validity, convergent validity and predictive validity.
Keywords Evaluate · Assess · Quality · Scale · Secondary school · Middle school · Learning object
Overview
Learning objects are operationally defined in this study as interactive web-based tools that
support the learning of specific concepts by enhancing, amplifying, and/or guiding the
cognitive processes of learners. Many learning objects offer visual aids to help guide learning. For
example, in mathematics, students could be asked to enter various functions to see how
they appear on a graph. In essence, they are testing various ‘what-if’ scenarios (see
http://www.shodor.org/interactivate/activities/FunctionFlyer/). In science, a number of
learning objects provide a rich visual context, such as understanding how specific functions
of the body work (see http://www.sickkids.ca/childphysiology/).
The design, development, reuse, accessibility, and use of learning objects have been
examined in some detail for almost 10 years (Kay and Knaack 2007a); however, research
on the impact, effectiveness, and usefulness of learning objects is limited (Kay and Knaack
2005; Nurmi and Jaakkola 2005, 2006a, b; Sosteric and Hesemeirer 2002). While the
challenge of developing an effective, reliable, and valid evaluation system is formidable
(Kay and Knaack 2005; Nesbit and Belfer 2004), assessing effectiveness is critical if
learning objects are to be considered a viable educational tool.
To date, the evaluation of learning objects has taken place predominantly at the
development and design phase (Adams et al. 2004; Bradley and Boyle 2004; Maslowski
and Visscher 1999; Vargo et al. 2003; Williams 2000). This kind of formative analysis,
done while learning objects are created, is useful for developing easy to use learning
objects, but the voice of the end user, the student who uses the learning object, is relatively
silent.
A limited number of repositories have content experts, often educators, evaluate the
quality of learning objects (Cafolla 2006; Krauss and Ally 2005; Schell and Burns 2002)
after they have been developed. However, the number of evaluators is usually limited, the
assessors have limited background in instructional design, and the end user does not enter
the feedback loop in a significant way.
Until recently, learning objects were solely used in higher education. Therefore the
majority of learning object evaluation has taken place in this domain (Haughey and
Muirhead 2005; Kay and Knaack 2005, 2007a). Increased use of learning objects in the
K-12 domain (e.g., Brush and Saye 2001; Clarke and Bowe 2006a, b; Kay and Knaack
2005; Lopez-Morteo and Lopez 2007; Liu and Bera 2005; Nurmi and Jaakkola 2006a)
demands that the focus of evaluation shift, at least in part, to the needs of middle and
secondary school students. The purpose of the current study was to examine a student–
based learning object evaluation tool in middle and secondary school classrooms.
Literature review
Definition of learning objects
In order to develop a clear, effective metric, it is important to establish an acceptable
operational definition of a ‘‘learning object’’. Original definitions focused on technological
issues such as accessibility, adaptability, the effective use of metadata, reusability, and
standardization (e.g., Downes 2003; Littlejohn 2003; Koppi et al. 2005; Muzio et al. 2002;
Nurmi and Jaakkola 2006b; Parrish 2004; Siqueira et al. 2004). More recently, researchers
are emphasizing learning qualities such as interaction and the degree to which the learner
actively constructs knowledge (Baruque and Melo 2004; Bennett and McGee 2005;
Bradley and Boyle 2004; Caws et al. 2006; Chenail 2004; Cochrane 2005; McGreal 2004;
Kay and Knaack 2007a; Sosteric and Hesemeirer 2002; Wiley et al. 2004).
While both technical and learning-based definitions offer important qualities that can
contribute to the success of learning objects, evaluation tools focusing on learning are
noticeably absent (Kay and Knaack 2007a). In order to address a clear gap in the literature
on evaluating learning objects, a pedagogically focused definition of learning objects has
been adopted for the current study based on a composite of previous definitions. Key
factors emphasized included interactivity, accessibility, a specific conceptual focus, reus-
ability, meaningful scaffolding, and learning. As indicated at the beginning of this paper,
learning objects are operationally defined as ‘interactive web-based tools that support the
learning of specific concepts by enhancing, amplifying, and/or guiding the cognitive
processes of learners’’. Some examples selected by teachers for this study include learning
objects designed to explore how students factor numbers, how integers work, how volume
is calculated, how tornadoes and hurricanes work, the water cycle, and how chemical
equations are balanced. To view specific examples of learning objects used by
teachers in this study, see the links provided in Appendix A.
Theory underlying the evaluation of learning objects
While current methods used to evaluate learning objects are somewhat limited with respect
to underlying learning theory (e.g., Buzzetto-More and Pinhey 2006; Gadanidis et al. 2004;
Koohang and Du Plessis 2004; McGreal et al. 2004; Schoner et al. 2005), considerable
speculation and discussion has taken place on the necessary attributes for developing
effective assessment tools. Three key themes have emerged from these discussions:
learning, quality or instructional design, and engagement.
Learning
Learning object research has addressed technical, instructional design issues in evaluation
far more than issues based on pedagogy (Alonso et al. 2005; Jonassen and Churchhill
2004; Kay and Knaack 2005). This emphasis on design has resulted in a model of learning
that is dated and largely behavioristic—content is presented, students are asked questions,
and then evaluated and rewarded based on the content they remember (Friesen and
Anderson 2004; Krauss and Ally 2005; Nurmi and Jaakkola 2006b). For the past 20 years,
though, research in cognitive science has suggested that students need to construct knowledge
and actively participate in the learning process (e.g., Albanese and Mitchell 1993; Bruner
1983, 1986; Brown and Palinscar 1989; Chi and Bassock 1989; Collins et al. 1989;
Vygotsky 1978) and within the last five years, several learning object theorists have
advocated the use of a more constructivist-based metric (Baser 2005; Convertini et al.
2006; Gadanidis et al. 2004).
One way of indirectly examining the construction or building of knowledge is by
measuring the amount and quality of interactivity in a learning object. Interactivity often
defines the very nature of a learning object. While there is a tendency to view all inter-
activity as desirable, considerable debate reigns on finding key elements of effective
interaction (Ohl 2001). Some learning objects simply require point and click progression
through a series of pages that are passively read and digested. Others require direct
manipulation of tools (e.g., sliders) requiring the user to test and evaluate ‘what-if’
scenarios. Still others involve the presentation of simultaneous multimedia formats (e.g.,
video, graphics, audio) to illustrate specific concepts. Van Merriënboer and Ayres (2005)
speculate that interaction requiring a student to manipulate digital learning materials may
be more motivating and stimulating. Lim et al. (2006) have proposed six different levels of
interactivity including mouse pointing and clicking, linear navigation, hierarchical navi-
gation, interacting with help, program-generated questions, and constructing or
manipulating. Finally, Oliver and McLoughlin (1999) argue that, ideally, students who use
learning objects should be making reasoned actions, engaging in personal meaning making,
and integrating knowledge.
To date, interactivity has not been systematically integrated into the learning object
evaluation process. However, there is some evidence that students believe that learning
features of a learning object are more important than the technical features (Kay and
Knaack 2005; 2007a).
Quality (instructional design)
For the purpose of this paper, the quality of a learning object refers to technical, design
issues focusing on usability, as opposed to the learning issues discussed above. Evaluating
the quality of learning objects is based on a wealth of research looking at the instructional
design of digital materials and includes the following features: organization and layout
(Calvi 1997; Madhumita and Kumar 1995; Mayer and Moreno 2002), learner control
(Hanna et al. 1999; Kennedy and McNaught 1997; Scheiter and Gerjets 2007), multimedia
in the form of animation, graphics, and audio (Gadanidis et al. 2003; Mayer and Moreno
2002; Sedig and Liang 2006), clear instructions to guide how to use a learning object and
help features (Acovelli et al. 1997; Jones et al. 1995; Kennedy and McNaught 1997),
feedback and assessment (Kramarski and Zeichner 2001; Zammit 2000), and theme
(Akpinar and Hartley 1996; Harp and Mayer 1998). In spite of this well researched list of
qualities that have been reported to affect software usability, summative evaluation tools
rarely address the instructional design qualities of learning objects. It is more typical to
collect open-ended, informal feedback without reference to specific instructional design
characteristics that might enhance or reduce learning performance (e.g., Bradley and Boyle
2004; Kenny et al. 1999; Krauss and Ally 2005).
Cognitive load theory (Chandler and Sweller 1991; Kester et al. 2006; Van Gerven
et al. 2006; Sweller 1988; Sweller et al. 1998) has been used to organize and explain the
potential impact that the features of a learning object can have on performance. The main
premise is that a typical user wants to minimize extraneous cognitive load (engaging in
processes that are not beneficial to learning) and optimize germane cognitive load
(engaging in processes that help to solve the problem at hand). Therefore, if the quality or
instructional design of a learning object is sufficiently weak in one or more areas, the user
spends more time on trying to use the object than on learning the concept at hand. Because
the quality of learning objects is rarely addressed in the literature, little is known about how
learning object design features affect cognitive load and ultimately how much is learned by
the end user.
Engagement
In this study, engagement and motivation are used interchangeably, based on previous
studies of learning objects (e.g., Lin and Gregor 2006; Oliver and McLoughlin 1999; Van
Merriënboer and Ayres 2005). A number of authors believe that a high level of engagement
or motivation is necessary for a learning object to be successful. Lin and Gregor (2006)
suggest that engagement, positive affect, and personal fulfilment are key factors in the
evaluation process. Oliver and McLoughlin (1999) add that learner self-efficacy is critical
to promoting engagement in learning objects. Van Merriënboer and Ayres (2005) note that
lower task involvement, as a result of reduced motivation, can result in a lower investment
of cognitive effort. In summary, it is important to consider the degree to which a learning
object engages students when evaluating effectiveness.
Previous approaches to evaluating learning objects
Considerable effort has been directed toward the evaluation of learning objects as they are
being created (Adams et al. 2004; Bradley and Boyle 2004; Cochrane 2005; MacDonald
et al. 2005; Nesbit et al. 2002; Vargo et al. 2003). Also known as formative assessment,
this approach to evaluation typically involves a small number of participants being asked to
test and use a learning object throughout the development process. Cochrane (2005)
provides a good example of how this kind of evaluation model works where feedback is
solicited from small groups at regular intervals during the development process. While
formative evaluation is necessary for the development of learning objects that are well
designed from a usability standpoint, this type of assessment does not address how well the
learning object works in a real-world educational environment with actual students.
Qualitative analysis of learning objects is also prevalent in the evaluation literature in
the form of interviews (Bradley and Boyle 2004; Kenny et al. 1999; Lin and Gregor 2006),
written comments (Kay and Knaack 2005; Kenny et al. 1999; Krauss and Ally 2005),
email responses (Bradley and Boyle 2004; MacDonald et al. 2005) and think-aloud pro-
tocols (Holzinger 2004; Krauss and Ally 2005). The majority of studies using a qualitative
approach rely almost exclusively on descriptive data and anecdotal reports to assess the
merits of learning objects. Furthermore, the reliability and validity of these informal
qualitative observations have not been assessed (Bradley and Boyle 2004; Holzinger 2004;
Kenny et al. 1999; Krauss and Ally 2005).
Quantitative efforts to evaluate learning objects have incorporated surveys (Bradley and
Boyle 2004; Howard-Rose and Harrigan 2003; Krauss and Ally 2005), performance data
(Adams et al. 2004; Bradley and Boyle 2004; Nurmi and Jaakkola 2006a), and statistics
recording use (Bradley and Boyle 2004; Kenny et al. 1999). The main concerns with the
quantitative measures used to date are a lack of theory underlying measures and the
absence of reliability and validity estimates.
A common practice employed to evaluate learning objects is to use multiple assessment
tools (Bradley and Boyle 2004; Brown and Voltz 2005; Cochrane 2005; Kenny et al. 1999;
Krauss and Ally 2005; Nesbit and Belfer 2004; Schell and Burns 2002; Schoner et al.
2005; Van Zele et al. 2003). This approach, which leads to triangulation of data analysis,
should be encouraged. However, the multitude of constructs that has evolved to date does
not provide a coherent model for understanding what factors contribute to the effectiveness
of learning objects.
Methodological issues
At least six key observations are noteworthy with respect to methods used to evaluate
learning objects. First, a wide range of learning objects have been examined, including
drill-and-practice assessment tools (Adams et al. 2004) or tutorials (Nurmi and Jaakkola
2006a), video case studies or supports (Kenny et al. 1999; MacDonald et al. 2005), general
web-based multimedia resources (Van Zele et al. 2003), and self-contained interactive
tools in a specific content area (Bradley and Boyle 2004; Cochrane 2005). The content and
design of a learning object need to be considered when examining quality and learning
outcomes. For example, Cochrane (2005) compared a series of four learning objects based
on general impressions of reusability, interactivity, and pedagogy and found that different
groups valued different areas. In addition, Nurmi and Jaakkola (2005) compared drill-and-
practice versus interactive learning objects and found the latter to be significantly more
effective in improving overall test performance.
Second, even though a wide range of learning objects exist, the majority of evaluation
papers focus on a single learning object (Adams et al. 2004; Bradley and Boyle 2004;
Kenny et al. 1999; Krauss and Ally 2005; MacDonald et al. 2005). It is difficult to
determine whether the evaluation tools used in one study generalize to the full range of
learning objects that are available on the Internet. Third, while the number of studies
focusing on the K to 12 population has recently increased (e.g., Brush and Saye 2001;
Clarke and Bowe 2006a, b; Kay and Knaack 2005; Lopez-Morteo and Lopez 2007; Liu and
Bera 2005; Nurmi and Jaakkola 2006a), most evaluation of learning objects has been done
in the domain of higher education. Fourth, sampled populations tested in many studies
have been noticeably small and poorly described (e.g., Adams et al. 2004; Cochrane 2005;
Krauss and Ally 2005; MacDonald et al. 2005; Van Zele et al. 2003), making it chal-
lenging to extend any conclusions to a larger population.
Fifth, while most evaluation studies reported that students benefited from using learning
objects, the evidence is based on loosely designed attitude assessment tools with no
validity or reliability (Bradley and Boyle 2004; Howard-Rose and Harrigan 2003; Krauss
and Ally 2005; Kenny et al. 1999; Lopez-Morteo and Lopez 2007; Schoner et al. 2005;
Vacik et al. 2006; Van Zele et al. 2003; Vargo et al. 2003). Also, very few evaluation
studies (e.g., Kenny et al. 1999; Kay and Knaack 2007a; Van Zele et al. 2003) use formal
statistical analyses. The lack of reliability and validity of evaluation tools combined with
an absence of statistical rigor reduce confidence in the results presented to date.
Finally, a promising trend in learning object evaluation research is the inclusion of
measures to assess how well students perform (e.g., Adams et al. 2004; Bradley and Boyle
2004; Docherty et al. 2005; MacDonald et al. 2005; Nurmi and Jaakkola 2006a). Until
recently, there has been little evidence to support the usefulness or pedagogical impact of
learning objects. The next step is to refine current evaluation tools to determine which
specific qualities of learning objects influence performance.
In summary, previous methods used to evaluate learning objects have offered extensive
descriptive and anecdotal evaluations of single learning objects, but are limited with
respect to sample size, representative populations, reliability and validity of data, and the
use of formal statistics. Recent evaluation efforts to incorporate learning performance
should be encouraged in order to advance knowledge of learning object features that may
influence learning.
Current approach to evaluating learning objects
Theoretical model
The model used to support the evaluation tools in this study was based on (a) a thorough
review of the literature on learning objects (see above) and (b) recent feedback from a
similar evaluation tool developed by Kay and Knaack (2007a). Consequently, three key
constructs were developed for the quantitative survey and included learning, quality, and
engagement (see Appendix B). The learning construct referred to a student’s perception
of how much he/she learned from using the learning object. The quality construct
referred to the design of the learning object and included the following key instructional
design features identified by Kay and Knaack (2007a): help features, clarity of
instructions, ease of use, and organization. Finally, the engagement construct examined
how involved a student was with respect to using a learning object. Estimates of all three
constructs were supplemented by written comments that students made about what they
liked and did not like about the learning object. The qualitative coding rubric used in the
current study incorporated learning benefits and a full range of instructional design
features (see Table 2). Finally, learning performance was incorporated into the evalua-
tion system.
Purpose
The purpose of this study was to explore a student-focused, learning-based approach for
evaluating learning objects. Based on a detailed review of studies looking at the evaluation
of learning objects, the following steps were followed:
1. a large sample was used;
2. a wide range of learning objects were tested;
3. the design of the evaluation tools was based on a thorough review and categorization
of the learning object literature and instructional design research;
4. reliability and validity estimates were calculated;
5. formal statistics were used where applicable;
6. both qualitative and quantitative data were collected, systematically coded, and
analyzed;
7. measures of learning performance were included; and
8. evaluation criteria focused on end-user perceptions and not those of the learning
object designers.
Method
Sample
Students
The student sample consisted of 1,113 students (588 males, 525 females), ranging from 10
to 22 years of age (M = 15.5, SD = 2.1), from both middle (n = 263) and secondary
schools (n = 850). The population base spanned three separate boards of education, six
middle schools, 15 secondary schools, and 33 different classrooms. The students were
selected through convenience sampling and had to obtain signed parental permission to
participate.
Teachers
The teacher sample consisted of 33 teachers (12 males, 21 females), with 0.5–33 years of
teaching experience (M = 9.0, SD = 8.2), from both middle (n = 6) and secondary
schools (n = 27). Most teachers taught math (48%) or science (45%). A majority of the
teachers rated their ability to use computers as strong or very strong (76%) and their
attitude toward using computers as positive or very positive (89%). In spite of the high
ability and positive attitude, only six of the teachers used computers in their classrooms
more than once a month. Teachers from each board of education were notified electronically
by an educational coordinator that a study on learning objects was taking place and
asked if they would like to participate.
Learning objects
In order to simulate a real classroom as much as possible, teachers were allowed to select
any learning object they deemed appropriate for their curriculum. As a starting point, they
were introduced to a wide range of learning objects located at the LORDEC website
(http://www.education.uoit.ca/lordec/collections.html). Sixty percent of the teachers
selected learning objects from the LORDEC repository—the remaining teachers reported
that they used Google. A total of 48 unique learning objects were selected covering
concepts in biology, Canadian history, chemistry, general science, geography, mathe-
matics, and physics (see Appendix A for the full list).
Procedure
Teachers from three boards of education were emailed by an educational coordinator and
informed of the learning object study. Participation was voluntary and subjects could
withdraw from the study at any time. Each teacher received a half day of training
in November on how to choose, use, and assess learning objects (see
http://www.education.uoit.ca/lordec/lo_use.html for more details on the training provided).
They were then asked to use at least one learning object in their classrooms by April of the
following year. Email support was available throughout the duration of the study. All
students in a given teacher’s class used the learning object that the teacher selected.
However, only those students with signed parental permission forms were permitted to fill
in an anonymous, online survey about their use of the learning object. In addition, students
completed a pre- and post-test based on the content of the learning object.
Data sources
Student survey
After using a learning object, students completed the Learning Object Evaluation Scale for
Students (LOES-S; see Appendix B) to determine their perception of (a) how much they
learned (learning construct), (b) the quality of the learning object (quality construct), and
(c) how much they were engaged with the learning object (engagement construct).
Descriptive statistics for the LOES-S are presented in Table 1.
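As a convenience for readers who want to score the instrument on their own data, a minimal sketch follows; it assumes the twelve Likert items are stored in a pandas data frame with hypothetical column names q1–q12 (these names are not part of the instrument) and simply sums them into the three construct scores summarized in Table 1.

```python
import pandas as pd

# Hypothetical item-to-construct mapping; column names q1-q12 are illustrative only.
ITEMS = {
    "learning": ["q1", "q2", "q3", "q4", "q5"],    # 5 items, possible range 5-25
    "quality": ["q6", "q7", "q8", "q9"],           # 4 items, possible range 4-20
    "engagement": ["q10", "q11", "q12"],           # 3 items, possible range 3-15
}

def score_loes_s(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum five-point Likert items (1-5) into the three LOES-S construct scores."""
    scores = pd.DataFrame(index=responses.index)
    for construct, items in ITEMS.items():
        scores[construct] = responses[items].sum(axis=1)
    return scores

# Three simulated respondents (rows) answering items q1-q12
demo = pd.DataFrame(
    [[4, 5, 4, 3, 4, 4, 5, 5, 4, 3, 4, 3],
     [2, 3, 3, 2, 2, 3, 4, 4, 3, 2, 2, 2],
     [5, 5, 5, 4, 5, 5, 5, 5, 5, 4, 5, 5]],
    columns=[f"q{i}" for i in range(1, 13)],
)
print(score_loes_s(demo))
```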
Student comments
Students were asked to comment on what they liked and disliked about the learning object
(Appendix B—questions 13 and 14). These open-ended items were organized according to
the three main constructs identified in the literature review (learning, quality, and engage-
ment) and analyzed using the coding scheme provided in Table 2. This coding scheme (Kay
and Knaack 2007a) was used to categorize 1922 student comments. Each comment was then
rated on a five-point Likert scale (-2 = very negative, -1 = negative, 0 = neutral,
1 = positive, 2 = very positive). Two raters assessed all comments made by students based
on category and rating value. Comments where categories or ratings were not exactly the
same were shared and reviewed a second time by each rater. Using this approach, an inter-rater
reliability of 99% was attained for categories and 100% for the rating values.

Table 1 Description of student Learning Object Evaluation Scales (LOES-S)

Scale     Number of items   Possible range   Actual range observed   Internal reliability
Learn     5                 5–25             12.6–20.6               r = 0.89
Quality   4                 4–20             10.9–18.3               r = 0.84
Engage    3                 3–15             7.1–12.8                r = 0.78
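A minimal sketch of the percent-agreement calculation described above, assuming the two raters' category codes are held in parallel Python lists (the variable names and sample codes are illustrative, not taken from the study):

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of comments that two raters coded identically."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same set of comments")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Illustrative category codes for five student comments
rater_a = ["Learn", "Engage", "Easy", "Help", "Theme"]
rater_b = ["Learn", "Engage", "Easy", "Graphics", "Theme"]
print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 80% for this toy example
```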
Student performance
Students completed a pre- and post-test created by each teacher based on the content of the
learning object used in class. Questions for pre- and post-test were identical in form, but
differed in the raw numbers used. The type of questions asked varied according to the goal
of the specific learning objects. Some tests focussed primarily on factual knowledge while
others assessed higher order thinking focussing on ‘what-if’ scenarios. The measure was
used to determine student performance. Because of the wide range of learning objects used,
it was not possible to assess the validity of this test data.
Table 2 Coding scheme to categorize student comments about learning objects

Learning
Challenge: Refers to the ease/difficulty of the concepts being covered, that is, whether the content level of the LO matched the student’s cognitive level/understanding. Code “it was easy” here, but not “it was easy to use”
Learn: Student comments about a specific or general learning/teaching issue involved in using the LO
Visual: The student mentions a visual feature of the LO that helped/inhibited their learning

Engagement
Compare: Student compares the LO to another method of learning
Engage: Student refers to the program as being or not being fun/enjoyable/engaging/interesting
Technology: The student mentions a technological issue with respect to using the LO

Quality
Animate: Refers to the quality of animations/moving pictures
Audio: Refers to some audio/sound aspect of the learning object
Easy: Refers to clarity of instructions or how easy/hard the LO was to use; does not refer to how easy/hard the concept was to learn
Graphics: Refers to static pictures or the look of the program (e.g., colours)
Help: Refers specifically to help/hints/instructions/feedback provided by the LO
Interactive: Student refers to some interactive feature of the LO
Control: Refers to student control of choice/pace in using the LO
Organization/Design: Refers to the quality of organization/design of the LO
Text: Refers to the quality/amount of text in the LO
Theme: Refers to the overall/general theme or content of the LO

Teacher survey

After using a learning object, each teacher completed the Learning Object Evaluation Scale
for Teachers (LOES-T) to determine their perception of (a) how much their students
learned (learning construct), (b) the quality of the learning object (quality construct), and
(c) how much their students were engaged with the learning object (engagement construct).
Data from the LOES-T showed low to moderate reliability (0.63 for the learning construct,
0.69 for the learning object quality construct, and 0.84 for the engagement construct) and
good construct validity based on a principal components factor analysis. See Kay and Knaack
(2007b) for a detailed description of the teacher-based learning object scale.
Data analysis
A series of analyses were run to assess the reliability and data generated by the LOES-S for
students. These included:
(1) internal reliability estimates (reliability);
(2) a principal component factor analysis for Student LOES-S (construct validity);
(3) correlations among learning object evaluation constructs within the LOES-S scales
(construct validity);
(4) correlation between LOES-S and LOES-T constructs (convergent validity);
(5) correlation between LOES-S and computer comfort level (convergent validity);
(6) correlations between coded student comments and LOES-S constructs (face validity);
(7) correlation between learning performance and LOES-S constructs (predictive
validity).
Results
Internal reliability
The internal reliability estimates for the LOES-S constructs based on Cronbach’s α were
0.89 (Learning), 0.84 (Quality), and 0.78 (Engagement)—see Table 1. These moderate-to-
high values are acceptable for measures in the social sciences (Kline 1999; Nunnally
1978).
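For readers who wish to reproduce this kind of estimate, the sketch below computes Cronbach's α from a respondents-by-items matrix using the standard formula; the data are simulated and the function name is illustrative, not drawn from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                             # number of items in the scale
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total scale score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated five-item scale for six respondents
data = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
])
print(round(cronbach_alpha(data), 2))
```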
Construct validity
Principal component analysis
While the literature review suggests that constructs related to learning, quality, and
engagement may exist, the evidence provided is too weak to support the use of a confir-
matory factor analysis. Therefore, a principal components analysis was done to explore
whether the three learning object constructs (learning, quality, and engagement) in the
LOES-S formed three distinct factors. Since all communalities were above 0.4 (Stevens
1992), the principal component analysis was deemed an appropriate exploratory method
(Guadagnoli and Velicer 1988). Orthogonal (varimax) and oblique (direct oblimin) rota-
tions were used, given that the correlation among potential strategy combinations was
unknown. These rotational methods produced identical factor combinations, so the results
from the varimax rotation (using Kaiser normalization) are presented because they simplify
the interpretation of the data (Field 2005). The Kaiser–Meyer–Olkin measure of sampling
adequacy (0.937) and Bartlett’s test of sphericity (p < .001) indicated that the sample size
was acceptable.
The principal components analysis was set to extract three factors (Table 3). The
resulting rotation corresponded well with the proposed learning object evaluation con-
structs with two exceptions. Factor 1, the learning construct, included the five predicted
scale items, but also showed relatively high loadings on one of the quality construct items
(Item 6—help features being useful) and one of the engagement construct items (Item 11—
motivational value). Overall, the structure was consistent with previous research (Kay and
Knaack 2007a) and the proposed grouping of scale items listed in Appendix B.
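An exploratory analysis of this kind can be approximated with the open-source factor_analyzer package; the sketch below is an assumption-laden stand-in for the authors' procedure (it assumes a data frame `survey` whose twelve columns hold the LOES-S item responses, and it uses the package's KMO, Bartlett, and varimax-rotation utilities as documented in recent versions).

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def explore_structure(items: pd.DataFrame, n_factors: int = 3):
    """Principal-component extraction with varimax rotation on the LOES-S items."""
    _, bartlett_p = calculate_bartlett_sphericity(items)   # sampling adequacy checks
    _, kmo_total = calculate_kmo(items)
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(items)
    loadings = pd.DataFrame(
        fa.loadings_,
        index=items.columns,
        columns=[f"Factor {i + 1}" for i in range(n_factors)],
    )
    return kmo_total, bartlett_p, loadings

# Usage, assuming `survey` holds columns q1-q12 of Likert responses:
# kmo, p, loadings = explore_structure(survey)
# print(kmo, p); print(loadings.round(2))
```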
Correlations among LOES-S constructs
The correlations between the learning construct and the quality (r = .68 ± 0.03, p < .001,
n = 934) and engagement (r = .74 ± 0.03, p < .001, n = 1012) constructs were significant,
as was the correlation between the engagement and quality constructs (r = .64 ± 0.04,
p < .001, n = 963). Shared variances, ranging from 40% to 54%, were small enough to
support the assumption that each construct measured was distinct.
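To make the link between a correlation and its shared variance concrete, the following sketch computes a Pearson r and r² with SciPy on invented subscale scores (not the study's data):

```python
from scipy.stats import pearsonr

def construct_overlap(x, y):
    """Pearson correlation between two construct scores and their shared variance (r^2)."""
    r, p = pearsonr(x, y)
    return r, p, r ** 2

# Invented learning and engagement subscale scores for six students
learning = [20, 14, 24, 17, 11, 22]
engagement = [12, 8, 14, 10, 7, 13]
r, p, shared = construct_overlap(learning, engagement)
print(f"r = {r:.2f}, p = {p:.3f}, shared variance = {shared:.0%}")
```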
Convergent validity
Correlation between LOES-S and LOES-T constructs
Mean student perceptions of learning, quality, and engagement correlated significantly
with teacher perceptions of learning, quality, and engagement, respectively. In addition,
correlations among different constructs were also significant. Correlations ranged from
0.25 to 0.47, indicating a moderate degree of consistency between student and teacher
evaluations of learning objects using the LOES-S and LOES-T scales (Table 4).
Table 3 Varimax rotated factor loadings on Learning Object Evaluation Scale for Students (LOES-S)

Scale item                     Factor 1   Factor 2   Factor 3
S-Learn 1—Interact             .786
S-Learn 2—Feedback             .737
S-Learn 3—Graphics             .621
S-Learn 4—New Concept          .764
S-Learn 5—Overall              .728
S-Quality 6—Help               .563       .514
S-Quality 7—Instructions                  .841
S-Quality 8—Easy to use                   .817
S-Quality 9—Organized                     .719
S-Engagement 10—Theme                                .810
S-Engagement 11—Motivating     .440                  .689
S-Engagement 12—Use again                            .686

Factor   Eigenvalue   PCT of VAR   CUM PCT
1        6.70         55.8         55.8
2        1.11          9.3         65.1
3        0.70          5.9         71.0
Correlation between student computer comfort level and LOES-S constructs
Computer comfort level based on a 3-item scale (Kay and Knaack 2007a) was significantly
correlated with the learning (r = .27 ± 0.05; p < .001, n = 1022), quality
(r = .26 ± 0.06; p < .001, n = 976), and engagement constructs (r = .32 ± 0.05;
p < .001, n = 1079). The more comfortable a student was with computers, the more
likely he/she was to rate the learning, quality, and engagement of a learning object higher.
Correlation between student comments and LOES-S constructs
Recall that an average rating score based on a five-point Likert scale was calculated for each
comment made by a student (refer to Table 2 for comment categories). The LOES-S learning
construct showed significant correlations with average student ratings of comments about
learning (r = 0.27 ± 0.07, p < .001, n = 696), challenge level (r = 0.17 ± 0.07,
p < .001, n = 696), and visual aids (r = 0.11 ± 0.07, p < .005, n = 696). The LOES-S
quality construct showed very small, but significant, correlations with average student ratings
of comments about learning objects being easy to use (r = 0.09 ± 0.08, p < .05, n = 663),
quality of help given (r = 0.09 ± 0.08, p < .05, n = 663), and the quality/amount of text in
a learning object (r = 0.09 ± 0.08, p < .05, n = 663). Finally, the LOES-S engagement
construct showed a significant correlation with average student ratings of comments made
about engagement (r = 0.21 ± 0.07, p < .001, n = 742).
Predictive validity
Correlation between learning performance and LOES-S constructs
Learning objects that were used for reviewing subject matter already taught were not
included in the learning performance analysis to reduce the potential influence of previous
teaching strategies. Learning performance (percent change from the pre- to the post-tests)
for classes where a learning object was not used for review (n = 273) was significantly
and positively correlated with the learning (r = .18 ± 0.12; p < .01, n = 254), quality
(r = .22 ± 0.12; p < .005, n = 242), and engagement constructs (r = .10 ± 0.12;
p < .005, n = 273). In other words, higher scores on student perceptions of learning,
learning object quality, and engagement were associated with higher scores in learning
performance, although this effect is relatively small.
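A brief sketch of how the learning-performance measure and its relation to a LOES-S construct might be computed; the percent-change formula and all scores below are assumptions for illustration, since the paper describes the measure only as percent change from pre- to post-test.

```python
from scipy.stats import pearsonr

def percent_change(pre, post):
    """Percent change from pre-test to post-test for each student (pre scores must be non-zero)."""
    return [(b - a) / a * 100 for a, b in zip(pre, post)]

# Invented pre/post test scores (%) and learning-construct scores (possible range 5-25)
pre = [40, 55, 60, 35, 70, 50]
post = [65, 60, 80, 50, 75, 72]
learning_scores = [22, 15, 24, 18, 16, 21]

gains = percent_change(pre, post)
r, p = pearsonr(gains, learning_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```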
Table 4 Correlations among LOES-S and LOES-T constructs (n = 63)

                 S-Learn               S-Quality             S-Engage
                 r         CI          r         CI          r         CI
T-Learn          0.47***   0.26–0.65   0.47***   0.26–0.65   0.44***   0.22–0.63
T-Quality        0.45***   0.23–0.64   0.45***   0.23–0.64   0.43***   0.21–0.62
T-Engagement     0.25*     0.00–0.47   0.33**    0.09–0.54   0.39*     0.16–0.58

CI = confidence interval
* p < .05 (2-tailed); ** p < .01 (2-tailed); *** p < .001 (2-tailed)
Discussion
The purpose of this study was to systematically investigate a student-focused approach for
evaluating learning objects, based on three prominent themes that appeared in previous
research: learning, quality, and engagement. Key issues addressed were sampling of stu-
dents, range of learning objects assessed, reliability, validity, using formal statistics where
applicable, incorporating both qualitative and quantitative feedback, and student perfor-
mance. Each of these issues will be discussed in turn.
Sample population and range of learning objects
The population in this study was a large sample of middle and secondary school students
(n = 1113) spread out over three school districts and 15 schools. This type of sampling is
needed to build on previous small-scale research efforts in order to provide in-depth
analysis and confidence regarding specific learning object features that affected learning.
The sizable number (n = 48) of learning objects tested is a significant departure from
previous studies and offers evidence to suggest that the usefulness of the LOES-S extends
beyond a single learning object. While it is beyond the scope of this paper to compare
specific types of learning objects used, it is reasonable to assume that the LOES-S is a
credible tool for evaluating a wide range of learning objects.
Reliability
The internal reliability estimates (0.78–0.89) for the learning object constructs in the
LOES-S were good (Kline 1999; Nunnally 1978), as was the inter-rater reliability (99%) of
the categories and ratings used to assess student comments. Less than 25% of the 25 formal
evaluation studies reviewed for this paper (Baser 2005; Kay and Knaack 2005; Kong and
Kwok 2005; Liu and Bera 2005; Vargo et al. 2003; Windschitl and Andre 1998) offered
reliability statistics, yet it is argued that reliability is a fundamental element of any eval-
uation tool and should be calculated for future research studies, if the sample size permits.
Validity
Only two of the 25 studies reviewed for this paper offer validity estimates (Kay and
Knaack 2007a; Nurmi and Jaakkola 2006a). Therefore, it is prudent to address validity in
future learning object evaluation tools. Four types of validity were considered in this paper:
face, construct, convergent, and predictive.
Face validity
Face validity was supported by the close alignment between the three proposed LOES-S
constructs (learning, quality, and engagement) and those features identified as central in the
comprehensive review of the literature completed for this study. Aligning scale constructs
with a systematic analysis of previous theory is important if face validity is to be estab-
lished. Face validity was also partially confirmed by the significant correlations among
student comments and the three LOES-S subscales. However, correlations were modest
(between .21 and .27) for learning and engagement constructs and low (r = 0.09) for
quality. These findings indicate that more qualitative data, perhaps in the form of
interviews and focus groups, may be needed in developing scale items that more accurately
reflect actual student perceptions. The result may also reflect the variability in responses
that comes with examining a wide range of learning objects.
Construct validity
The principal components analysis revealed three relatively distinct learning object con-
structs that were consistent with the theoretical framework proposed by previous learning
object researchers and instructional design specialists. However, two exceptions were
noted. First, quality of help (item 6 in Appendix B) showed high communality with both
learning and quality constructs. This result might be explained in part by the specific focus
of help offered by a learning object. Sometimes help is offered to make the learning object
easier to use. Other times help is given to support the actual learning of a concept. Item six
(Appendix B) might need to be modified to reflect the use of help in facilitating the use of
the actual learning object.
The second exception involved the perceived motivational value of a learning object
(item 11, Appendix B), which loaded relatively high on both learning and engagement
constructs. One interpretation of this finding is that learning objects that effectively support
learning are probably more motivating to students. Exploring the source of motivation is a
necessary next step in improving the LOES-S scale.
While it is beneficial to isolate discrete constructs of a learning object in order to
identify potential strengths and weaknesses, the reality may be that these constructs
interact and mutually influence each other when actual learning occurs. For example, a
learning object that is not effective at promoting learning could have a negative impact on
perceived engagement. Furthermore, a learning object that is particularly difficult to use
may frustrate students and impede learning and limit engagement. Finally, highly engaging
learning objects may focus students and this may lead to more effective learning. These
proposed interactions are partially reflected in the relatively high correlations among
learning object constructs (0.63–0.74). However, shared variance of only 40% to 54%
indicates that learning, quality, and engagement constructs were also distinct.
Convergent validity
Convergent validity was supported by two tests. First, correlations between student esti-
mates of learning, quality, and engagement were significantly correlated with teacher
estimates of these same constructs. The correlations, though, were not high with a typical
shared variance of about 20%. However, this result might be expected given that teachers
and students may have different perceptions on what constitutes learning, quality, and
engagement. Therefore, while student and teacher constructs do converge, the modest
correlations underline one of the main premises of this paper, namely the need for
obtaining student input.
The second test of convergent validity looked at correlations among the three LOES-S
constructs and student computer comfort level. It was predicted that students who were
more comfortable with computers would rate learning objects more highly in terms of
learning, quality, and motivation. This prediction was supported by the low (0.26–0.32),
but significant correlation estimates observed. The modest impact of computer comfort
level may be partially explained by the relative ease of use of most learning objects.
Students who are uncomfortable with computers may experience minor, but not excessive
challenges when interacting with tools that are designed to be simple to use. Further
research is needed to explore the impact of computer comfort.
Predictive validity
It is reasonable to predict that learning objects that are rated highly in terms of learning,
quality, and/or engagement should result in better learning performance. In other words, if
a student perceives a learning object as a high quality, effective learning tool that is
engaging, we would expect him/her to perform better in a pre-post test situation. Signif-
icant, but small (0.10–0.22), correlations between the percent change in pre-post test scores
and the LOES-S learning, quality, and engagement constructs supported these predictions.
One might expect these correlations to be higher if the LOES-S is to be an effective
assessment tool, however, learning is a complex process involving numerous variables
including instructional wrap, student ability in a subject area, student attitude toward a
subject, gender, and time using the learning object. The challenges and problems of
education are always more complex than technology alone can solve, therefore, modest
correlations between key learning object constructs and learning performance are probably
to be expected.
Implications for education
The main purpose for this paper was to develop a reliable, valid student-based evaluation
tool for assessing learning objects. However, there are several implications for education.
First, it is prudent to gather student input when using learning objects and other tech-
nologies in the classroom. While teacher and student assessment of learning benefits,
quality, and engagement are consistent with each other, they only share 20% common
variance. It is through student feedback that these tools and the instructional wrap that
supports them can be improved.
Second, the evaluation tool in this study offers some guidance on key features to focus
on when selecting a learning object. Learning features such as interactivity, clear feedback,
and graphics or animations that support learning are desirable, as are design qualities such
as effective help, clear instructions, transparency of use and organization. It is probably
more challenging for an educator to understand what engages a student, although overall
theme can impact positively or negatively on learning.
Finally, it is important to remember the low but significant correlations among student
evaluations of learning, quality, and engagement and learning performance. No technology
will transform the learning process. Learning objects are simply tools used in a complex
educational environment where decisions on how to use these tools may have considerably
more import than the actual tools themselves.
Caveats and future research
This study was designed with careful attention paid to the details of theory and method-
ology. An attempt was made to develop a learning object evaluation tool that was sensitive
to key issues researched over the past 10 years. A three-prong model was developed and
tested on a large sample, using a wide range of learning objects. Nonetheless, there are
several caveats that should be addressed to guide future research.
First, student ability was not examined and may have an impact on the success of any
learning tool, let alone a learning object. In other words, students who like a subject may be
more open to new ways of learning concepts, whereas students who are struggling may find
the use of a learning object distracting and challenging. Assessing student ability might
provide further clarity in future research endeavors.
Second, instructional wrap or strategies employed to incorporate learning objects in the
classroom probably have an impact on the effectiveness. For example, a learning object
used exclusively as a motivational or demonstration tool might not have as much of an effect
as a learning object used to teach a new concept. While some effort was made to assess
how the learning object was used (e.g., for review), a more detailed analysis of instruc-
tional wrap could offer additional understanding of learning object usefulness.
Third, more directed and thorough qualitative analysis is needed to assess the design
qualities of learning objects. The open-ended approach used in the current study generated
a wide range of responses, and these responses could now be used to gather more in-depth
data. For example, students could be asked what features specifically supported and
detracted from their learning and how they would design the learning object to be more
effective.
Fourth, tests used to assess performance in this study were created on an ad hoc basis
by individual teachers. No effort was made to standardize measures or to assess reliability
and validity. Higher quality learning performance tools should increase the precision of
results collected.
Finally, it would be extremely beneficial to compare systematic external evaluations of
learning objects by experts with student evaluations and performance. We could then begin
to assess whether design efforts and certain types of learning objects have their intended
impact.
Appendix A List of learning objects used in the study

Collection | Level | Name of learning object | Web address | Access
AAA Math | MS | Geometric Facts | http://www.eyepleezers.com/aaamath/geo.htm#topic1 | Open
BBC | MS | Rocks and Soil | http://www.bbc.co.uk/schools/scienceclips/ages/7_8/rocks_soils.shtml | Open
Independent | MS | Space Trading Cards | http://amazing-space.stsci.edu/resources/explorations/trading/ | Open
Learn Alb | MS | Probability | http://www.learnalberta.ca/content/mesg/html/math6web/math6shell.html?launch=true | Open
Learn Alb | MS | Volume and Displacement | http://www.learnalberta.ca/content/mesg/html/math6web/lessonLauncher.html?lesson=m6lessonshell15.swf&launch=true | Open
NLVM | MS | Factor Trees | http://nlvm.usu.edu/en/nav/frames_asid_202_g_2_t_1.html | Open
NLVM | MS | Fractions—Equivalent | http://nlvm.usu.edu/en/nav/frames_asid_105_g_3_t_1.html | Open
NLVM | MS | Fractions—Multiply | http://nlvm.usu.edu/en/nav/frames_asid_105_g_3_t_1.html | Open
NLVM | MS | How High | http://nlvm.usu.edu/en/nav/frames_asid_275_g_3_t_4.html | Open
PBS Kids | MS | Make a Match | http://pbskids.org/cyberchase/games/equivalentfractions/ | Open
TLF | MS | Integer Cruncher | http://www.thelearningfederation.edu.au/tlf2/ | Closed
TLF | MS | Square Pyramids | http://www.thelearningfederation.edu.au/tlf2/ | Closed
TLF | MS | Triangular Pyramids | http://www.thelearningfederation.edu.au/tlf2/ | Closed
Anne Frank | HS | Anne Frank the writer | http://www.ushmm.org/museum/exhibit/online/af/htmlsite/ | Open
Article 19 | HS | Ohm Zone | http://www.article19.com/shockwave/oz.htm | Open
Bio Project | HS | Online Onion Root Tips | http://www.biology.arizona.edu/Cell_bio/activities/cell_cycle/cell_cycle.html | Open
Creative Chem | HS | Creative Chemistry | http://www.creative-chemistry.org.uk/gcse/revision/equations/index.htm | Open
Discovery | HS | Weather Extreme: Tornado | http://dsc.discovery.com/convergence/tornado/tornado.html | Open
DNA Int | HS | Gel electrophoresis | http://www.dnai.org/b/index.html | Open
FunBased | HS | Classic Chembalancer | http://funbasedlearning.com/chemistry/chemBalancer/ | Open
Gizmos | HS | Balancing Chemical Reactions | http://www.explorelearning.com/ | Open
Independent | HS | Congruent Triangles | http://argyll.epsb.ca/jreed/math9/strand3/3203.htm | Open
Independent | HS | Life in Shadows | http://www.ushmm.org/museum/exhibit/online/hiddenchildren/ | Open
Independent | HS | Metals in Aqueous Solutions | http://www.chem.iastate.edu/group/Greenbowe/sections/projectfolder/animationsindex.htm | Open
Independent | HS | Ripples of genocide | http://www.ushmm.org/museum/exhibit/online/congojournal/ | Open
Independent | HS | Triangle Centres | http://www.geom.uiuc.edu/~demo5337/Group2/trianglecenters.html | Open
Learn Alb | HS | Ammeters and Voltmeters | http://www.learnalberta.ca/ | Closed
Learn Alb | HS | Binomial distribution | http://www.learnalberta.ca/content/meda/html/binomialdistributions/index.html?launch=true | Open
Learn Alb | HS | Multiplying and Dividing Cells | http://www.learnalberta.ca/ | Closed
Learn Alb | HS | The Exponential Function | http://www.learnalberta.ca/content/meda/html/exponentialfunction/index.html?launch=true | Open
NLVM | HS | Algebra Balance Scales | http://nlvm.usu.edu/en/nav/frames_asid_201_g_4_t_2.html?open=instructions | Open
PBS | HS | Structure of Metals | http://www.pbs.org/wgbh/nova/wtc/metal.html | Open
PHET | HS | Energy Skate Park | http://phet.colorado.edu/simulations/energyconservation/energyconservation.jnlp | Open
Shodor | HS | Maze Game | http://www.shodor.org/interactivate/ | Open
TLF | HS | Alpha, Beta, Gamma of Radiation | http://www.thelearningfederation.edu.au/tlf2/ | Closed
TLF | HS | Measuring: Similar Shapes | http://www.thelearningfederation.edu.au/tlf2/ | Closed
TLF | HS | Mobile Phone Plans | http://www.thelearningfederation.edu.au/tlf2/ | Closed
TLF | HS | Reading Between the Lines | http://education.uoit.ca/lordec/lo/L80/LV5536/ | Open
TLF | HS | Squirt: Proportional relationships | http://www.thelearningfederation.edu.au/tlf2/ | Closed
UOIT | HS | Capillary Fluid Exchange | http://education.uoit.ca/EN/main/151820/151827/research_teach_locollection.php | Open
UOIT | HS | Charging an Electroscope | http://education.uoit.ca/EN/main/151820/151827/research_teach_locollection.php | Open
UOIT | HS | Relative Velocity | http://education.uoit.ca/EN/main/151820/151827/research_teach_locollection.php | Open
UOIT | HS | Slope of a Line | http://education.uoit.ca/EN/main/151820/151827/research_teach_locollection.php | Open
UOIT | HS | Transformation of Parabola | http://education.uoit.ca/EN/main/151820/151827/research_teach_locollection.php | Open
UW Madison | HS | Wild Weather | http://cimss.ssec.wisc.edu/satmet/modules/wild_weather/index.html | Open
Waterloo | HS | Waterloo: Hydrologic Cycle | http://www.region.waterloo.on.ca | Open
WISC Online | HS | Periodic table | http://www.wisc-online.com/objects/index_tj.asp?objid=SCI202 | Open
Zona Land | HS | Equation of a Line | http://id.mind.net/~zona/mmts/functionInstitute/linearFunctions/lsif.html | Open
Appendix B Learning object evaluation survey—students

Items 1–12 were rated on a five-point scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree.

Learning
1. Working with the learning object helped me learn
2. The feedback from the learning object helped me learn
3. The graphics and animations from the learning object helped me learn
4. The learning object helped teach me a new concept
5. Overall, the learning object helped me learn

Quality
6. The help features in the learning object were useful
7. The instructions in the learning object were easy to follow
8. The learning object was easy to use
9. The learning object was well organized

Engagement
10. I liked the overall theme of the learning object
11. I found the learning object motivating
12. I would like to use the learning object again

Open-ended items
13. What, if anything, did you LIKE about the learning object?
14. What, if anything, did you NOT LIKE about the learning object?
References
Acovelli, M., & Gamble, M. (1997). A coaching agent for learners using multimedia simulations. Educa-
tional Technology, 37(2), 44–49.
Adams, A., Lubega, J., Walmsley, S., & Williams, S. (2004). The effectiveness of assessment learning
objects produced using pair programming. Electronic Journal of e-Learning, 2(2). Retrieved July 28,
2005 from http://www.ejel.org/volume-2/vol2-issue2/v2-i2-art1-adams.pdf.
Akpinar, Y., & Hartley, J. R. (1996). Designing interactive learning environments. Journal of Computer
Assisted Learning, 12(1), 33–46.
Albanese, M. A., & Mitchell, S. A. (1993). Problem-based learning: A review of the literature on its
outcomes and implementation issues. Academic Medicine, 68, 52–81.
Alonso, F., Lopez, G., Manrique, D., & Vines, J. M. (2005). An instructional model for web-based
e-learning education with a blended learning process approach. British Journal of Educational
Technology, 36(2), 217–235.
Baruque, L. B., & Melo, R. N. (2004). Learning theory and instructional design using learning objects.
Journal of Educational Multimedia and Hypermedia, 13(4), 343–370.
Baser, M. (2005). Promoting conceptual change through active learning using open source software for
physics simulations. Australasian Journal of Educational Technology, 22(3), 336–354.
Bennett, K., & McGee, P. (2005). Transformative power of the learning object debate. Open Learning,
20(1), 15–30.
Bradley, C., & Boyle, T. (2004). The design, development, and use of multimedia learning objects. Journal
of Educational Multimedia and Hypermedia, 13(4), 371–389.
Brown, A. L., & Palinscar, A. S. (1989). Guided, cooperative learning and individual knowledge acquisition.
In L. B. Resnick (Ed.), Knowing, learning, and instruction (pp. 393–451). Hillsdale, NJ: Erlbaum
Associates.
Brown, A. R., & Voltz, B. D. (2005). Elements of effective e-learning design. The International Review of
Research in Open and Distance Learning, 6(1). Retrieved June 1, 2007 from http://www.irrodl.
org/index.php/irrodl/article/view/217/300.
Bruner, J. (1983). Child’s talk. Learning to use language. Toronto, Canada: George J. McLeod Ltd.
Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.
Brush, T., & Saye, J. (2001). The use of embedded scaffolds with hypermedia-supported student-centered
learning. Journal of Educational Multimedia and Hypermedia, 10(4), 333–356.
Buzzetto-More, N. A., & Pinhey, K. (2006). Guidelines and standards for the development of fully online
learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2006(2), 96–104.
Cafolla, R. (2006). Project MERLOT: Bringing peer review to web-based educational resources. Journal of
Technology and Teacher Education, 14(2), 313–323.
Calvi, L. (1997). Navigation and disorientation: A case study. Journal of Educational Multimedia and
Hypermedia, 6(3/4), 305–320.
Caws, C., Friesen, N., & Beaudoin, M. (2006). A new learning object repository for language learning:
Methods and possible outcomes. Interdisciplinary Journal of Knowledge and Learning Objects, 2,
112–124.
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and
Instruction, 8, 293–332.
Chenail, R. J. (2004). When Disney meets the research park: Metaphors and models for engineering an
online learning community of tomorrow. Internet and Higher Education, 7(2), 107–121.
Chi, M. T. H., & Bassok, M. (1989). Learning from examples via self-explanations. In L. B. Resnick (Ed.),
Knowing, learning, and instruction (pp. 251–282). Hillsdale, NJ: Erlbaum Associates.
Clarke, O., & Bowe, L. (2006a). The learning federation and the Victorian department of education and
training trial of online curriculum content with Indigenous students, pp. 1–14.
Clarke, O., & Bowe, L. (2006b). The learning federation and the Victorian department of education and
training trial of online curriculum content with ESL students, pp. 1–16.
Cochrane, T. (2005). Interactive QuickTime: Developing and evaluating multimedia learning objects
to enhance both face-to-face and distance e-learning environments. Interdisciplinary Journal of
Knowledge and Learning Objects, 1. Retrieved August 3, 2005 from http://ijklo.org/Volume1/
v1p033-054Cochrane.pdf.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading,
writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction (pp. 453–494).
Hillsdale, NJ: Erlbaum Associates.
Convertini, V. C., Albanese, D., Marengo, A., Marengo, V., & Scalera, M. (2006). The OSEL taxonomy for
the classification of learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2,
125–138.
Docherty, C., Hoy, D., Topp, H., & Trinder, K. (2005). E-Learning techniques supporting problem based
learning in clinical simulation. International Journal of Medical Informatics, 74(7–8), 527–533.
Downes, S. (2003). Design and reusability of learning objects in an academic context: A new economy
of education? USDLA, 17(1). Retrieved June 1, 2007 from http://www.usdla.org/html/journal/
JAN03_Issue/article01.html.
Field, A. (2005). Discovering statistics using SPSS (2nd ed.). Thousand Oaks, CA: SAGE Publications.
Friesen, N., & Anderson, T. (2004). Interaction for lifelong learning. British Journal of Educational
Technology, 35(6), 679–687.
Gadanidis, G., Gadanidis, J., & Schindler, K. (2003). Factors mediating the use of online applets in the
lesson planning of pre-service mathematics teachers. Journal of Computers in Mathematics and
Science Teaching, 22(4), 323–344.
Gadanidis, G., Sedig, K., & Liang, H. (2004). Designing online mathematical investigation. Journal of
Computers in Mathematics and Science Technology, 23(3), 275–298.
Guadagnoli, E., & Velicer, W. (1988). Relation of sample size to the stability of component patterns.
Psychological Bulletin, 103, 265–275.
Hanna, L., Risden, K., Czerwinski, M., & Alexander, K. J. (1999). The Role of usability in designing
children’s computer products. In A. Druin (Ed.), The design of children’s technology. San Francisco:
Morgan Kaufmann Publishers, Inc.
Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory of cognitive interest in
science learning. Journal of Educational Psychology, 90(3), 414–434.
Haughey, M., & Muirhead, B. (2005). Evaluating learning objects for schools. E-Journal of Instructional
Sciences and Technology, 8(1). Retrieved June 1, 2007 from http://www.usq.edu.au/electpub/
e-jist/docs/vol8_no1/fullpapers/eval_learnobjects_school.htm.
Holzinger, A. (2004). Rapid prototyping for virtual medical campus interface. IEEE Software, 21(1), 92–99.
Howard-Rose, D., & Harrigan, K. (2003). CLOE learning impact studies lite: Evaluating learning objects in
nine Ontario university courses. Retrieved July 3, 2007 from http://cloe.on.ca/documents/
merlotconference10.doc.
Jonassen, D., & Churchill, D. (2004). Is there a learning orientation in learning objects? International Journal on E-Learning, 3(2), 32–41.
Jones, M. G., Farquhar, J. D., & Surry, D. W. (1995). Using metacognitive theories to design user interfaces
for computer-based learning. Educational Technology, 35(4), 12–22.
Kay, R., & Knaack, L. (2005). Developing learning objects for secondary school students: A multi-com-
ponent model. Interdisciplinary Journal of Knowledge and Learning Objects, 2005(1), 229–254.
Kay, R. H., & Knaack, L. (2007a). Evaluating the learning in learning objects. Open Learning, 22(1), 5–28.
Kay, R. H., & Knaack, L. (2007b). Teacher evaluation of learning objects in middle and secondary school
classrooms. Retrieved November 1, 2007 from http://faculty.uoit.ca/kay/papers/LOES_Teacher_
2007.doc.
Kennedy, D. M., & McNaught, C. (1997). Design elements for interactive multimedia. Australian Journal of
Educational Technology, 13(1), 1–22.
Kenny, R. F., Andrews, B. W., Vignola, M. V., Schilz, M. A., & Covert, J. (1999). Towards guidelines for
the design of interactive multimedia instruction: Fostering the reflective decision-making of preservice
teachers. Journal of Technology and Teacher Education, 7(1), 13–31.
Kester, L., Lehnen, C., Van Gerven, P. W. M., & Kirschner, P. A. (2006). Just-in-time schematic
supportive information presentation during cognitive skill acquisition. Computers in Human
Behavior, 22(1), 93–116.
Kline, P. (1999). The handbook of psychological testing (2nd ed.). London: Routledge.
Kong, S. C., & Kwok, L. F. (2005). A cognitive tool for teaching the addition/subtraction of common
fractions: A model of affordances. Computers and Education, 45(2), 245–265.
Koohang, A., & Du Plessis, J. (2004). Architecting usability properties in the e-learning instructional design
process. International Journal on E-Learning, 3(3), 38–44.
Koppi, T., Bogle, L., & Bogle, M. (2005). Learning objects, repositories, sharing and reusability. Open
Learning, 20(1), 83–91.
Kramarski, B., & Zeichner, O. (2001). Using technology to enhance mathematical reasoning: Effects of
feedback and self-regulation learning. Education Media International, 38(2/3), 77–82.
Krauss, F., & Ally, M. (2005). A study of the design and evaluation of a learning object and implications for
content development. Interdisciplinary Journal of Knowledge and Learning Objects, 1. Retrieved June
1, 2007 from http://ijklo.org/Volume1/v1p001-022Krauss.pdf.
Lopez-Morteo, G., & Lopez, G. (2007). Computer support for learning mathematics: A learning environ-
ment based on recreational learning objects. Computers and Education, 48(4), 618–641.
Lim, C. P., Lee, S. L., & Richards, C. (2006). Developing interactive learning objects for a computing mathematics module. International Journal on E-Learning, 5(2), 221–244.
Lin, A., & Gregor, S. (2006). Designing websites for learning and enjoyment: A study of museum expe-
riences. International Review of Research in Open and Distance Learning, 7(3), 1–21.
Littlejohn, A. (2003). Issues in Reusing Online Resources. Journal of Interactive Media in Education, 1,
Special Issue on Reusing Online Resources. Retrieved July 1, 2005 from www-jime.open.ac.uk/
2003/1/.
Liu, M., & Bera, S. (2005). An analysis of cognitive tool use patterns in a hypermedia learning environment.
Educational Technology, Research and Development, 53(1), 5–21.
MacDonald, C. J., Stodel, E., Thompson, T. L., Muirhead, B., Hinton, C., Carson, B., et al. (2005).
Addressing the eLearning contradiction: A collaborative approach for developing a conceptual
framework learning object. Interdisciplinary Journal of Knowledge and Learning Objects, 1. Retrieved
August 2, 2005 from http://ijklo.org/Volume1/v1p079-098McDonald.pdf.
Madhumita, & Kumar, K. L. (1995). Twenty-one guidelines for effective instructional design. Educational Technology, 35(3), 58–61.
Maslowski, R., & Visscher, A. J. (1999). Formative evaluation in educational computing research and
development. Journal of Research on Computing in Education, 32(2), 239–255.
Mayer, R., & Moreno, R. (2002). Aids to computer-based multimedia learning. Learning and Instruction,
12, 107–119.
McGreal, R. (2004). Learning objects: A practical definition. International Journal of Instructional Tech-
nology and Distance Learning, 1(9). Retrieved August 5, 2005 from http://www.itdl.org/Journal/Sep_04/
article02.htm.
McGreal, R., Anderson, T., Babin, G., Downes, S., Friesen, N., Harrigan, K., et al. (2004). EduSource:
Canada’s learning object repository network. International Journal of Instructional Technology and
Distance Learning, 1(3). Retrieved July 24, 2005 from http://www.itdl.org/Journal/Mar_04/
article01.htm.
Muzio, J. A., Heins, T., & Mundell, R. (2002). Experiences with reusable e-learning objects from theory to practice. Internet and Higher Education, 2002(5), 21–34.
Nesbit, J., & Belfer, K. (2004). Collaborative evaluation of learning objects. In R. McGreal (Ed.), Online
education using learning objects (pp. 138–153). New York: RoutledgeFalmer.
Nesbit, J., Belfer, K., & Vargo, J. (2002). A convergent participation model for evaluation of learning
objects. Canadian Journal of Learning and Technology, 28(3). Retrieved July 1, 2005 from
http://www.cjlt.ca/content/vol28.3/nesbit_etal.html.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Nurmi, S., & Jaakkola, T. (2005). Problems underlying the learning object approach. International Journal
of Instructional Technology and Distance Learning, 2(11). Retrieved April 9, 2007 at http://www.
itdl.org/Journal/Nov_05/article07.htm.
Nurmi, S., & Jaakkola, T. (2006a). Effectiveness of learning objects in various instructional settings.
Learning, Media, and Technology, 31(3), 233–247.
Nurmi, S., & Jaakkola, T. (2006b). Promises and pitfall of learning objects. Learning, Media, and Tech-
nology, 31(3), 269–285.
Ohl, T. M. (2001). An interaction-centric learning model. Journal of Educational Multimedia and Hyper-
media, 10(4), 311–332.
Oliver, R., & McLoughlin, C. (1999). Curriculum and learning-resources issues arising from the use of web-
based course support systems. International Journal of Educational Telecommunications, 5(4), 419–435.
Parrish, P. E. (2004). The trouble with learning objects. Educational Technology Research & Development,
52(1), 49–67.
Scheiter, K., & Gerjets, P. (2007). Learner control in hypermedia environments. Educational Psychology
Review, 19(3), 285–307.
Schell, G. P., & Burns, M. (2002). A repository of e-learning objects for higher education. e-Service Journal,
1(2), 53–64.
Schoner, V., Buzza, D., Harrigan, K., & Strampel, K. (2005). Learning objects in use: ‘Lite’ assessment for
field studies. Journal of Online Learning and Teaching, 1(1), 1–18.
Sedig, K., & Liang, H. (2006). Interactivity of visual mathematical representations: Factors affecting learning and cognitive processes. Journal of Interactive Learning Research, 17(2), 179–212.
Siqueira, S. W. M., Melo, R. N., & Braz, M. H. L. B. (2004). Increasing the semantics of learning objects.
International Journal of Computer Processing of Oriental Languages, 17(1), 27–39.
Sosteric, M., & Hesemeier, S. (2002). When is a learning object not an object: A first step towards a theory
of learning objects. International Review of Research in Open and Distance Learning, 3(2), 1–16.
Stevens, J. P. (1992). Applied multivariate statistics for the social science applications (2nd ed.). Hillsdale,
NJ: Erlbaum.
Sweller, J. (1988). Cognitive load during problem solving: effects on learning. Cognitive Science, 12, 257–285.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251–296.
Vacik, H., Wolfslehner, B., Spork, J., & Kortschak, E. (2006). The use of COCOON in teaching silviculture.
Computers and Education, 47(3), 245–259.
Vargo, J., Nesbit, J. C., Belfer, K., & Archambault, A. (2003). Learning object evaluation: Computer
mediated collaboration and inter-rater reliability. International Journal of Computers and Applications,
25(3), 1–8.
Van Gerven, P. W. M., Paas, F., & Tabbers, H. K. (2006). Cognitive aging and computer-based instructional
design: Where do we go from here? Educational Psychology Review, 18(2), 141–157.
Van Merriënboer, J. J. G., & Ayres, P. (2005). Research on cognitive load theory and its design implications for e-learning. Education Theory, Research and Development, 53(3), 1042–1629.
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
Van Zele, E., Vandaele, P., Botteldooren, D., & Lenaerts, J. (2003). Implementation and evaluation of a
course concept based on reusable learning objects. Journal of Educational Computing and Research,
28(4), 355–372.
Wiley, D., Waters, S., Dawson, D., Lambert, B., Barclay, M., & Wade, D. (2004). Overcoming the limi-
tations of learning objects. Journal of Educational Multimedia and Hypermedia, 13(4), 507–521.
Williams, D. D. (2000). Evaluation of learning objects and instruction using learning objects. In D. A. Wiley
(Ed.), The instructional use of learning objects: Online version. Retrieved July 1, 2005 from
http://reusability.org/read/chapters/williams.doc.
Windschitl, M., & Andre, T. (1998). Using computer simulations to enhance conceptual change: The roles
of constructivist instruction and student epistemological beliefs. Journal of Research in Science
Teaching, 35(2), 145–160.
Zammit, K. (2000). Computer icons: A picture says a thousand words. Or does it? Journal of Educational
Computing Research, 23(2), 217–231.
Robin H. Kay has published over 40 articles in the area of computers in education, and presented numerous
papers at 15 international conferences. Current projects include research on laptop use in teacher education,
learning objects, audience response systems, gender differences, and factors that influence how students
learn with technology.
Liesel Knaack is an Assistant Professor at UOIT. Her research interests are in the areas of design,
development and evaluation of learning objects, effective integration of computers in the curricula,
instructional design of digital learning environments and the process of change in implementing technology
use at institutions of higher education.