Structuring and Merging
Distributed Content
Luca Stefanutti, Dietrich Albert, Cord Hockemeyer
Department of Psychology, University of Graz
Universitätsplatz 2/III, 8010 Graz, AT
luca.stefanutti, dietrich.albert, cord.hockemeyer@uni-graz.at
A flexible approach for structuring and merging distributed learning objects is presented. At the basis of this approach is a formal representation of a learning object, called an attribute structure. Attribute structures are labeled directed graphs representing structured information on learning objects. When two or more learning objects are merged, the corresponding attribute structures are unified, and the unified structure is attached to the resulting learning object.
Keywords: distributed learning objects, knowledge structures, structuring content
1. INTRODUCTION
In order to decide which object comes next in presenting a collection of learning objects to a learner, one might establish some order. Given a set O of learning objects, a surmise relation on O is any partial order '≼' on the learning objects in O. The interpretation of '≼' is that, given any two learning objects o and o′ in O, o ≼ o′ holds if a learner who masters o′ also masters o.

The concept of a surmise relation was introduced by [5] as one of the fundamental concepts of a theoretical framework called Knowledge Space Theory. According to this theory, the knowledge state of a learner is the subset K of all learning objects in O that (s)he masters. A subset K ⊆ O is said to be a knowledge state of the surmise relation '≼' if o ∈ K and o′ ≼ o implies o′ ∈ K for all learning objects o, o′ ∈ O. The collection K of all knowledge states of '≼' is then called the knowledge space derived from '≼'. Knowledge spaces are used for representing and assessing the knowledge state of a learner (see, e.g., [5] and [6] in this connection).
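To make these definitions concrete, the following minimal Python sketch (not part of the original paper) enumerates the knowledge states of a small, invented surmise relation by brute force.

```python
# Minimal illustrative sketch (hypothetical objects and relation).
from itertools import chain, combinations

O = {"a", "b", "c"}
# '≼' as a set of pairs (o1, o2) with o1 ≼ o2: whoever masters o2 also
# masters o1. Reflexive pairs are omitted; they cannot fail the test below.
surmise = {("a", "b"), ("a", "c")}

def is_state(K):
    # K is a knowledge state iff o ∈ K and o1 ≼ o together imply o1 ∈ K.
    return all(o1 in K for (o1, o2) in surmise if o2 in K)

def powerset(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

knowledge_space = [set(K) for K in powerset(O) if is_state(set(K))]
print(knowledge_space)
# -> [set(), {'a'}, {'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}]
```

The resulting collection is closed under the stated condition and corresponds to the knowledge space derived from '≼'.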
The construction of a surmise relation may follow different approaches. After a brief presentation of an existing approach based on vectors of components of a learning object, we extend this approach to a more flexible representation called attribute structure [2]. The mathematical properties of attribute structures make it possible to compare distributed learning objects in terms of how informative and how demanding they are.
2. THE COMPONENT APPROACH
According to the component approach [1, 7], every content object o in O is equipped with an ordered n-tuple A = ⟨a₁, a₂, …, aₙ⟩ of attributes, where the length n of the attribute n-tuple A is fixed for all objects. Each attribute aᵢ in A comes from a corresponding attribute set Cᵢ, called the i-th component of the content object. In this sense, given a collection C = {C₁, C₂, …, Cₙ} of disjoint attribute sets (or components), each object o ∈ O is equipped with an element of the Cartesian product P = C₁ × C₂ × ⋯ × Cₙ. Usually each component Cᵢ is equipped with a partial order '≤ᵢ' so that ⟨Cᵢ, ≤ᵢ⟩ is in fact a partially ordered set of attributes. The partial order '≤ᵢ' is interpreted in the following way: for a, b ∈ Cᵢ, if a ≤ᵢ b then a learning object characterized by attribute a is less demanding than a learning object characterized by attribute b. To give a simple example, it might be postulated that 'computations involving integer numbers' (attribute a) are less demanding than 'computations involving rational numbers' (attribute b). A natural order '≤' on the elements in P, the so-called coordinatewise order [4], is derived from the n partial orders '≤ᵢ' by

⟨x₁, x₂, …, xₙ⟩ ≤ ⟨y₁, y₂, …, yₙ⟩ ⟺ ∀i : xᵢ ≤ᵢ yᵢ
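As an illustration, the coordinatewise order can be sketched in a few lines of Python; the component names and attribute orders below are invented, and each '≤ᵢ' is assumed to be given explicitly as a reflexively and transitively closed set of (smaller, larger) pairs.

```python
# Illustrative sketch of the coordinatewise order on attribute n-tuples.
leq_number = {("integers", "integers"), ("rationals", "rationals"),
              ("integers", "rationals")}                 # integers ≤ rationals
leq_medium = {("text", "text"), ("picture", "picture")}  # text, picture incomparable

components = [leq_number, leq_medium]  # the orders '≤_1', '≤_2'

def coordinatewise_leq(x, y):
    """x ≤ y iff x_i ≤_i y_i for every component i."""
    return all((xi, yi) in leq_i for xi, yi, leq_i in zip(x, y, components))

print(coordinatewise_leq(("integers", "text"), ("rationals", "text")))     # True
print(coordinatewise_leq(("integers", "text"), ("rationals", "picture")))  # False
```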
If f : O → P is a mapping assigning an attribute n-tuple to each learning object, then a surmise relation '≼' on the learning objects is established by

o ≼ o′ ⟺ f(o) ≤ f(o′)

for all o, o′ ∈ O. The mapping f can easily be established even when the learning objects are distributed (see, e.g., [8]).
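Continuing the sketch above, the surmise relation induced by such a mapping f could be derived as follows (the learning objects and their attribute tuples are again hypothetical):

```python
# Derive '≼' from f via the coordinatewise order: o ≼ o' iff f(o) ≤ f(o').
f = {
    "LO_int": ("integers",  "text"),
    "LO_rat": ("rationals", "text"),
    "LO_pic": ("rationals", "picture"),
}

surmise = {(o1, o2) for o1 in f for o2 in f
           if coordinatewise_leq(f[o1], f[o2])}

print(sorted(surmise))
# -> [('LO_int', 'LO_int'), ('LO_int', 'LO_rat'),
#     ('LO_pic', 'LO_pic'), ('LO_rat', 'LO_rat')]
```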
3. ATTRIBUTE STRUCTURES
An attribute structure is used to represent structured information on a learning object or an asset, and in this sense it represents an extension of the attribute n-tuple discussed in Section 2. From a mathematical standpoint, attribute structures correspond to the feature structures introduced by [3] in computational linguistics. Let C be a set of components and A a collection of attributes, with A ∩ C = ∅. An attribute structure is a labeled directed graph A = ⟨Q, q̄, α, η⟩ where:

• Q is the set of nodes of the graph;
• q̄ ∈ Q is the root node of the graph;
• α : Q → A is a partial function assigning attributes to some of the nodes;
• η : Q × C → Q is a partial function specifying the edges of the graph.
As an example, let

C = {picture, topic, subtopic, text, language}

be a set of components, and

A = {PICTURE1, TEXT1, ENGLISH, MATH, MATRIX_INVERSION}

be a collection of attributes. Suppose, moreover, that a simple learning object is described by the attribute structure A₁ = ⟨Q₁, q̄₁, α₁, η₁⟩, where Q₁ is the set of nodes, q̄₁ is the root node, and α₁ and η₁ are defined as follows: α₁(0) is not defined, α₁(1) = PICTURE1, α₁(2) = TEXT1, α₁(3) = MATH, α₁(4) = ENGLISH, and α₁(5) = MATRIX_INVERSION; η₁(0, picture) = 1, η₁(0, text) = 2, η₁(0, topic) = 3, η₁(1, topic) = 3, η₁(2, topic) = 3, η₁(2, language) = 4, and η₁(3, subtopic) = 5. The digraph representing this attribute structure is displayed in Figure 1.
The structure A₁ describes a very simple learning object containing a picture along with some text explanation. Both the text and the picture have MATH as topic and MATRIX_INVERSION as subtopic.

FIGURE 1: The attribute structure A₁ describes a simple learning object on 'matrix algebra'

The root node of the structure is node 0, and it can easily be checked from the figure that each
node can be reached from this node following some path in the graph. The root node is the entry node in the attribute structure of the learning object, and the edges departing from this node specify the main components of the learning object itself. Thus, in our example, the learning object represented by A₁ is defined by three different components: picture, text, and topic. The values of these three components are the attributes given by α₁(η₁(0, picture)) = PICTURE1, α₁(η₁(0, text)) = TEXT1, and α₁(η₁(0, topic)) = MATH.
Observe, for instance, that node 5 can be reached from node 0 following the path ⟨picture, topic, subtopic⟩. The fact that, in this example, each node is reachable from the root node through some path is not a coincidence: it is explicitly required that every node in an attribute structure be reachable from the root node.
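To fix ideas, A₁ can be encoded directly; the dictionary-based representation below is one possible encoding, not one prescribed by the paper, and the reachability requirement can then be checked mechanically.

```python
# One possible encoding of the attribute structure A1 of Figure 1:
# alpha maps nodes to attributes, eta maps (node, component) pairs to nodes.
A1 = {
    "nodes": {0, 1, 2, 3, 4, 5},
    "root": 0,
    "alpha": {1: "PICTURE1", 2: "TEXT1", 3: "MATH",
              4: "ENGLISH", 5: "MATRIX_INVERSION"},
    "eta": {(0, "picture"): 1, (0, "text"): 2, (0, "topic"): 3,
            (1, "topic"): 3, (2, "topic"): 3,
            (2, "language"): 4, (3, "subtopic"): 5},
}

def reachable(A):
    """Set of nodes reachable from the root by following eta-edges."""
    seen, frontier = {A["root"]}, [A["root"]]
    while frontier:
        q = frontier.pop()
        for (p, _component), r in A["eta"].items():
            if p == q and r not in seen:
                seen.add(r)
                frontier.append(r)
    return seen

assert reachable(A1) == A1["nodes"]  # every node is reachable from the root
```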
4. COMPARING AND COMBINING ATTRIBUTE STRUCTURES
Attribute structures can be compared with one another. Informally, an attribute structure A subsumes another attribute structure A′ (denoted by A ⊑ A′) if A′ contains at least the same information as A. In this sense an attribute structure can be thought of as a class of learning objects (the class of all learning objects represented by that structure), and '⊑' can be regarded as a partial order on such classes. Formally, an attribute structure A = ⟨Q, q̄, α, η⟩ subsumes another attribute structure A′ = ⟨Q′, q̄′, α′, η′⟩ if there exists a mapping h : Q → Q′ fulfilling the three conditions

(1) h(q̄) = q̄′;
(2) for all q ∈ Q and all c ∈ C, if η(q, c) is defined then h(η(q, c)) = η′(h(q), c);
(3) for all q ∈ Q, if α(q) is defined then α(q) = α′(h(q)).
In Section 2 the attribute sets were assumed to be partially ordered according to pedagogical criteria and/or cognitive demands. Similarly, we now assume that a partial order '≤' is defined on the set A of attributes so that, given two attributes a, b ∈ A, if a ≤ b then a learning object defined by attribute a is less demanding than a learning object defined by attribute b. The subsumption relation '⊑' is then made consistent with '≤' if condition (3) is replaced by

(4) for all q ∈ Q, if α(q) is defined then α(q) ≤ α′(h(q)).

According to this new definition, if A ⊑ B then A is either less informative than B, or less demanding than B, or both.
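A subsumption test along these lines can be sketched as follows, reusing the dictionary encoding from the sketch in Section 3; attr_leq is an invented encoding of the attribute order '≤'. Since every node is reachable from the root, the mapping h, if it exists, is forced edge by edge starting from condition (1), so no search is needed.

```python
# Illustrative sketch: does A subsume B (A ⊑ B) under conditions (1), (2), (4)?
attr_leq = {("MATRIX_PRODUCT", "MATRIX_INVERSION")}  # assumed order, non-reflexive pairs

def leq(a, b):
    return a == b or (a, b) in attr_leq

def subsumes(A, B):
    h = {A["root"]: B["root"]}              # condition (1): h maps root to root
    frontier = [A["root"]]
    while frontier:
        q = frontier.pop()
        for (p, c), r in A["eta"].items():
            if p != q:
                continue
            target = B["eta"].get((h[q], c))
            if target is None:              # condition (2): B lacks the edge
                return False
            if r in h and h[r] != target:   # condition (2): h would be ambiguous
                return False
            if r not in h:
                h[r] = target
                frontier.append(r)
    # condition (4): each attribute of A is ≤ the corresponding attribute in B
    # (h is total here because every node of A is reachable from its root)
    return all(leq(a, B["alpha"].get(h[q])) for q, a in A["alpha"].items())
```

With the structures of Figure 2 encoded in this way, subsumes(LO2, LO1) would return True under the assumption MATRIX_PRODUCT ≤ MATRIX_INVERSION, while subsumes(LO2, LO3) and subsumes(LO3, LO2) would both return False.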
As an example, consider the three attribute structures depicted in Figure 2. Assuming that MATRIX_PRODUCT ≤ MATRIX_INVERSION, both mappings g and h fulfill conditions (1), (2), and (4); thus both attribute structures, labeled LO2 and LO3, subsume the attribute structure labeled LO1. However, there is neither a mapping from LO2 to LO3 fulfilling the subsumption conditions, nor one in the opposite direction; thus these last two structures are incomparable to each other.
FIGURE 2: Both LO2 and LO3 subsume LO1. However, LO2 and LO3 are incomparable.
The derivation of a surmise relation for the learning objects parallels that established in Section 2. If s : o ↦ s(o) is a mapping assigning an attribute structure to each learning object, then a surmise relation '≼' on the learning objects is derived by

o ≼ o′ ⟺ s(o) ⊑ s(o′)

for all o, o′ ∈ O.
Two binary operations are defined on attribute structures: unification and generalization. Mathematically, the unification of two attribute structures A and B (denoted by A ⊔ B), when it exists, is the least upper bound of {A, B} with respect to the subsumption relation. Dually, the generalization (denoted by A ⊓ B) is the greatest lower bound. In particular, for any two attribute structures A and B it holds that

A ⊑ A ⊔ B,   B ⊑ A ⊔ B
A ⊓ B ⊑ A,   A ⊓ B ⊑ B
When two different learning objects are merged together, or when different assets are assembled into a single learning object, the corresponding attribute structures are unified, and the resulting attribute structure is assigned to the resulting learning object. On the other hand, the generalization operation is used to find the common structure of two or more learning objects or, stated another way, to classify learning objects. An example of the generalization operation applied to two attribute structures is shown in Figure 3. Here the resulting structure shows that the two learning objects have topic and subtopic in common. Generalized attribute structures can also be used, e.g., for searching a distributed environment for all learning objects whose structure is consistent with a certain 'template' (for instance, to find all learning objects that are 'problems' involving, as cognitive operation, 'recognition' rather than 'recall').

FIGURE 3: Generalization of two asset structures
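As a rough illustration of how generalization might be computed, the following sketch reuses the encoding and the leq helper from the earlier sketches and follows the usual product construction for feature structures: nodes of the result are pairs of nodes reached in lock-step in both inputs, and an attribute is kept only where both inputs carry comparable attributes, taking the less demanding one. It recovers common structure as in Figure 3, but deliberately glosses over corner cases of the full greatest-lower-bound definition.

```python
# Simplified product construction for generalization (A ⊓ B).
def generalize(A, B):
    root = (A["root"], B["root"])
    nodes, eta, frontier = {root}, {}, [root]
    while frontier:
        q1, q2 = frontier.pop()
        for (p, c), r1 in A["eta"].items():
            if p != q1:
                continue
            r2 = B["eta"].get((q2, c))
            if r2 is None:
                continue                     # edge present only in A: dropped
            eta[((q1, q2), c)] = (r1, r2)
            if (r1, r2) not in nodes:
                nodes.add((r1, r2))
                frontier.append((r1, r2))
    alpha = {}
    for (q1, q2) in nodes:
        a, b = A["alpha"].get(q1), B["alpha"].get(q2)
        if a is not None and b is not None:  # keep only comparable attributes
            if leq(a, b):
                alpha[(q1, q2)] = a          # the less demanding attribute
            elif leq(b, a):
                alpha[(q1, q2)] = b
    return {"nodes": nodes, "root": root, "alpha": alpha, "eta": eta}
```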
REFERENCES
[1] Albert D., Held T. (1999). Component-based knowledge spaces in problem solving and inductive reasoning. In D. Albert and J. Lukas (Eds.), Knowledge Spaces. Theories, Empirical Research, Applications. Mahwah, NJ: Lawrence Erlbaum Associates.
[2] Albert D., Stefanutti L. (2003). Ordering and Combining Distributed Learning Objects through Skill Maps and Asset Structures. Proceedings of the International Conference on Computers in Education (ICCE 2003). Hong Kong, 2-5 December.
[3] Carpenter B. (1992). The logic of typed feature structures. Cambridge Tracts in Theoretical
Computer Science. Cambridge University Press, Cambridge.
[4] Davey B.A., Priestley H.A. (2002). Introduction to lattices and order. Second edition.
Cambridge University Press.
[5] Doignon J.-P., Falmagne J.-C. (1985). Spaces for the assessment of knowledge. International
Journal of Man-Machine Studies, 23, 175–196.
[6] Doignon J.-P., Falmagne J.-C. (1999). Knowledge Spaces. Berlin: Springer-Verlag.
[7] Schrepp M., Held T., Albert D. (1999). Component-based construction of surmise relations
for chess problems. In D. Albert and J. Lukas (Eds.), Knowledge Spaces. Theories, Empirical
Research, Applications. Mahwah, NJ: Lawrence Erlbaum Associates.
[8] Stefanutti L., Albert D., Hockemeyer C. (2003). Derivation of knowledge structures for
distributed learning objects. Proceedings of the 3rd International LeGE-WG Workshop, 3rd
December.