Structuring and Merging Distributed Content

Luca Stefanutti, Dietrich Albert, Cord Hockemeyer
Department of Psychology, University of Graz
Universitätsplatz 2/III, 8010 Graz, AT
luca.stefanutti, dietrich.albert, cord.hockemeyer@uni-graz.at
A flexible approach for structuring and merging distributed learning objects is presented. At the basis of this approach is a formal representation of a learning object, called an attribute structure. Attribute structures are labeled directed graphs representing structured information on the learning objects. When two or more learning objects are merged, the corresponding attribute structures are unified, and the unified structure is attached to the resulting learning object.

Keywords: distributed learning objects, knowledge structures, structuring content
1. INTRODUCTION
In order to decide which object comes next in presenting a collection of learning objects to a learner, one might establish some order. Given a set $O$ of learning objects, a surmise relation on $O$ is any partial order $\preccurlyeq$ on the learning objects in $O$. The interpretation of $\preccurlyeq$ is that, given any two learning objects $o$ and $o'$ in $O$, $o \preccurlyeq o'$ holds if a learner who masters $o'$ also masters $o$. The concept of a surmise relation was introduced by [5] as one of the fundamental concepts of a theoretical framework called Knowledge Space Theory. According to this theory, the knowledge state of a learner is the subset $K$ of all learning objects in $O$ that (s)he masters. A subset $K \subseteq O$ is said to be a knowledge state of the surmise relation $\preccurlyeq$ if $o \in K$ and $o' \preccurlyeq o$ implies $o' \in K$ for all learning objects $o, o' \in O$. The collection $\mathcal{K}$ of all knowledge states of $\preccurlyeq$ is then called the knowledge space derived from $\preccurlyeq$. Knowledge spaces are used for representing and assessing the knowledge state of a learner (see, e.g., [5] and [6] in this connection).
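To make the definition concrete, here is a minimal sketch in Python (ours, not part of the paper) that enumerates the knowledge states of a small, hypothetical surmise relation by brute force:

```python
from itertools import chain, combinations

# Hypothetical learning objects and surmise relation: a pair (a, b) encodes
# a <= b, i.e. from mastery of b one can surmise mastery of a.
objects = {"a", "b", "c"}
surmise = {("a", "b"), ("a", "c")}  # reflexive pairs left implicit

def is_state(K):
    """K is a knowledge state iff it is closed downward under the surmise
    relation: o in K and o' <= o together imply o' in K."""
    return all(lower in K for (lower, upper) in surmise if upper in K)

# The knowledge space: the collection of all knowledge states.
all_subsets = chain.from_iterable(
    combinations(sorted(objects), r) for r in range(len(objects) + 1))
space = [set(s) for s in all_subsets if is_state(set(s))]
print(space)  # [set(), {'a'}, {'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}]
```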
The construction of a surmise relation may follow different approaches. After a brief presentation of an existing approach based on vectors of components of a learning object, we extend this approach to a more flexible representation called an attribute structure [2]. The mathematical properties of attribute structures make it possible to compare distributed learning objects in terms of how informative and how demanding they are.
2. THE COMPONENT APPROACH
According to the component approach [1, 7], every content object $o$ in $O$ is equipped with an ordered $n$-tuple $A = \langle a_1, a_2, \ldots, a_n \rangle$ of attributes, where the length $n$ of the attribute $n$-tuple $A$ is fixed for all objects. Each attribute $a_i$ in $A$ comes from a corresponding attribute set $C_i$ called the $i$-th component of the content object. In this sense, given a collection $C = \{C_1, C_2, \ldots, C_n\}$ of disjoint attribute sets (or components), each object $o \in O$ is equipped with an element of the Cartesian product $P = C_1 \times C_2 \times \cdots \times C_n$. Usually each component $C_i$ is equipped with a partial order $\leqslant_i$, so that $\langle C_i, \leqslant_i \rangle$ is in fact a partially ordered set of attributes. The partial order $\leqslant_i$ is interpreted in the following way: for $a, b \in C_i$, if $a \leqslant_i b$ then a learning object characterized by attribute $a$ is less demanding than a learning object characterized by attribute $b$. To give a simple example, it might be postulated that 'computations involving integer numbers' (attribute $a$) are less demanding than 'computations involving rational numbers' (attribute $b$). A natural order $\leqslant$ on the elements in $P$, the so-called coordinatewise order [4], is derived from the $n$ partial orders $\leqslant_i$ by
$$\langle x_1, x_2, \ldots, x_n \rangle \leqslant \langle y_1, y_2, \ldots, y_n \rangle \iff \forall i : x_i \leqslant_i y_i.$$
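As an illustration of the coordinatewise order, the following sketch (our encoding, with two hypothetical component orders) compares attribute $n$-tuples componentwise:

```python
# Each component C_i carries its own partial order, here encoded as a set of
# (smaller, larger) pairs with reflexivity left implicit. Both orders are
# hypothetical examples.
component_orders = [
    {("integer", "rational")},       # C_1: number domain of the computations
    {("recognition", "recall")},     # C_2: required cognitive operation
]

def leq_i(i, a, b):
    """a <=_i b within component i (reflexive closure of the listed pairs)."""
    return a == b or (a, b) in component_orders[i]

def coordinatewise_leq(x, y):
    """<x_1,...,x_n> <= <y_1,...,y_n> iff x_i <=_i y_i for every i."""
    return all(leq_i(i, a, b) for i, (a, b) in enumerate(zip(x, y)))

print(coordinatewise_leq(("integer", "recognition"),
                         ("rational", "recall")))    # True
print(coordinatewise_leq(("rational", "recognition"),
                         ("integer", "recall")))     # False
```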
If $f : O \to P$ is a mapping assigning an attribute $n$-tuple to each learning object, then a surmise relation $\preccurlyeq$ on the learning objects is established by
$$o \preccurlyeq o' \iff f(o) \leqslant f(o')$$
for all $o, o' \in O$. The mapping $f$ can easily be established even when the learning objects are distributed (see, e.g., [8]).
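Continuing the sketch above, a hypothetical assignment $f$ then yields the surmise relation directly:

```python
# Hypothetical assignment f of attribute n-tuples to three learning objects.
f = {
    "LO_A": ("integer", "recognition"),
    "LO_B": ("rational", "recognition"),
    "LO_C": ("rational", "recall"),
}

def surmisable(o, o_prime):
    """o <= o' holds iff f(o) <= f(o') in the coordinatewise order."""
    return coordinatewise_leq(f[o], f[o_prime])

print(surmisable("LO_A", "LO_C"))  # True: whoever masters LO_C masters LO_A
print(surmisable("LO_B", "LO_A"))  # False: LO_A is less demanding, not more
```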
3. ATTRIBUTE STRUCTURES
An attribute structure is used to represent structured information on a learning object or an asset, and in this sense it represents an extension of the attribute $n$-tuple discussed in Section 2. From a mathematical standpoint, attribute structures correspond to the feature structures introduced by [3] in computational linguistics. Let $C$ be a set of components and $A$ a collection of attributes, with $A \cap C = \emptyset$. An attribute structure is a labeled directed graph $\mathcal{A} = \langle Q, \bar{q}, \alpha, \eta \rangle$ where:

• $Q$ is a set of nodes of the graph;
• $\bar{q} \in Q$ is the root node of the graph;
• $\alpha : Q \to A$ is a partial function assigning attributes to some of the nodes;
• $\eta : Q \times C \to Q$ is a partial function specifying the edges of the graph.
As an example, let
$$C' = \{\text{picture}, \text{topic}, \text{subtopic}, \text{text}, \text{language}\}$$
be a set of components, and
$$A' = \{\text{PICTURE1}, \text{TEXT1}, \text{ENGLISH}, \text{MATH}, \text{MATRIX\_INVERSION}\}$$
be a collection of attributes. Suppose, moreover, that a simple learning object is described by the asset structure $\mathcal{A}_1 = \langle Q_1, \bar{q}_1, \alpha_1, \eta_1 \rangle$, where $Q_1$ is the set of nodes, $\bar{q}_1$ is the root node, and $\alpha_1$ and $\eta_1$ are defined as follows: $\alpha_1(0)$ is not defined, $\alpha_1(1) = \text{PICTURE1}$, $\alpha_1(2) = \text{TEXT1}$, $\alpha_1(3) = \text{MATH}$, $\alpha_1(4) = \text{ENGLISH}$, and $\alpha_1(5) = \text{MATRIX\_INVERSION}$; $\eta_1(0, \text{picture}) = 1$, $\eta_1(0, \text{text}) = 2$, $\eta_1(0, \text{topic}) = 3$, $\eta_1(1, \text{topic}) = 3$, $\eta_1(2, \text{topic}) = 3$, $\eta_1(2, \text{language}) = 4$, and $\eta_1(3, \text{subtopic}) = 5$. The digraph representing this attribute structure is displayed in Figure 1.
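Under a straightforward encoding (ours; the paper does not prescribe one), the structure $\mathcal{A}_1$ defined above can be written down directly, with $\alpha$ and $\eta$ represented as dictionaries:

```python
# The attribute structure A1 = <Q1, q1_bar, alpha1, eta1> as plain dicts:
# alpha is a partial function on nodes, eta a partial function on
# (node, component) pairs giving the target node of each labeled edge.
A1 = {
    "nodes": {0, 1, 2, 3, 4, 5},
    "root": 0,
    "alpha": {1: "PICTURE1", 2: "TEXT1", 3: "MATH",
              4: "ENGLISH", 5: "MATRIX_INVERSION"},   # alpha(0) is undefined
    "eta": {(0, "picture"): 1, (0, "text"): 2, (0, "topic"): 3,
            (1, "topic"): 3, (2, "topic"): 3, (2, "language"): 4,
            (3, "subtopic"): 5},
}
```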
The structure $\mathcal{A}_1$ describes a very simple learning object containing a picture along with some explanatory text. Both the text and the picture have MATH as topic and MATRIX\_INVERSION as subtopic. The root node of the structure is node $0$, and it can easily be checked from the figure that each node can be reached from this node by following some path in the graph.

FIGURE 1: The attribute structure $\mathcal{A}_1$ describes a simple learning object on 'matrix algebra'.

The root node is the entry node in the asset structure of the learning object, and the edges departing from this node specify the main components of the learning object itself. Thus, in our example, the learning object represented by $\mathcal{A}_1$ is defined by three different components: picture, text and topic. The values of these three components are the attributes given by $\alpha_1(\eta_1(0, \text{picture})) = \text{PICTURE1}$, $\alpha_1(\eta_1(0, \text{text})) = \text{TEXT1}$, and $\alpha_1(\eta_1(0, \text{topic})) = \text{MATH}$.
Observe, for instance, that node $5$ can be reached from node $0$ following the path $\langle \text{picture}, \text{topic}, \text{subtopic} \rangle$. The fact that, in this example, each node is reachable from the root node through some path is not a coincidence: it is explicitly required that every node in an attribute structure be reachable from the root node.
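This well-formedness requirement is easy to check mechanically; a sketch, reusing the encoding above:

```python
def all_reachable(A):
    """Check the well-formedness requirement: every node of the attribute
    structure can be reached from the root via the edges in eta."""
    seen, frontier = {A["root"]}, [A["root"]]
    while frontier:
        q = frontier.pop()
        for (source, _component), target in A["eta"].items():
            if source == q and target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen == A["nodes"]

print(all_reachable(A1))  # True
```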
4. COMPARING AND COMBINING ATTRIBUTE STRUCTURES
Attribute structures can be compared with one another. Informally, an attribute structure $\mathcal{A}$ subsumes another attribute structure $\mathcal{A}'$ (denoted by $\mathcal{A} \sqsubseteq \mathcal{A}'$) if $\mathcal{A}'$ contains at least the same information as $\mathcal{A}$. In this sense an attribute structure can be thought of as a class of learning objects (the class of all learning objects represented by that structure), and $\sqsubseteq$ can be regarded as a partial order on such classes. Formally, an attribute structure $\mathcal{A} = \langle Q, \bar{q}, \alpha, \eta \rangle$ subsumes another attribute structure $\mathcal{A}' = \langle Q', \bar{q}', \alpha', \eta' \rangle$ if there exists a mapping $h : Q \to Q'$ fulfilling the three conditions

(1) $h(\bar{q}) = \bar{q}'$;
(2) for all $q \in Q$ and all $c \in C$, if $\eta(q, c)$ is defined then $h(\eta(q, c)) = \eta'(h(q), c)$;
(3) for all $q \in Q$, if $\alpha(q)$ is defined then $\alpha(q) = \alpha'(h(q))$.
In Section 2 the attribute sets were assumed to be partially ordered according to pedagogical criteria and/or cognitive demands. Similarly, we now assume that a partial order $\leqslant$ is defined on the set $A$ of attributes so that, given two attributes $a, b \in A$, if $a \leqslant b$ then a learning object defined by attribute $a$ is less demanding than a learning object defined by attribute $b$. The subsumption relation $\sqsubseteq$ is then made consistent with $\leqslant$ by replacing condition (3) with

(4) for all $q \in Q$, if $\alpha(q)$ is defined then $\alpha(q) \leqslant \alpha'(h(q))$.

According to this new definition, if $\mathcal{A} \sqsubseteq \mathcal{B}$ then $\mathcal{A}$ is either less informative than $\mathcal{B}$, or less demanding than $\mathcal{B}$, or both.
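Both versions of the subsumption test can be sketched in one routine. Since every node is reachable from the root, a witness mapping $h$, if it exists, is forced by propagating along the edges from the root. The following sketch (our encoding, not from the paper) implements conditions (1), (2) and (3), with condition (4) obtained by passing an attribute order:

```python
def subsumes(A, B, attr_leq=lambda a, b: a == b):
    """Test A ⊑ B under the dict encoding used above. h(root_A) = root_B
    (condition 1), and condition (2) forces h along the edges. The default
    attr_leq checks condition (3); passing an order on the attributes
    yields the order-consistent condition (4) instead."""
    h = {A["root"]: B["root"]}
    frontier = [A["root"]]
    while frontier:
        q = frontier.pop()
        for (source, c), target in A["eta"].items():
            if source != q:
                continue
            if (h[q], c) not in B["eta"]:
                return False              # condition (2): edge missing in B
            image = B["eta"][(h[q], c)]
            if target in h:
                if h[target] != image:
                    return False          # condition (2): h would be ambiguous
            else:
                h[target] = image
                frontier.append(target)
    # Condition (3)/(4): wherever alpha is defined in A, the image node in B
    # must carry an attribute that matches (or dominates) it.
    return all(q in h and attr_leq(a, B["alpha"].get(h[q]))
               for q, a in A["alpha"].items())

# Hypothetical attribute order for condition (4): computing a matrix product
# is assumed less demanding than inverting a matrix.
attr_order = {("MATRIX_PRODUCT", "MATRIX_INVERSION")}
demand_leq = lambda a, b: b is not None and (a == b or (a, b) in attr_order)
```

With the structures of Figure 2 encoded in the same way, this test would accept the mappings $g$ and $h$ of the following example, and reject any mapping between LO2 and LO3 in either direction.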
As an example, consider the three attribute structures depicted in Figure 2. Assuming that $\text{MATRIX\_PRODUCT} \leqslant \text{MATRIX\_INVERSION}$, both mappings $g$ and $h$ fulfill conditions (1), (2) and (4); thus both attribute structures labeled LO2 and LO3 subsume the attribute structure labeled LO1. However, there is no mapping from LO2 to LO3 fulfilling the subsumption conditions, nor one in the opposite direction; thus these last two structures are incomparable to each other.

FIGURE 2: Both LO2 and LO3 subsume LO1. However, LO2 and LO3 are incomparable.

The derivation of a surmise relation for the learning objects parallels that established in Section 2. If $s : o \mapsto s(o)$ is a mapping assigning an attribute structure to each learning object, then a surmise relation $\preccurlyeq$ on the learning objects is derived by
$$o \preccurlyeq o' \iff s(o) \sqsubseteq s(o')$$
for all $o, o' \in O$.
Two binary operations are defined on attribute structures: unification and generalization. Mathematically, the unification of two attribute structures $\mathcal{A}$ and $\mathcal{B}$ (denoted by $\mathcal{A} \sqcup \mathcal{B}$), when it exists, is the least upper bound of $\{\mathcal{A}, \mathcal{B}\}$ with respect to the subsumption relation. Dually, the generalization (denoted by $\mathcal{A} \sqcap \mathcal{B}$) is the greatest lower bound. In particular, for any two attribute structures $\mathcal{A}$ and $\mathcal{B}$ it holds that
$$\mathcal{A} \sqsubseteq \mathcal{A} \sqcup \mathcal{B}, \qquad \mathcal{B} \sqsubseteq \mathcal{A} \sqcup \mathcal{B},$$
$$\mathcal{A} \sqcap \mathcal{B} \sqsubseteq \mathcal{A}, \qquad \mathcal{A} \sqcap \mathcal{B} \sqsubseteq \mathcal{B}.$$

FIGURE 3: Generalization of two asset structures.
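Under the dict encoding used above, generalization can be sketched as a synchronous traversal from the two roots that keeps only what the structures share; unification would dually merge edges from both sides, failing on attribute clashes (omitted here). The attr_meet parameter is a hypothetical hook supplying greatest lower bounds of attributes:

```python
def generalize(A, B, attr_meet=lambda a, b: a if a == b else None):
    """Sketch of A ⊓ B: a synchronous traversal from the two roots keeps an
    edge only when both structures have it, so the result is the common
    skeleton; attributes are kept where attr_meet yields a lower bound."""
    root = (A["root"], B["root"])
    nodes, alpha, eta = {root}, {}, {}
    frontier = [root]
    while frontier:
        q, r = frontier.pop()
        a, b = A["alpha"].get(q), B["alpha"].get(r)
        if a is not None and b is not None:
            meet = attr_meet(a, b)
            if meet is not None:
                alpha[(q, r)] = meet
        for (source, c), q2 in A["eta"].items():
            if source == q and (r, c) in B["eta"]:
                target = (q2, B["eta"][(r, c)])
                eta[((q, r), c)] = target
                if target not in nodes:
                    nodes.add(target)
                    frontier.append(target)
    return {"nodes": nodes, "root": root, "alpha": alpha, "eta": eta}

# Hypothetical meet reusing attr_order from the previous sketch: the greatest
# lower bound of two comparable attributes is the less demanding one.
def demand_meet(a, b):
    if a == b:
        return a
    if (a, b) in attr_order:
        return a
    if (b, a) in attr_order:
        return b
    return None
```

Applied to the two structures of Figure 3, this sketch would return the shared topic/subtopic skeleton, with MATH as topic and, under demand_meet, MATRIX\_PRODUCT as the lower bound of the two subtopics.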
When two different learning objects are merged, or when different assets are assembled into a single learning object, the corresponding attribute structures are unified, and the resulting attribute structure is assigned to the resulting learning object. The generalization operation, on the other hand, is used to find the common structure of two or more learning objects or, stated another way, to classify learning objects. An example of the generalization operation applied to two attribute structures is shown in Figure 3. Here, the resulting structure shows that the two learning objects have topic and subtopic in common. Generalized attribute structures can also be used, e.g., for searching a distributed environment for all learning objects whose structure is consistent with a certain 'template' (for instance, to find all learning objects that are 'problems' involving, as cognitive operation, 'recognition' rather than 'recall').
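Such a template search reduces to the subsumption test sketched earlier: a template matches exactly those structures it subsumes. A small, hypothetical example continuing the encoding above:

```python
# A hypothetical 'template': any learning object whose root leads, via a
# topic edge, to MATH with subtopic MATRIX_INVERSION, whatever its media.
template = {
    "nodes": {0, 1, 2},
    "root": 0,
    "alpha": {1: "MATH", 2: "MATRIX_INVERSION"},
    "eta": {(0, "topic"): 1, (1, "subtopic"): 2},
}

repository = {"LO1": A1}  # the structure encoded earlier
hits = [name for name, structure in repository.items()
        if subsumes(template, structure, demand_leq)]
print(hits)  # ['LO1']
```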
REFERENCES
[1] Albert, D., Held, T. (1999). Component-based knowledge spaces in problem solving and inductive reasoning. In D. Albert and J. Lukas (Eds.), Knowledge Spaces: Theories, Empirical Research, Applications. Mahwah, NJ: Lawrence Erlbaum Associates.
[2] Albert, D., Stefanutti, L. (2003). Ordering and combining distributed learning objects through skill maps and asset structures. Proceedings of the International Conference on Computers in Education (ICCE 2003), Hong Kong, 2-5 December.
[3] Carpenter, B. (1992). The Logic of Typed Feature Structures. Cambridge Tracts in Theoretical Computer Science. Cambridge: Cambridge University Press.
[4] Davey, B. A., Priestley, H. A. (2002). Introduction to Lattices and Order (2nd ed.). Cambridge: Cambridge University Press.
[5] Doignon, J.-P., Falmagne, J.-C. (1985). Spaces for the assessment of knowledge. International Journal of Man-Machine Studies, 23, 175-196.
[6] Doignon, J.-P., Falmagne, J.-C. (1999). Knowledge Spaces. Berlin: Springer-Verlag.
[7] Schrepp, M., Held, T., Albert, D. (1999). Component-based construction of surmise relations for chess problems. In D. Albert and J. Lukas (Eds.), Knowledge Spaces: Theories, Empirical Research, Applications. Mahwah, NJ: Lawrence Erlbaum Associates.
[8] Stefanutti, L., Albert, D., Hockemeyer, C. (2003). Derivation of knowledge structures for distributed learning objects. Proceedings of the 3rd International LeGE-WG Workshop, 3rd December.