Volume 6(2), 120—139. http://dx.doi.org/10.18608/jla.2019.62.9
An Infrastructure for Workplace Learning Analytics: Tracing Knowledge Creation with the Social Semantic Server
Adolfo Ruiz-Calleja1*, Sebastian Dennerlein2, Dominik Kowald2, Dieter Theiler2, Elisabeth
Lex2,3, Tobias Ley4
Abstract
In this paper, we propose the Social Semantic Server (SSS) as a service-based infrastructure for workplace and
professional learning analytics (LA). The design and development of the SSS have evolved over eight years, starting
with an analysis of workplace learning inspired by knowledge creation theories and their application in different
contexts. The SSS collects data from workplace learning tools, integrates it into a common data model based on a
semantically enriched artifact-actor network, and offers it back for LA applications to exploit the data. Further, the
SSS design’s flexibility enables it to be adapted to different workplace learning situations. This paper contributes by
systematically deriving requirements for the SSS according to knowledge creation theories, and by offering support
across a number of different learning tools and LA applications integrated into the SSS. We also show evidence
for the usefulness of the SSS extracted from 4 authentic workplace learning situations involving 57 participants.
The evaluation results indicate that the SSS satisfactorily supports decision making in diverse workplace learning
situations and allow us to reflect on the importance of knowledge creation theories for this analysis.
Notes for Practice
• We propose the Social Semantic Server (SSS) as a service-based infrastructure for workplace and professional learning analytics (LA) that focuses on knowledge creation theories.
• We identify the requirements for the SSS and present its design and development.
• We evaluated the SSS by integrating a set of learning tools and LA applications into the SSS and using it in 4 authentic workplace learning situations involving 57 participants.
Keywords
Learning analytics, informal learning, workplace learning, artifact-actor network, data infrastructure
Submitted: 28.06.2018 — Accepted: 01.02.2019 — Published: 05.08.2019
Corresponding author¹
1 Email: adolfo@tlu.ee Address: School of Digital Technologies, Tallinn University, Narva mnt 25, 10120, Tallinn, Estonia
2 Address: Know-Center GmbH, Research Center for Data-Driven Business and Big Data Analytics, Inffeldgasse 13, 6th floor, 8010 Graz, Austria
3 Address: Graz University of Technology, Institute of Interactive Systems and Data Science, Inffeldgasse 13, 5th floor, 8010 Graz, Austria
4 Address: School of Educational Sciences, Tallinn University, Narva mnt 25, 10120, Tallinn, Estonia
1. Introduction
Workplace and professional learning happens across a multitude of formal and informal settings where professionals advance
their competence, mostly in a self-directed manner. Workplace learning can be a rather informal way of gaining knowledge and
expertise by self-directed exploration and social exchange that is tightly connected to the processes and the places of work (Eraut,
2004). In contrast to formal education, workplace learning is often driven by personal interest or by problems that appear in the
work context. It typically lacks a pedagogical design to guide the learning process (Kooken, Ley, & De Hoog, 2007). While
professionals are also involved in more formal learning settings, such as training, they are commonly motivated by job-based
demands and the need to contribute to workplace performance. The fact that workplace learning is multi-episodic, happens
across diverse contexts, and is tightly coupled with the workplace poses several challenges for the design and development of
technology that supports and analyzes workplace learning (Klamma, 2013).
In this paper, we address challenges related to learning analytics (LA) in workplace settings. LA collects data about learning
processes and feeds it back to learners or trainers to support their decisions about their own or others’ learning. LA in the
workplace faces a number of challenges (Cardinali, 2015; Ruiz-Calleja, Dennerlein, Ley, & Lex, 2016; Ruiz-Calleja, Prieto, Ley, Rodríguez-Triana, & Dennerlein, 2017). For example, learners use a number of learning tools in a spontaneous and
difficult-to-foresee way because learning does not follow a planned curriculum or pedagogical design. A number of such
learning tools have been proposed to support specific workplace learning tasks, such as the creation of portfolios that allow
learning to be traced in multiple contexts (Krull & Leijen, 2015), or peer discussions to support help-seeking (Santos et al.,
2016). However, if we want to look at workplace and professional learning processes across different tools, contexts, and
learning tasks, then LA needs to take a more holistic perspective.
To provide this more holistic picture for workplace LA, we need to coherently analyze data from several tools used for
learning in the workplace. Some LA infrastructures have been proposed to collect, integrate, and process data from several
learning tools. While some of these proposals have been designed and tested in realistic situations (e.g., Renzel & Klamma,
2013; Siadaty et al., 2012), most of them still focus on a limited number of learning tasks and tools. Additionally, a holistic
perspective requires us to rely on a careful analysis of existing learning theories, one of the major challenges in the LA
community (Gašević, Dawson, & Siemens, 2015). For workplace and professional learning, this is even more critical because
there is no curriculum or pedagogical design to guide the analysis. Thus, a focus on a particular learning theory is crucial to
guide the processes of collecting, managing, and representing workplace learning data. Much too often, theoretical claims
remain implicit.
In a recent review, we analyzed existing proposals for workplace LA (Ruiz-Calleja et al., 2017). This analysis led us to
conclude that most current proposals focus on theories that follow knowledge acquisition or participation metaphors (Paavola
& Hakkarainen, 2005). In these cases, individuals are understood as the basic unit of knowing and learning (knowledge
acquisition), or learning is seen as an interactive process of participating in cultural practices (participation). We see much less
focus on the knowledge creation metaphor (Paavola & Hakkarainen, 2005), which considers learning as a joint development
of objects of activity. This is especially true for LA infrastructures that allow us to trace learning processes across several
learning tools and contexts. This constitutes a significant problem because it means missing essential elements of a learning
situation, such as how new knowledge is created or how innovation processes happen in communities. Considering that learning
should focus on innovation, creative problem solving, and knowledge creation (Peschl & Fundneider, 2014) in order to keep a
competitive edge in the current knowledge-based economy, this missing emphasis on the knowledge creation metaphor in LA is
especially troublesome.
To address these limitations, we propose to exploit the Social Semantic Server (SSS) (Dennerlein, Kowald, et al., 2015) as
an infrastructure for workplace LA. This paper systematically derives the SSS requirements according to knowledge creation
theories, with a special focus on how data from different learning tools is coherently combined and offered back to LA
applications. We also illustrate the support offered by the SSS across a number of different learning tools and settings and
collect evidence for its usefulness from four evaluation studies. These studies allowed us to reflect on the importance of
knowledge creation theories for workplace LA.
The rest of the paper is structured as follows: first, we summarize the state of the art related to workplace LA infrastructures; then we describe the SSS, whose evaluation is subsequently presented and discussed; we end the paper with our conclusions.
2. Data Infrastructure for Workplace LA
The field of workplace LA is still in its early stages of development, but interest in it has increased in the last few years
(Ruiz-Calleja, Prieto, Ley, Rodríguez-Triana, & Dennerlein, 2017). Some LA projects, such as LACE¹, moved their attention to the workplace domain (Cardinali, 2015), and other workplace learning projects, such as Learning Layers², began to use LA to analyze and support learning processes (Ruiz-Calleja, Dennerlein, Ley, & Lex, 2016). These projects exploited LA
techniques as a way to assess or support decision making in workplace learning processes. These proposals are implicitly
or explicitly grounded by particular learning theories (Gašević, Dawson, & Siemens, 2015). We reviewed them following
the three metaphors of learning — knowledge acquisition, participation, and knowledge creation — defined by Paavola &
Hakkarainen (2005). These metaphors can be understood as different lenses for the design or analysis of learning situations and
are “closely connected to the way knowledge is understood in different conceptions of learning” (Paavola & Hakkarainen, 2005).
The metaphors help us to understand the assumptions that guide the creation of existing LA applications and infrastructures,
especially those assumptions related to how knowledge is represented.
Many LA proposals follow the knowledge acquisition metaphor. This metaphor assumes individuals as the basic unit
of learning. Hence, these LA applications commonly model the learners according to the knowledge they acquired (e.g.,
Ley & Kump, 2013; Niemann & Wolpers, 2014). Depending on the learning tools, this knowledge may be stated as a set of
competencies (e.g., Krull & Leijen, 2015) or as a set of topics for which the learner is considered an expert (e.g., Ley & Kump,
1http://www.laceproject.eu/
2http://learning-layers.eu
2013). These LA systems typically use ontologies to structure the data they manage (e.g., Nussbaumer et al., 2012; Siadaty, Gašević, & Hatala, 2016b) or other formal conceptualizations of the learning domain, such as knowledge spaces (e.g., Ley & Kump, 2013).
Other proposals (e.g., Rajagopal, van Bruggen, & Sloep, 2017; Buckingham-Shum & Ferguson, 2012) follow the participation metaphor, which assumes that learning happens by participating in cultural practices that shape cognitive activity
in manifold ways. These LA applications focus on modelling learning communities and groups depending on their social
behaviour. Therefore, they create social networks to abstract the social interactions that occur in the tool. In many cases, social
network analysis techniques are employed to extract the community’s expertise about certain topics or to detect communities
(Klamma, 2013) and unconnected subnetworks in professional networks (e.g., de Laat & Schreurs, 2013).
Other examples (e.g., Derntl, Günnemann, & Klamma, 2013; Southavilay, Yacef, Reimann, & Calvo, 2013) can be found that follow the knowledge creation metaphor, which deals with the collaborative and systematic development of common objects of activity. These LA applications model how learning materials and conceptual artifacts are collaboratively created (Schoefegger, Seitlinger, & Ley, 2010; Thüs, Chatti, Brandt, & Schroeder, 2015). The group of learners is taken as the unit of
analysis, considering also their tools and common artifacts (Berendt, Vuorikari, Littlejohn, & Margaryan, 2014; Buckingham-
Shum & Ferguson, 2012). Hence, interactions between learners and artifacts and the contexts in which these interactions happen
are taken into account, creating a context-aware artifact-actor network (AAN) (Ruiz-Calleja, Dennerlein, Tomberg, Ley, et al.,
2015), which is then exploited to understand the evolution of the learners and artifacts (e.g., Fidalgo-Blanco, Sein-Echaluce,
García-Peñalvo, & Conde, 2015; Thüs, Chatti, Brandt, & Schroeder, 2015). It is typical for such systems to make use of
folksonomies, enabling users to introduce new and unexpected terms or topics (Schmidt et al., 2009). The number of LA
proposals that follow this metaphor is much lower than in the previous two metaphors. This is surprising because of the long
history of knowledge creation theory (Nonaka, 1994) and its recognized importance in workplace learning and professional
development in the knowledge society (Paavola & Hakkarainen, 2005).
Another restriction shared by all of the LA proposals presented above is that they only collect and process data from a single
application. Consequently, these proposals put less emphasis on the reusability of the data they manage and the algorithms
employed to manipulate this data. However, the LA vision goes beyond these restrictions (Siemens et al., 2011): learners,
especially workplace learners, typically employ several tools in a way that is difficult to foresee. Researchers are therefore
encouraged to develop open proposals that enable the integration of content and data from different sources. These proposals
should also be extensible for third parties to integrate their own data sources and data processing techniques. Following this
vision, several LA infrastructures were proposed to enhance the integration of data from several tools as well as the reusability
of this data and the algorithms to process it (Duval, 2011) (see Table 1). Next, we will review these proposals.
Several authors propose infrastructures that exploit the data collected by learning management systems (LMSs) in formal
learning contexts. LMSs successfully integrate data from different tools, but they structure learning processes according to
a pre-described pedagogical design that does not commonly exist in workplace learning. Nonetheless, some authors exploit
the data collected by LMSs for analysis that goes beyond the pedagogical design. For example, Fidalgo-Blanco et al. (2015)
use Moodle data to assess the individual contributions in teamwork activities. For this purpose, they analyze the relationships
between learners and between learners and learning artifacts inside each team. They focus on a set of indicators to understand
and assess how the team collaborated. Another very interesting proposal is the Connected Learning Analytics (CLA) toolkit
(Bakharia, Kitto, Pardo, Gašević, & Dawson, 2016). A distinguishing characteristic of the CLA toolkit is that it collects data
from social media applications (e.g., Facebook and Twitter) to analyze student behaviour and student relationships. The authors
applied the CLA toolkit in different “student-facing LA” case studies (Kitto, Lupton, Davis, & Waters, 2017) to make learners
reflect upon and change their learning behaviour.
Other informal learning infrastructures integrate tools and support LA without a pedagogical design. An example is the
ROLE Sandbox³, a widget-based personal learning environment built on theories of self-regulated learning (Kravcik & Klamma, 2012). In Renzel & Klamma (2013), ROLE Sandbox log data is exploited to extract statistics and to define several social
2012). In Renzel & Klamma (2013), ROLE Sandbox log data is exploited to extract statistics and to define several social
networks. The authors argue that directly analyzing web log data has the advantage of guaranteeing data interoperability among
different services without needing to develop new data standards. This solution technically enables data sharing and processing,
but web log data does not include any kind of learning concept, which hinders the integration and reuse of learning-specific
information. In order to overcome this limitation, other technical frameworks and infrastructures have been proposed. Two
interesting examples are the Contextualized Attention Metadata (CAM) framework (Schmitz, Wolpers, Kirschenmann, &
Niemann, 2011) and Learn-B (Siadaty et al., 2012). Their aim is to enhance the collection, processing, and offering of learning
data. The CAM framework is used to log activities from those tools that generate CAM records; the logs can then be offered
to analyze the learning process. The CAM framework has already been employed for several purposes, including learning
object classification, competency detection, emotional state recognition, and goal and intention detection (see Schmitz et al.
2011). On the other hand, Learn-B is a service-based software environment designed to support self-regulated learning in the
3http://role-sandbox.eu
workplace. Learn-B was employed in several studies of the IntelLEO project⁴ for several purposes, such as assessing the impact of scaffolding practices in workplace environments (Siadaty, Gašević, & Hatala, 2016a, 2016b). Following the knowledge acquisition metaphor of learning, both the CAM framework and Learn-B defined ontologies to structure activity logs. Both are also able to describe the contexts where the learning activities happen.

Table 1. Comparison of Several Infrastructures and Frameworks for LA

| | Knowledge metaphor | Retrieves data from | Extensibility | Informal learning | Workplace learning | Data model | API |
|---|---|---|---|---|---|---|---|
| Fidalgo-Blanco et al. | K. creation | Moodle | Low | Partly | No | AAN | |
| CLA toolkit | K. creation | Social media | High | Partly | No | Ontology-based JSON-LD | xAPI |
| ROLE Sandbox | Participation | ROLE widgets | Low | Yes | Partly | Social network | REST |
| CAM framework | K. acquisition | Integrated tools | High | Yes | Partly | Ontology-based | |
| Learn-B | K. acquisition | Integrated tools | High | Yes | Yes | Ontology-based | |
| Apereo LAI | K. acquisition | xAPI tools | High | No | No | Ontology-based | xAPI |
| Watershed LRS | K. acquisition | xAPI tools | Low | Yes | Partly | Ontology-based | xAPI |
| SSS | K. creation | Integrated tools | High | Yes | Yes | Context-aware AAN | REST |
As several infrastructures were proposed, how to share data among them became a relevant problem. For this reason, open
software and standards were promoted for LA (Siemens et al., 2011). In this regard, Experience API (xAPI)⁵ was adopted
as a de facto standard for the exchange of learning data. Its main idea is to define a common data format between learning
tools and infrastructures to exchange information about learning events. According to xAPI, each learning event is defined
by a quadruple: a subject, a verb, an object, and a context, while an ontology should be implemented to define the different
elements of the quadruple. As an example, Bakharia et al. (2016) use JSON for Linked Data (JSON-LD) to define an extensible
vocabulary for xAPI statements.
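To make the quadruple concrete, the following sketch shows a simplified xAPI statement embedded in a Java text block; the actor, object, and extension identifiers are hypothetical placeholders, and the structure is reduced from the full xAPI specification.

```java
public class XapiStatementExample {
    // A simplified xAPI statement: subject (actor), verb, object, and context.
    // All identifiers below are hypothetical placeholders.
    static final String STATEMENT = """
        {
          "actor":   { "mbox": "mailto:paul@example.org", "name": "Paul" },
          "verb":    { "id": "http://adlnet.gov/expapi/verbs/shared",
                       "display": { "en-US": "shared" } },
          "object":  { "id": "http://example.org/artifacts/guideline-42",
                       "definition": { "name": { "en-US": "Work-process guideline" } } },
          "context": { "platform": "Bookmarker",
                       "extensions": { "http://example.org/ext/tag": "useful" } }
        }
        """;

    public static void main(String[] args) {
        System.out.println(STATEMENT); // payload would be POSTed to a learning record store
    }
}
```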
Some learning record stores (LRSs) have also been proposed to collect and manipulate xAPI data. Examples of LRSs with
an open licence are Learning Locker⁶ and Larissa⁷. Other initiatives have built on top of them to create open infrastructures or toolkits that can be adapted to different learning situations. Some examples are Starfish Analytics⁸, Jisc Learning Analytics⁹, and Apereo Learning Analytics Initiative (LAI)¹⁰ (we included Apereo LAI in Table 1 as an example to represent this group
of LRSs). These proposals support the collection and storage of xAPI data. They also offer some data analysis services
and some user interfaces (e.g., SNA algorithms and dashboards). These services and interfaces can be adapted to different
learning situations, or new services can be integrated into the toolkit. However, all of these initiatives focus on formal learning
and take an institutional perspective. An interesting example of an xAPI-compliant infrastructure that supports informal
learning is Watershed LRS¹¹. The data analysis of Watershed LRS again follows the knowledge acquisition approach in a rather
individualistic way. However, it is a closed infrastructure, offered as a cloud service, that cannot be extended by third parties.
3. The SSS
The SSS (Dennerlein, Kowald, et al., 2015) is an infrastructure that collects data from workplace learning tools and offers
it back to be used by LA applications. It evolved over eight years from a close analysis of workplace learning practices in
different domains carried out in the MATURE project¹² (Ravenscroft, Schmidt, Cook, & Bradley, 2012). More recently, it has been applied in the Learning Layers project¹³ to support informal workplace learning (Ley et al., 2014) with a special focus
on small and medium-size enterprises working in innovation-driven domains. Its theoretical roots lie in knowledge creation
theories. Moreover, its design was based on a number of additional empirical studies, such as in-depth case studies of workplace
and organizational learning (e.g., Kaschig et al., 2012) and a number of design-based research activities in several contexts
(Dennerlein, Theiler, et al., 2015). These studies contributed to the understanding of how individual, group, and organizational
learning are intertwined in knowledge creation. To name just a few examples, the studies found out how professionals make
sense of experiences and informally learn from them (Dennerlein et al., 2014), how help-seeking happens in professional
4http://intelleo.eu/index.php
5https://experienceapi.com
6https://www.ht2labs.com/learning-locker-community/overview/
7https://github.com/Apereo-Learning-Analytics-Initiative/Larissa
8https://www.starfishsolutions.com/home/starfish-enterprise-success-platform/starfish-analytics/
9https://www.jisc.ac.uk/learning-analytics
10https://www.apereo.org/communities/learning-analytics-initiative
11http://www.watershedlrs.com
12https://mature-ip.eu/
13http://learning-layers.eu
networks (Santos et al., 2016), and how organizations create boundary objects to facilitate knowledge sharing (Kaschig et al., 2012). The SSS was designed by deriving requirements from the tools and services needed to support these empirical studies.

Figure 1. Potential scenario supported by the SSS
3.1 Requirements for the SSS
Figure 1 depicts a typical scenario for a workplace LA infrastructure. Workplace learning participants (e.g., workers or trainers)
use a set of tools to learn at the workplace. The workplace LA infrastructure collects the learning events from these tools and
creates a coherent dataset out of them. This data is then offered back to workplace LA applications to support the decision
making of workplace learning participants based on their learning evidence.
The SSS has evolved into an open-source infrastructure designed to address the knowledge creation metaphor for this type
of scenario. It is not restricted to a specific domain or activity, so it should support a wide range of workplace learning scenarios
(REQ1 in Table 2 and Figure 1), which may differ in the way they are enacted, in their number of participants, and in their
level of formality (Kooken, Ley, & De Hoog, 2007). Therefore, the SSS should be flexible enough to adapt to many different
learning situations.
The SSS is meant to be used during normal activity in real work environments (REQ2). Hence, the SSS should be able to
remove the inherent boundaries from the large variety of tools that are currently used for workplace learning (Kooken, Ley,
& De Hoog, 2007). It is well known that in real work environments different tools are used for learning purposes (Cardinali,
2015). Hence, the SSS should collect the learning events tracked by different tools and integrate them (REQ3). It should also
enable different integration strategies, because the technical aspects of these tools may also differ.
The data collected by the SSS should be offered back to LA applications later on (REQ4). Therefore, the SSS should offer a
data-access API for external applications. It would also be desirable for the SSS to allow the definition of new data APIs. Thus,
the wide range of LA applications that are currently used for workplace learning (Ruiz-Calleja, Prieto, Ley, Rodríguez-Triana, & Dennerlein, 2017) could potentially exploit the SSS's data.
Other requirements related to how the SSS structures the knowledge derive from its focus on the knowledge creation
metaphor (REQ5). As we showed in the previous section, the LA applications that follow this metaphor establish an AAN.
This AAN should be able to describe different relationships between actors and artifacts (e.g., resource creation or access). It is
key for this metaphor to track the context in which interactions between learners, and between learners and artifacts, take place.
The special focus of knowledge creation on emerging knowledge also requires the SSS to represent different data structures at different levels of maturity (Ruiz-Calleja, Prieto, Ley, Rodríguez-Triana, & Dennerlein, 2017). Finally, because the SSS is meant to support decision making in workplace learning practices (REQ6), exploiting the data collected by the SSS is expected to have a positive impact on these practices.
3.2 Design and Implementation of the SSS
3.2.1 SSS Data Model
The requirements of the SSS data model are derived from its focus on the knowledge creation metaphor (REQ5) and the need to
support the semantic integration of data collected from different learning tools (REQ3) with different levels of formality (REQ1).
The basis of the SSS data model is an AAN, in accordance with the analysis of LA applications that follows the knowledge
creation metaphor. Therefore, the SSS can explicitly describe the relationships between learners, between artifacts, and between
learners and artifacts. Furthermore, different types of relationships can be defined, giving meaning to the connections among
entities. Some contextual information (e.g., time or some keywords) can also be attached to the entities.
Figure 2. Example of an AAN from the SSS
Figure 3. The SSS core ontology
As an example, a typical situation in informal learning is a worker (let’s call him Paul) sharing a document that describes
a guideline for a particular work process. A colleague (Peter) finds this document, marks it for his own use, and tags it as
“useful.” In this case, the AAN will register four entities (both users, the document, and the tag) and four relationships with
three different meanings (“tagged,” “shared,” and “isAssignedTo”). In addition, the relationships between the entities will
include the time frame of the event and maybe some contextual information, such as the location or the tool employed for the
event. Figure 2 graphically depicts the resulting AAN from this example.
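As a minimal sketch, this AAN can also be expressed in code as typed, time-stamped edges between nodes. The class names and the exact endpoints of each relationship are our illustrative reading of the scenario, not the actual SSS data structures.

```java
import java.time.Instant;
import java.util.List;

public class AanExample {
    record Node(String id, String kind) {}                        // user, document, or tag
    record Edge(Node from, String relation, Node to, Instant at) {}

    public static void main(String[] args) {
        Node paul  = new Node("Paul", "user");
        Node peter = new Node("Peter", "user");
        Node doc   = new Node("guideline.pdf", "document");
        Node tag   = new Node("useful", "tag");

        // Four entities and four relationships with three meanings, as in the
        // text; which entity anchors each relationship is an assumption here.
        List<Edge> aan = List.of(
            new Edge(paul,  "shared",       doc, Instant.parse("2015-03-01T10:00:00Z")),
            new Edge(peter, "shared",       doc, Instant.parse("2015-03-02T09:30:00Z")), // marks it for his own use
            new Edge(peter, "tagged",       tag, Instant.parse("2015-03-02T09:31:00Z")),
            new Edge(tag,   "isAssignedTo", doc, Instant.parse("2015-03-02T09:31:00Z"))
        );
        aan.forEach(e ->
            System.out.println(e.from().id() + " --" + e.relation() + "--> " + e.to().id()));
    }
}
```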
This AAN can be seen as a high-level abstraction that offers a common data model to integrate data from multiple learning
tools. However, the entities related to the AAN (actors, artifacts, relationships, and contexts) should be semantically described
if the semantic integration of data is required. For this reason, the SSS includes a core ontology. This ontology is used to
describe the entities in the AAN, their relationships, and the parameters to define the context where these relationships happen.
Hence, the data model of the SSS is based on a context-aware and semantically enriched AAN.
Figure 3 represents the entities and the most important relationships in the SSS core ontology. The main entities to define nodes in the AAN are User and Entity. Users can be aggregated into Circles (i.e., an abstraction similar to Google+ Circles, to aggregate users into groups) and entities into Spaces (i.e., an abstraction similar to Dropbox spaces, to aggregate documents into folders). Then, some metadata can be attached to the entities (Rating and Tag). Finally, the Activity is used to trace activities where users and entities are involved. Note that the SSS core ontology does not include a concept to define contexts because the parameters related to learning contexts highly depend on each specific situation. For example, in some situations, the context is related to the metadata attached to the entity (e.g., tags), while in others it is related to the spaces to which the entities are aggregated.
This ontology can be extended later on to include additional concepts for specific learning situations. For instance, in order to describe the AAN depicted in Figure 2, the concept Document should be defined as a subclass of Entity, and the relationship Shared should also be defined. In some other cases, ontology extensions are used to define parameters to describe learning contexts, such as location. It should be noted that the SSS faces a well-known trade-off: ontologies are more expressive vocabularies than folksonomies and are used to model domains or to allow the semantic integration of several applications; however, ontologies are sometimes not able to collect emerging knowledge because they are more difficult to modify and evolve. For this reason, ontology extensions can provide further structure by semantically defining narrower concepts (e.g., different types of "artifacts"), but this semantic definition can also be avoided (e.g., by defining a folksonomy of tags). Thus, the SSS can integrate data with different levels of formality (Ruiz-Calleja, Dennerlein, Tomberg, Ley, et al., 2015).
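The following sketch renders the core concepts and one extension as Java types, purely for illustration; the actual SSS ontology is not defined as Java classes, and the Document subclass is the hypothetical extension discussed above.

```java
import java.util.List;

// Illustrative rendering of the SSS core ontology as Java types.
class User {}
class Entity {}
class Circle { List<User> members; }           // aggregates users into groups
class Space  { List<Entity> entities; }        // aggregates entities, like folders
class Tag    { Entity target; String label; }  // free (folksonomy) annotation
class Rating { Entity target; int stars; }
class Activity { User actor; Entity object; String verb; long timestamp; }

// Hypothetical extension for Figure 2: a semantically narrower concept.
// Such extensions are optional; plain tags keep the data less formal.
class Document extends Entity { String title; }
```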
Figure 4. SSS service architecture
3.2.2 SSS Software Architecture and Implementation
The SSS software infrastructure should allow the integration of a wide variety of tools used for learning in the workplace
(REQ3 and REQ2). It should also be flexible, to be adapted to different informal learning situations (REQ1), and extensible, so
new functionalities can be offered to data-consuming applications (REQ4).
The SSS software architecture follows the service-oriented architecture (SOA) style (Erl, 2005). SOA promotes architectures based on the light integration of loosely coupled services that offer a granular functionality and can be orchestrated
to provide a more complex functionality. SOA leads to flexible and modular architectures, since services can be exchanged
if needed. This is achieved by dividing the functionality of the SSS into fine granular services that can be easily maintained,
reused, combined, and replaced. Thus, by adding new services or configuring the existing ones, the SSS can be extended to
offer additional functionalities or it can be adapted to specific learning scenarios.
Figure 4 depicts the architecture of each SSS service. Each service comprises a set of Service Implementations, a Service Implementation Registry, and a Service API. Each service may include several Service Implementations (or just one) that offer the same functionalities in different ways. For example, a tag recommendation service may have two implementations, each of them based on different recommendation algorithms. Each service may define its own Datatypes and can have some Configuration parameters. In the previous example, the configuration parameters define the data sources accessed by the implementations of the recommender service or the way each result is ordered. This way, it is possible to modify the internal logic of the services (and adapt their functionality accordingly) without needing to change the Service API. Each Service Implementation also includes a Data Access Interface, which is used to access data sources. These data sources can be databases integrated into the SSS (e.g., MySQL) or external data sources (e.g., external applications that share their data with the SSS). The Service Implementation Registry mediates the communication between the Service API and the Service Implementations, routing each API call to the corresponding implementation through its Service Container.
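In code, the pattern can be sketched roughly as follows; the interface and class names are illustrative and do not mirror the SSS's actual identifiers.

```java
import java.util.List;
import java.util.Map;

// Sketch of the per-service structure in Figure 4.
interface TagRecommenderApi {                          // the stable Service API
    List<String> recommendTags(String userId, String entityId, int maxTags);
}

class MostPopularImpl implements TagRecommenderApi {   // one Service Implementation
    public List<String> recommendTags(String userId, String entityId, int maxTags) {
        return List.of("useful");                      // placeholder logic
    }
}

class ServiceImplementationRegistry {
    // Routes calls to the implementation selected by configuration, so
    // implementations can be swapped without changing the Service API.
    private final Map<String, TagRecommenderApi> implementations;
    ServiceImplementationRegistry(Map<String, TagRecommenderApi> impls) {
        this.implementations = impls;
    }
    TagRecommenderApi resolve(String configuredName) {
        return implementations.get(configuredName);
    }
}
```

With this structure, exchanging the recommendation algorithm only requires registering a different implementation; callers keep programming against the same Service API.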
Figure 5. Possible configuration of the SSS software architecture

Figure 5 represents a possible configuration of the SSS software architecture. It can be seen that external learning tools submit their data to the SSS by calling the Activity service, which traces the interaction between learners and resources. The data is then stored by the Metadata services (e.g., Data Export Service), which manage the datasets of the SSS. These Metadata services also offer abstractions (based on the AAN previously described) for other services to access the data. Thus, the data APIs implemented in the Metadata services wrap the interfaces of the underlying databases, which may vary depending on the data representation used (e.g., SQL, NoSQL, or document-based representations). In this way, the implementation of the data model can be abstracted to fit the business logic of each service. This solution makes the SSS data layer scalable and adaptable. However, it does not guarantee high data-access performance. This is not a problem in practice because learning datasets are typically smaller than in other domains, and current data stores offer enough data-access performance. While it is true that the implementation of the Metadata services may hinder performance because it adds an extra software layer, our previous experience (see the first paragraph of Section 3) shows that this does not represent a problem for users or software developers. Finally, the business logic of the SSS is composed of another collection of services. They can be Simple services, which serve one functionality (e.g., Group Access Restriction, which controls user restrictions) or manage some entity types (e.g., Tag, Collections, Q&A, or Activity). Others are Composed services, which
exploit other services to provide their own functionality (e.g., Search or Recommendations). See Dennerlein, Kowald, et al. (2015) for additional details about these services.
As previously seen, the design of the SSS architecture promotes its flexibility and extensibility. Services can be composed,
and different implementations or configurations of the services can coexist. This way, the SSS can be adapted to different
workplace learning situations. Furthermore, the SSS includes a set of services that can be exploited by several LA functionalities,
such as the Activity service or the Metadata services, while the Service API is offered as a REST API.
Regarding the integration of learning tools and LA applications, the SSS supports two strategies. The first one is a loosely
coupled integration, where external applications make use of the SSS data API either to publish or to retrieve data. In these
cases, the functionality of the SSS does not need to be extended or modified, although an extension of the SSS may be required
to enable the semantic integration of the data retrieved (i.e., new concepts may need to be defined). The second strategy is a
tightly coupled integration, where part of the functionality of the applications is developed as SSS services or as extensions
of already-existing ones. It is sometimes the case that these new services need to define new API methods to offer new
functionalities that could potentially be reused by other applications. Therefore, the SSS can be seen as a framework that
facilitates the development of workplace LA applications (Dennerlein, Kowald, et al., 2015).
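As an illustration of the loosely coupled strategy, a client could publish a learning event over the REST API roughly as below; the host, endpoint path, and JSON payload are hypothetical placeholders rather than the documented SSS API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LooselyCoupledClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoint: publish a learning event to the SSS.
        HttpRequest publish = HttpRequest.newBuilder()
            .uri(URI.create("https://sss.example.org/rest/activities"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(
                "{\"user\":\"peter\",\"verb\":\"tagged\",\"entity\":\"guideline.pdf\"}"))
            .build();
        HttpResponse<String> response =
            client.send(publish, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        // An LA application would retrieve data analogously with a GET request.
    }
}
```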
The implementation of the SSS architecture is based on microservices (Newman, 2015), which have emerged as a novel way to design software in the form of services to promote independence in their own deployment and to deal with technological heterogeneity. This independence of the services increases the flexibility of the SSS infrastructure because some of them can be deployed and others can be integrated if needed. Furthermore, the loose coupling of SSS services also enables their development by third parties and the integration of software frameworks inside the SSS. In fact, the current version of the SSS (which is coded in Java and available in our GitHub repository¹⁴) integrates the TagRec (Kowald, Kopeinik, & Lex, 2017) framework for the development of tag recommender algorithms.
4. Evaluation
In order to assess whether the SSS meets the requirements stated in the previous section, we developed a set of workplace
learning tools that submit their data to the SSS, as well as LA applications that consume this data. We then used the SSS in
authentic evaluation studies carried out in the Learning Layers project. Note that carrying out authentic studies that involve
different stakeholders (Dewan, 2001) is typically done to evaluate collaborative systems in education (e.g., Alario-Hoyos et al., 2013). Table 2 summarizes the SSS requirements and their relationships with the evaluation studies, the evaluation methods,
and the data sources.
4.1 Workplace Learning Tools Integrated into the SSS
We integrated into the SSS some domain-independent tools offered by third parties during the Learning Layers project. These
tools were extended to submit data to the SSS or to make use of some of its services. One example is the web browser Chrome¹⁵.
We developed a Chrome plugin (called Bookmarker (Ruiz-Calleja, Dennerlein, Tomberg, Ley, et al., 2015)) that allows us to
create, tag, and submit bookmarks to the SSS from a Chrome interface. We followed a similar strategy to integrate the blog
editor WordPress¹⁶ with an extension called Attacher (Ruiz-Calleja, Dennerlein, Tomberg, Ley, et al., 2015). Attacher allows
14https://github.com/learning-layers/SocialSemanticServer
15https://www.google.com/chrome/
16https://wordpress.com
blog editors to browse the resources (e.g., bookmarks) contained in the SSS from the blog editing interface, to access their corresponding URLs, and to cite them in the blog posts. Attacher also registers in the SSS the blog posts published. A different strategy was followed to integrate Evernote¹⁷ (Dennerlein, Kowald, et al., 2015). In this case, an SSS service was created to access the Evernote API in order to semantically integrate into the SSS the notes taken by its users, turning Evernote into an external data source from which data could be imported to the SSS.

Table 2. Relationship between SSS Requirements, Evaluation Studies, Evaluation Methods, and Data Sources

| Tag | Requirement | Evaluation studies | Evaluation methods | Data sources |
|---|---|---|---|---|
| REQ1 | Supports LA in a wide range of learning scenarios | TT-14, CC-15, MR-15, and RW-16 | Multiple authentic studies, mixed methods | SSS logs, interviews, questionnaires |
| REQ2 | Integrates tools used for learning in the workplace | TT-14, CC-15, MR-15, and RW-16 | Multiple authentic studies, feature analysis | Implementation of the SSS, SSS logs, LA applications |
| REQ3 | Collects and integrates data from different tools | TT-14, MR-15, and RW-16 | Multiple authentic studies | SSS logs |
| REQ4 | Enables the consumption of its data by LA applications | LA applications development | Feature analysis | LA applications |
| REQ5 | Focuses on knowledge creation metaphor | TT-14 and CC-15 | Multiple authentic studies, mixed methods, feature analysis | Implementation of the SSS, interviews, LA applications |
| REQ6 | Ensures that the SSS data is relevant for analyzing and supporting learning practices | TT-14, CC-15, MR-15, and RW-16 | Multiple authentic studies, mixed methods | SSS logs, interviews |
We also developed from scratch other learning tools that make use of the SSS. In these cases, the SSS was exploited as
a framework for the development of workplace learning tools that are tightly integrated into it (Dennerlein, Kowald, et al.,
2015). This approach was followed by KnowBrain (Dennerlein, Theiler, et al., 2015), a collaborative resource-hosting tool built
on top of the SSS. KnowBrain exploits the SSS services to allow users to manage, tag, and share resources, such as learning
documents or bookmarks. Similarly, a set of three tools was developed to support informal learning in the health care domain,
which are typically combined into one tool set that uses the SSS as a common back-end infrastructure. The first tool is Bits &
Pieces (Dennerlein et al., 2014), a visual categorization tool to enhance individual and collaborative sense-making processes; it
exploits the SSS to enable learners to define semantic and contextualized relationships between learning artifacts. The second
tool is Discussion Tool, a question and answer tool that offers an interface similar to a web forum (Dennerlein et al., 2014); it
exploits the SSS to relate the resources created (questions and replies) to each other and these resources to their authors and
readers. The third tool is Living Documents (Bachl, Zaki, Schmidt, & Kunzmann, 2014), a collaborative text editor; it exploits
the SSS to allow its users to access the resources registered in the infrastructure.
4.2 LA Applications That Exploit the SSS
In addition to the learning tools mentioned in the previous subsection, we developed a set of LA applications that exploit the
data collected by the SSS. Specifically, we developed a visual dashboard and five recommender services, which offer typical
functionalities of workplace LA (Ruiz-Calleja, Prieto, Ley, Rodríguez-Triana, & Dennerlein, 2017). The development of a larger set of applications that offer other workplace LA functionalities is part of our future work.
4.2.1 SSS Dashboard
The SSS Dashboard (Ruiz-Calleja, Dennerlein, Ley, & Lex, 2016) allows end users to visualize and browse the data collected
by the SSS. Specifically, the SSS Dashboard contains three visualizations: Filter Events, which represents the list of events collected in the AAN and allows users to filter them by their actors, by the artifacts involved, or by the actions done; Social Network, which represents a social network of the artifact-mediated relationships between the actors registered by the SSS; and Tag Cloud, which represents a tag cloud of the tags registered in the AAN.
These visual abstractions are suitable for learners and trainers to visualize “uptake relations,” which have been defined
as the smallest units of meaning-making activities in knowledge creation (Suthers & Dwyer, 2014). For example, Figure 6
shows how the Dashboard represents these meaning-making activities by building a social network that is generated from the
uptake relations arising from reciprocal contributions to a particular artifact. These uptake activities represent the use and
reuse of certain tags when collecting and annotating learning materials. The directional social network, therefore, represents
the subgroups of learners that have collaborated around the creation, use, and enrichment of concepts. It also represents the
subgroup of learners who have influenced others in this process.
The SSS Dashboard was implemented as a loosely coupled application that consumes the data offered by the SSS. It
simply gathers a .csv file from the Data Export Service API that contains the events registered in the AAN. This implementation decision is not as efficient as extending the Data Export Service with a CRUD REST API to offer abstractions suitable for the Dashboard. However, we reduced our development time because we reused the Data Export Service extension developed for the recommendation algorithms (see next subsection). The current version of the Dashboard was implemented using JavaScript and well-known libraries, such as d3¹⁸.
17https://evernote.com/

Table 3. Main Characteristics of the SSS Evaluation Studies

| Tag | Date | Location | Duration | Domain | Purpose | Users | Tools | LA applications |
|---|---|---|---|---|---|---|---|---|
| TT-14 | 2014 | Estonia | 5 months | Education | Reflect on how to introduce technology in education | 11 | Chrome, WordPress | SSS Dashboard |
| CC-15 | 2015 | Austria | 4 weeks | Research | Study state of the art | 18 | KnowBrain | 3Layers, MostPopular |
| MR-15 | 2015 | England | 2 months | Medical practice | Make sense of their working experience | 6 | Evernote, Bits & Pieces, Discussion Tool, Living Documents | MostPopular, CF recommendation algorithm |
| RW-16 | 2016 | Europe | 5 months | Research | Plan and evaluate interventions | 22 | Evernote, Bits & Pieces, Discussion Tool, Living Documents | MostPopular, CF recommendation algorithm |
4.2.2 Recommendation Algorithms
Other LA applications that exploit the data collected by the SSS are recommender services that take advantage of the TagRec
(Kowald, Kopeinik, & Lex, 2017) framework integrated into the SSS. As part of this evaluation, we developed a set of
recommendation algorithms that exploit the SSS data for different purposes: to recommend learning resources (Seitlinger et al.,
2015), to recommend tags to be assigned to learning resources, and to recommend people to interact with (Kopeinik, Kowald,
Hasani-Mavriqi, & Lex, 2017).
Several recommendation algorithms are currently available. These include tag and resource recommendations based on
recent or most popular resources (Kowald & Lex, 2016; Kowald, Seitlinger, Trattner, & Ley, 2014). Thus, these approaches
rank the resources in the learning system based on popularity or recency (i.e., time since last usage). Another recommendation
approach is based on collaborative filtering (CF) (Schafer, Frankowski, Herlocker, & Sen, 2007), which means that items of
similar users are recommended. In the future, we also plan to extend this CF-based approach with a content-based approach as
provided in the ScaR recommender framework (Lacic, Traub, Kowald, & Lex, 2015), which would allow us to incorporate
similarities not only between users but also between resources (e.g., based on description texts or even full contents).
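As a minimal sketch of the two frequency-based ideas, assuming a simple usage log, the ranking can be expressed as follows; this illustrates the principle only and is not the TagRec implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SimpleRanking {
    record Usage(String resource, long timestamp) {}

    // MostPopular: rank resources by how often they were used.
    static List<String> byPopularity(List<Usage> log, int k) {
        Map<String, Long> counts = log.stream()
            .collect(Collectors.groupingBy(Usage::resource, Collectors.counting()));
        return counts.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(k).map(Map.Entry::getKey).toList();
    }

    // MostRecent: rank resources by time since last usage.
    static List<String> byRecency(List<Usage> log, int k) {
        Map<String, Long> lastUse = log.stream()
            .collect(Collectors.toMap(Usage::resource, Usage::timestamp, Math::max));
        return lastUse.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(k).map(Map.Entry::getKey).toList();
    }
}
```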
An algorithm that is especially designed to follow knowledge creation theory is the 3Layers tag recommendation approach
(Seitlinger, Kowald, Trattner, & Ley, 2013; Kowald, Seitlinger, Kopeinik, Ley, & Trattner, 2013). It learns how a learner, or
group of learners, categorizes resources. It traces the tags used for particular resources; later on, it dynamically builds an
understanding of the semantic contexts over time. The algorithm then recommends tags that match a particular semantic context.
This mirrors a situation where the interpretation of a resource (its semantic context) emerges from past artifact-mediated activity,
rather than being predefined by an ontology of the domain.
The recommender services were implemented¹⁹ as a microservice in the SSS (the Recommendations service), which includes the TagRec framework. The Data Export Service was also extended to dynamically create a .csv file with the structure required by the framework, out of the data contained in the SSS database. This extension of the Data Export Service to offer the data as a .csv file did not take efficiency as a requirement. However, it managed to wrap the data contained in the SSS database to be offered to a service that takes a .csv file as an input without needing to modify the service. The Recommendations microservice also defines new methods in the SSS API. As an example, it includes a method that provides a set of recommended tags whose parameters are a user, a set of entities, a category, and a maximum number of tags to recommend. External applications (e.g., KnowBrain and Bits & Pieces) exploit this method to provide tag recommendations to their users.
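A hypothetical Java signature mirroring the API method just described might look as follows; the names and types are chosen only to reflect the parameters listed in the text.

```java
import java.util.List;
import java.util.Set;

// Hypothetical signature for the tag recommendation method described above:
// recommend tags for a user, given a set of entities, a category,
// and a maximum number of tags to return.
interface RecommendationsApi {
    List<String> recommendTags(String userId,
                               Set<String> entityIds,
                               String category,
                               int maxTags);
}
```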
4.3 Evaluation Studies
Due to the complexities of introducing and evaluating workplace learning technology, it was not feasible to evaluate all
requirements of the infrastructure in a single study. Hence, we conducted four evaluation studies to collect evidence for the
six requirements described in the previous section. These studies cover different domains, pursue different goals, and took place in different countries. We focused on uptake events and on how the infrastructure traced and supported them, as these are
clear indications of collaborative knowledge building. Table 3 summarizes the most important characteristics of the evaluation
studies, while further information is provided next.
18http://d3js.org/
19https://github.com/learning-layers/TagRec/
Figure 6. SSS Dashboard interface: social network visualization from the data collected in TT-14
4.3.1 Professional Teacher Training Course (TT-14)
The first evaluation study was done in a professional teacher training course (TT-14 in Tables 2 and 3) held at Tallinn University (Estonia) between September 2014 and January 2015. The study tested the SSS Dashboard in an authentic experience where learners used multiple workplace learning tools (REQ3 and REQ2) and evaluated the Dashboard's support for understanding the learning process (REQ1 and REQ6) according to the knowledge creation metaphor (REQ5). Here we focus on the role played by the SSS, while additional details are reported in Ruiz-Calleja et al. (2016).
Learning context and participants:
The main purpose of the course is to help teachers reflect on how to introduce new
technologies and pedagogical techniques into their classrooms. Therefore, they had to address real problems and opportunities
in their workplace. A group of 10 professional teachers (“learners” from now on) attended the course and were guided by a
trainer. As part of the course activities, the learners were asked to browse the web looking for resources that could extend or
contrast the information provided by the trainer. Using these resources, the learners were asked to write blog posts where they
reflected about their own teacher practice and how to introduce new technology in their own classrooms. Each learner wrote
10 blog posts, which they shared with the rest of the learners. They used Chrome as a web browser to search and discover
web resources that would help them design their teaching. These resources were submitted as bookmarks to the SSS with the
support of the Bookmarker extension. They used WordPress as a blog editor. The Attacher plugin was installed to facilitate
access to bookmarks published on the SSS.
SSS configuration:
A simple configuration of the SSS was required for this study because all the learning tools and LA
applications are loosely integrated. The Tag service was exploited by Attacher and Bookmarker to submit learning resources, attaching to them some contextual metadata. Attacher also uses the Search service to search for resources, and the SSS Dashboard uses the Data Export Service to extract the AAN from the SSS. Regarding the data model, a folksonomy of
tags was created during the study, but there was no need to define new semantic concepts. Hence, it was not necessary to extend
the core ontology of the SSS.
Data collection and analysis:
Once the training course was over, one of the learners and the trainer used the SSS Dashboard
to visualize the data collected by the SSS. Figure 6 represents the social network interface of the SSS Dashboard (the names of
the learners were changed). The functionality of the SSS Dashboard was explained to the trainer and the learner, and, after that,
six tasks were proposed for them to accomplish with the Dashboard. The tasks emerged from the learning procedures defined
by the three learning metaphors (e.g., “Detect which interests two learners have in common” or “Identify learning topics that
are surprising or unexpected to you”). Once they finished, a semi-structured interview was carried out to further understand
their opinion about the dashboard and how useful they found the graphical abstractions offered (e.g., “If the SSS Dashboard
was available during the course, what would you use it for?”). The interactions of the trainer and the learner with the SSS
Dashboard, their voice while using it, and the interviews were recorded. Later on, two researchers listened to the recordings
and extracted their most important aspects.
Resulting AAN:
Over the five months of the course, eight of the learners frequently used Chrome and WordPress, while the other two
hardly used these tools since they were not active in the course. The SSS coherently combined the data from Chrome and
WordPress, and a total of 320 events were registered. Out of these events, the SSS created an AAN that contained 11 actors, 53
resources, and 116 tags.
Main study results:
Both the trainer and the learner were able to accomplish the six tasks proposed using the SSS
Dashboard. Both of them agreed that by using it they could better understand the learning process. Interestingly, they
understood the collaboration among learners as a process mediated by artifacts (trainer while using the Dashboard: “as a
trainer I would be worried because these two learners did not reuse artifacts from others and they do not share information with
others”). During the interview, both of them agreed that the SSS Dashboard was a useful application. However, the trainer
found it interesting to understand the learning process (trainer when interviewed: “an average trainer would use it to understand
what is going on in the course”), while the learner reduced its potential use to identifying relevant learning artifacts or finding
potential collaborators (learner when interviewed: “I would use the dashboard to find out if there are learners that use the same
resources as me and to get an overview of the resources used by others”).
4.3.2 Collaborative Digital Curation (CC-15)
The second evaluation study was conducted as part of a collaborative digital curation scenario (CC-15) coordinated from Austria in September 2015. The aim of this study was to test the tag recommender services in a real work environment (REQ1 and REQ2) and to compare the support provided by the frequency-based MostPopular algorithm and the 3Layers algorithm (REQ5), as well as to evaluate the benefit offered to the participants (REQ6). Part of the results were published in Seitlinger et al. (2017).
Learning context and participants:
As part of their job, 18 professional researchers (“users” from now on) explored
the topic “smart workplaces” over four weeks in order to collaboratively write a state-of-the-art overview. These researchers
belonged to two universities from two European countries. They coordinated among themselves to collect and share topic-
relevant resources (i.e., web pages or text documents), sharing at least four every week. For this purpose they used KnowBrain,
which allowed them to upload, classify, tag, and share resources. Each resource was related to a category (out of six predefined
ones) that classified the resources according to their topics, and each was annotated with free tags. The users were supported
in the process of tagging resources by a tag recommender service integrated into KnowBrain, which suggested seven tags to
describe each resource depending on the category selected for the resource. These recommendations were extracted from two
algorithms that exploited the data from the SSS: either 3Layers or the frequency-based MostPopular. One of the assumptions of
the study was that 3Layers recommendations would be more suitable for creative group work because they would pick up and
feed back emergent topics more quickly than MostPopular recommendations.
SSS configuration:
KnowBrain is a learning application tightly integrated into the SSS whose complex functionality uses the following SSS services (Dennerlein, Kowald, et al., 2015): Recommendations, Gardening Knowledge Structures, Collections, Q&A, Search, Tag, Data Export Service, Activity, and Group Access Restriction. KnowBrain submits data to the SSS through the Collections, Q&A, and Tag services. This data is later on exploited by the recommender services (Recommendations service). The recommender services extract the AAN from the Data Export Service and offer the tag recommendations through an extension of the SSS API. Additionally, the SSS core ontology was extended to define two new types of entities (file and image). Two other concepts were introduced: collection, to aggregate entities, and friend, to allow explicit relationships between users.
Data collection and analysis:
We analyzed the KnowBrain and SSS logs to understand the tag recommendations and selections during the study. We measured the support of each recommendation algorithm by means of the F1 score (Powers, 2011), which is calculated based on the number of accepted recommendations of a tag and the number of times the tag was recommended to the users. We used the F1 score because it is a simple and commonly employed metric that considers both the precision and the recall of the recommendations, providing a more robust evaluation procedure. We also analyzed the timestamps of the log entries to understand how the tag assignments evolved in KnowBrain and which tags were taken up by which users.
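For reference, the F1 score is the harmonic mean of precision P and recall R; mapping P and R to the counts described above (accepted recommendations over recommendations shown, and accepted recommendations over all tag assignments) is our reading of the procedure.

```latex
F_1 = 2 \cdot \frac{P \cdot R}{P + R}
```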
Resulting AAN:
Eighteen users participated in the study. A total of 2,654 user events were registered, out of which the
SSS created an AAN that contained 18 actors, 122 resources, 263 unique tags, 701 relationships between tags and resources,
and 6 categories. The recommender services made extensive use of this AAN; there were 183 recommendation events with a
total of 1,281 tags recommended.
Main study results:
The score obtained by the 3Layers algorithm (F1-score = 0.34) was higher than the score obtained by
the frequency-based MostPopular algorithm (F1-score = 0.27). These results show that the recommendations offered by the 3Layers algorithm had a greater influence on the users' behaviour than those of the popularity-based algorithm. We assume that this was the case because 3Layers better reflected the shared interpretations that emerged in the group as a result of their creative group work. We could also notice its impact by the capability of the 3Layers algorithm to raise awareness of topics once they are introduced into the users' community. For example, the tags wellbeing and social involvement were marginal at the time they were introduced. These tags represented new concepts introduced in the learning community and were recommended by the 3Layers algorithm (and not by the MostPopular algorithm), causing them to be quickly taken up by other users.
4.3.3 Meaning Making for Health Care Professionals (MR-15)
The third evaluation study was part of a three-year design-based research project that aimed at developing workplace learning
tools for interdisciplinary health care professionals. It was conducted during the participants’ daily work from October to November 2015. The aim of the study was to assess whether the SSS can collect and integrate data from multiple tools (REQ3) that are employed to support informal learning while working (REQ1 and REQ2). We also assessed the benefit of the collected data to the participants (REQ6).
Learning context and participants:
The study involved six English health care professionals (two doctors, one practice
manager, one office supervisor, one administrator, and one IT support manager). During the study, the health care professionals
used a tool set that included four tools integrated into the SSS. These tools supported the collection, categorization, and
formalization of informal learning experiences: Evernote helped record informal learning experiences by taking multimedia
notes; Bits & Pieces facilitated making sense of the notes taken with Evernote; Discussion Tool supported parallel discussions
and promoted professional engagement in sense-making processes; and Living Documents supported the formalization of
the conclusions. In addition, Bits & Pieces exploited two of the recommender algorithms integrated into the SSS: a time-
based recommender, which recommended tags to annotate learning artifacts, and a collaborative filtering algorithm, which
recommended resources (e.g., Evernote notes) created by other professionals.
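To give an impression of how the time-based recommender can operate, the sketch below (Python) ranks a professional’s previously used tags by a recency-weighted frequency in the spirit of base-level activation; the decay value, data layout, and function name are our assumptions, not the SSS implementation.

    import time

    # Minimal sketch under assumptions: the tag history is a list of
    # (tag, timestamp) pairs and a power-law decay (d = 0.5) is applied,
    # so recently and frequently used tags rank highest.
    def time_based_tags(history, now=None, d=0.5, k=7):
        now = now if now is not None else time.time()
        activation = {}
        for tag, ts in history:
            age = max(now - ts, 1.0)          # seconds since this tag assignment
            activation[tag] = activation.get(tag, 0.0) + age ** (-d)
        return sorted(activation, key=activation.get, reverse=True)[:k]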
SSS configuration: In this study, the SSS configuration is more complex because four learning tools are involved. For the tight integration of Bits & Pieces, two new services (Learning Episode and Category) were needed. These services categorize and contextualize learning resources in an abstraction called Learning Episode, defined as a new entity type in the SSS ontology. Discussion Tool is also tightly integrated into the SSS. It exploits services such as Q&A, Entity, and Tag. Although no new service was needed for this integration, the SSS ontology was extended with the entity type Discussion and the relationship Like. The other two applications simply registered in the SSS the documents managed by their users. For this purpose, two new services were required: Data Import, to access the Evernote API, and Living Document, to retrieve documents written with Living Documents. These services also defined new entity types: Mail, Evernote note, and LivingDoc.
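For the two loosely integrated applications, the integration essentially amounts to registering the imported documents as SSS entities. The sketch below (Python) illustrates the idea with a hypothetical REST call; the endpoint path, payload fields, and authorization header are illustrative assumptions rather than the actual SSS API.

    import requests  # third-party HTTP client

    # Hypothetical registration of an imported Evernote note as an SSS entity;
    # endpoint, fields, and auth are assumed for illustration.
    def register_note(base_url, token, note):
        payload = {
            "type": "Evernote note",       # entity type introduced for this study
            "label": note["title"],
            "author": note["owner"],
            "timestamp": note["updated"],  # last modification time of the note
        }
        response = requests.post(f"{base_url}/entities", json=payload,
                                 headers={"Authorization": f"Bearer {token}"})
        response.raise_for_status()
        return response.json()             # SSS-side identifier of the new entity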
Data collection and analysis:
We conducted questionnaires and semi-structured interviews with the professionals to evalu-
ate the impact of the recommender services on supporting informal learning. The questionnaire addressed the appropriateness
of functionalities and asked for their evaluation in the form of open questions; the interviews focused on how the tools were
used and on their impact on learning and working. We analyzed the data following Mayring’s inductive qualitative content
analysis (Mayring, 2014): first, we transcribed the interviews; second, we paraphrased the contents; and third, we extracted
categories to iteratively discover structure such as use cases, working functionalities, and corresponding reasoning in the data.
We also analyzed the SSS logs to describe the resulting AAN and to quantify the tag recommendations.
Resulting AAN:
The SSS registered 8,345 user events and 151 resources imported from Evernote and Bits & Pieces. The
resulting AAN contained 6 actors, 306 resources (145 bits used in episodes, 29 discussions related to 36 discussion entries
from Discussion Tool, 13 documents from Living Documents, and 48 learning episodes from Bits & Pieces), 31 tags, and 71
categories. Based on this AAN, tag recommendations were computed, which led to 12 accepted recommendations.
Main study results:
In the analysis of the qualitative data, we found that all users followed the informal learning process of tracing experiences, organizing and discussing them, and finally transforming outcomes into a shared report. The support of the SSS as an infrastructure to share, categorize, and evolve annotated learning resources among four different tools was key for the learning process. In addition, the SSS integrated the data collected from the four tools and offered it to the recommender algorithms. However, these recommender algorithms did not play a major role in this particular scenario. One of the reasons is that resource recommendations happened at a personal level: users added bits received directly from colleagues rather than taking up the ones provided by the system. The infrequent use of the tagging functions and the limited number of tags registered in the system lowered the impact of the tag recommendations. We attribute this infrequent use to the users’ lack of experience with similar tools and functionalities, which obscured the added value of tagging resources. However, some of the users appreciated the tags when searching for resources (health care professional: “I use tags and things like that because I think obviously if [. . .] everybody’s got a lot more diverse tags it would be very useful to search on; I like that”). Despite this, the professionals stated in the interviews that the recommendation of tags eased the tagging process and that they could see how tags might help them arrive at a professionally agreed-upon vocabulary.
4.3.4 Collaborative Planning of Research Work (RW-16)
The fourth evaluation study was also part of the three-year design-based research project reported in MR-15. In this case, the
SSS supported a group of researchers from March to July 2016 to coordinate their work related to the planning of research
interventions, evaluation of the data gathered, and reflection on the results obtained. As in the MR-15 pilot, we assessed whether the SSS could collect and integrate data from multiple tools (REQ3) used to support informal learning while working (REQ1 and REQ2). We also evaluated the impact of the data collected to support the users’ learning practice (REQ6).
Learning context and participants:
This study involved 22 researchers belonging to 10 different institutions and collabo-
rating on a European project. During the study, the researchers used the SSS and the same tool set described in the MR-15 pilot
study. They used Evernote, Bits & Pieces, Discussion Tool, and Living Documents to communicate and to coordinate their
activities. These tools supported the individual and collaborative collection, sense-making, sharing, and formalization of ideas.
SSS configuration: The SSS configuration is exactly the same as in the MR-15 evaluation study.
Data collection and analysis:
As was the case in the MR-15 pilot, we quantitatively analyzed the SSS logs to describe the
resulting AAN and to quantify the tag recommendations. We also analyzed the log files to detect uptake events, where a tag introduced by one user was later reused by another. Thus, we assessed how the recommender services supported collaborative learning.
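A minimal sketch of this uptake detection (Python; the log record layout is an assumption) treats a tag’s first occurrence as its introduction and counts any later assignment of the same tag by a different user as an uptake event.

    # Sketch: each log record is a (timestamp, user, tag) triple.
    def detect_uptakes(records):
        introduced_by = {}  # tag -> user who first assigned it
        uptakes = []        # (timestamp, tag, introducing_user, adopting_user)
        for ts, user, tag in sorted(records):
            if tag not in introduced_by:
                introduced_by[tag] = user
            elif user != introduced_by[tag]:
                uptakes.append((ts, tag, introduced_by[tag], user))
        return uptakes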
Resulting AAN:
The SSS registered 182,911 user events and 293 resources imported from Evernote and Bits & Pieces.
The resulting AAN contained 40 actors, 1,056 resources (259 bits used in episodes, 75 discussions related to 96 discussion
entries from Discussion Tool, 49 documents from Living Documents, and 29 learning episodes from Bits & Pieces), 688 tag
assignments of a total of 301 distinct tags, and 132 categories. Based on this AAN, tag recommendations were computed, which led to 102 accepted recommendations.
Main study results:
The log analysis shows that the researchers followed a similar learning process as in the MR-15
pilot study: tracing experiences, making sense of and discussing them, and transforming outcomes into reports. Again, the
SSS infrastructure provided a key support to this learning process as it allowed learners to share data and documents across
the applications. In this pilot study, the learners made extensive use of tags. These tagging events were facilitated by the
recommender service, which also promoted the uptake of tags. In fact, 88 tag-uptake events were registered involving 67 unique
tags. As we have shown in previous studies, these uptake events can facilitate the establishment of shared understanding in the
sense-making process (Dennerlein, Seitlinger, Lex, & Ley, 2016; Ley & Seitlinger, 2015).
4.4 Discussion of the Evaluation Findings
4.4.1 Accomplishment of the SSS Requirements
Supports informal and formal scenarios from different domains (REQ1):
The diversity of learning and LA applications
integrated into the SSS allows it to support LA in a wide range of workplace learning scenarios and contexts. In fact, our
evaluation studies range from formal scenarios in a professional training course (TT-14) to completely informal others (MR-15
and RW-16) where participants had the opportunity to use the learning tools in authentic working conditions. The flexibility
of the SSS — promoted by its microservice-based architecture and its data model — was key to supporting these scenarios,
because different services and ontology extensions were required.
Integrates tools used for learning in the workplace (REQ2):
The SSS collects and integrates data from workplace learning tools that differ both in their functional and in their technical characteristics. These tools include general-purpose and independent ones that exploit the Activity Service (e.g., WordPress (TT-14)), those that require a new service for collecting their data (e.g., Evernote (MR-15 and RW-16)), and purpose-specific tools. The different integration strategies — again promoted by the microservice-based architecture of the SSS — and the flexibility of the SSS data model made it possible to collect data from a wide variety of tools.
Collects and integrates data (REQ3):
The evaluation studies TT-14, MR-15, and RW-16 included learning processes
where the SSS collected and coherently integrated data from multiple learning tools to offer it back to LA applications. The data model managed by the SSS played a major role in the data integration: the AAN is a flexible structure that relates different concepts, while the SSS core ontology offers a high-level abstraction, common to many learning tools, that describes the entities of the AAN.
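A highly simplified rendering of this data model (Python; the vocabulary is our own minimal assumption, not the actual SSS core ontology) is a typed graph whose nodes are actors and artifacts and whose edges are semantically labelled interactions.

    from dataclasses import dataclass, field

    # Minimal AAN sketch; the real SSS ontology defines richer entity types
    # and relationship semantics than this illustration.
    @dataclass
    class AAN:
        nodes: dict = field(default_factory=dict)  # id -> type, e.g., "actor"
        edges: list = field(default_factory=list)  # (source, relation, target)

        def add(self, node_id, node_type):
            self.nodes[node_id] = node_type

        def relate(self, source, relation, target):
            self.edges.append((source, relation, target))

    aan = AAN()
    aan.add("alice", "actor")
    aan.add("note-1", "Evernote note")
    aan.relate("alice", "tagged", "note-1")        # folksonomy-style annotation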
Supports LA applications (REQ4):
The development of LA applications showed how the data collected by the SSS can
be exploited by LA applications. We exemplified this by integrating a visual dashboard and a set of recommender algorithms.
These LA applications not only offer different functionalities but also follow different approaches to consuming the data
from the SSS: while the SSS Dashboard was implemented as an external application that simply retrieves the SSS data from the Data Export Service API, the recommender algorithms were implemented as microservices that extend the SSS functionality and offer a new data API that other applications (KnowBrain and Bits & Pieces) exploit. Again, the SSS flexibility
was key to enabling several integration strategies for its data consumption. It is also noteworthy that, due to the reduced number
of participants, these studies do not assess whether the LA applications could exploit larger amounts of data collected by the
SSS. However, a previous study (Kowald et al., 2015) shows how the 3Layers and MostPopular algorithms could satisfactorily
exploit data from 10,000 users imported to the SSS from several social tagging systems.
Supports the knowledge creation metaphor (REQ5):
The studies we conducted offer clear evidence of the SSS’s focus on the knowledge creation metaphor in supporting workplace LA. In TT-14, the SSS Dashboard supported learners and
trainers to understand the learning process from the knowledge creation point of view. Both types of users identified how
artifacts were co-developed and how knowledge emerged inside the community. We obtained the first evidence that trainers
appreciated the visualization of shared artifacts and how learners contributed to these artifacts.
CC-15 showed that the 3Layers recommender algorithm provided more accurate results than the MostPopular algorithm, which simply offered the most popular tags in a given period. 3Layers also facilitated the introduction of new concepts into the community of learners because it also recommended newly introduced tags even if they were not the most common ones.
3Layers recommendations therefore better reflected the situation of creative group work (Seitlinger et al., 2017), which is
evidence of the algorithm’s foundation in knowledge creation theories.
In MR-15 and RW-16, the infrastructure supported knowledge maturation as it helped learners move through a collaborative
learning process across several tools. Learners started with the collection of ideas, then could share them, and finally were able
to formalize them. The SSS, and more specifically its data model, played a central role in making objects of activity available
across tools and in allowing the tracing of interactions with these objects: first, the explicit relationships between the entities
enabled the detection of uptake events and co-creation processes; second, the flexibility of the AAN allowed new artifacts and tags to be introduced and the participants’ shared categories to be learned dynamically over time. The limited number of participants in
the evaluation studies limits the generalizability of these findings to larger communities of learners; it would be interesting to
assess the support provided by the SSS to knowledge building in organizations, or even in cross-organizational networks, where
innovation is more likely to happen.
Provides data that is relevant for analyzing and supporting learning practices (REQ6):
Finally, we exemplified the
impact of the SSS on workplace learning practices. In TT-14, the data facilitated the comprehension of the learning process by
a trainer and a learner. Since they only used the SSS Dashboard once the course was over, it did not have a direct influence on
this process. However, they understood the Dashboard as a useful application, and the trainer explicitly said that she would have
intervened if she had seen it during the course. In CC-15, the SSS data influenced the users when tagging resources, helping
them to co-create conceptual artifacts and to introduce new ones into the learning community. The tag recommender 3Layers, grounded in knowledge creation theory, achieved higher acceptance than the purely popularity-based MostPopular. In MR-15, the SSS supported a three-year design-based research project that resulted in new tools and practices for health care professionals.
The SSS allowed individuals to collect and formalize learning experiences, share them, and work collaboratively on them. Due
to the limited use of the tagging system and the resource recommendations, the SSS data had little impact on the users in this
study. On the other hand, RW-16 showed how an informal learning process supported by a similar technical configuration was
enhanced by the tag recommender services. However, it remains to be seen whether recommender services are an effective way
to exploit the data. Another open question is how to increase their impact on workplace learning processes, especially on the
more informal ones that entail a larger number of learners. In this regard, we are currently starting a large-scale study on the
use of the recommender services in the health care domain.
4.4.2 An AAN to Support the Knowledge Creation Metaphor
We started this paper from the assumption that a technical infrastructure that supports LA needs to start from an understanding
of what learning is, how it takes place, and how it can be supported. The SSS has its roots in the conception of learning
as a knowledge-building and maturing process, and it is therefore rooted in an understanding of knowledge as a dynamic
and social process of co-construction and creation (Ravenscroft, Schmidt, Cook, & Bradley, 2012). The evaluation studies
that we have conducted show several knowledge creation and maturing activities that have been traced and supported. Our
experience also shows that the knowledge creation metaphor provides a special challenge for the design of LA infrastructures.
Such infrastructures should offer enough flexibility—both in their software architecture and in their data model—to allow for
knowledge to emerge from individual and collaborative activities.
Uptake has been described as the smallest unit in collaborative learning that can be observed as an interaction (Suthers &
Dwyer, 2014). In our evaluation studies, uptake occurred at several places. In RW-16, for example, several learners contributed
ideas to shared collections. Reuse of tags occurred in CC-15 and RW-16. A powerful analytic framework has recently been
created that allows interactions to be analyzed in “contingency graphs” (Suthers & Dwyer, 2014). With the uptake framework,
a big step has been taken toward making collaborative learning activity traceable for LA. While the uptake framework exists as
an analytical tool that allows post hoc analysis of learning data, with the SSS we provide the technical means to trace and represent uptake events in live systems and to feed this data back to learners and trainers.
Once the infrastructure became focused on challenging scenarios of knowledge creation, we found it easier to support use
cases that are more in line with other metaphors. For example, the SSS also includes recommender services that are more
related to knowledge acquisition, such as time-based recommenders, or participation, such as collaborative filtering. Another
example is the exploitation of SSS data to create a social network graph representation that shows community membership and
centrality of members (Ruiz-Calleja, Dennerlein, Tomberg, Pata, et al., 2015), following the participation metaphor.
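As an illustration of the latter, a few lines suffice to derive such a participation-oriented view from the AAN; linking two actors whenever they interacted with the same artifact is our simplification for the example, not the cited approach itself.

    import networkx as nx  # third-party graph library

    # Sketch: build an actor-actor network from (actor, artifact) interactions
    # and use degree centrality as a simple indicator of participation.
    def actor_centrality(interactions):
        by_artifact = {}
        for actor, artifact in interactions:
            by_artifact.setdefault(artifact, set()).add(actor)
        graph = nx.Graph()
        for actors in by_artifact.values():
            actors = sorted(actors)
            for i, a in enumerate(actors):         # link co-acting actors
                for b in actors[i + 1:]:
                    graph.add_edge(a, b)
        return nx.degree_centrality(graph)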
5. Conclusions and Future Work
This paper proposes to exploit the SSS as an infrastructure for workplace LA that follows the knowledge creation metaphor. For
this purpose, we first derived the requirements for an infrastructure that complies with knowledge creation theories. We then
designed and developed the SSS as a microservice-based infrastructure whose data model is based on a semantically enriched
and context-aware AAN. Its evaluation entailed the integration of several learning tools that submit their data to the SSS and
some LA applications — a visual dashboard and a set of recommender services — that exploit its data. This evaluation also
comprised four authentic workplace learning scenarios where 57 participants were satisfactorily supported by the SSS.
Two characteristics of the SSS are key for its support of workplace LA. First, the flexibility of its software architecture allows
it to adapt to a wide range of learning scenarios by configuring the microservices that are deployed. It also allows different
integration strategies for learning tools and LA applications. Thus, it is possible to collect data from external tools and to develop
new services to provide additional functionality (e.g., tag recommendation) that can be exploited by external LA applications.
The second key characteristic is the SSS’s data model. The AAN makes it possible to describe contextualized uptake events. Furthermore, the entities of the AAN can be described with different degrees of formality, allowing both the semantic integration of data, using ontologies, and the description of emerging knowledge, using folksonomies.
Our research suffers from the limitations that are common when addressing learning at a workplace. Because workers
learn in a self-directed way, any technology that supports this learning needs to seamlessly fit existing practices that may vary
dramatically in different domains and organizations. Hence, sample sizes of studies are usually small, and it is often difficult
to assess how technology might be adopted in large networks of learners. We addressed these challenges by employing an
iterative, design-based research strategy that collects evidence over a number of iterations and across different contexts. In
this paper, we reported on four such studies. Together with the previous empirical studies that we have also mentioned, these
should allow us to build up converging evidence of the utility of our general approach over time, although the scalability of our proposal remains to be assessed.
Our future work will focus on further exploiting the SSS for workplace LA in real scenarios. We are currently working on a
large-scale study that includes the use of recommenders in the health care domain. We will also promote the development of
a wider collection of LA applications that make use of the SSS data. We plan to encourage the community of learning tool
developers to adopt the SSS by making it compliant with xAPI. xAPI could be supported by developing a new Metadata service and a new Activity service. The development of such services, and their corresponding APIs, would be facilitated if we used an LRS (Learning Record Store) with an open licence as a dataset. Finally, we will explore the potential use of the SSS data to support additional LA services, such as the automatic detection of learning needs in communities of workers.
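To give an impression of what such xAPI compliance would involve, the sketch below (Python) maps a hypothetical SSS activity event onto the actor-verb-object structure of an xAPI statement; the host, verb IRI, and concrete field values are illustrative assumptions.

    # Illustrative mapping of an SSS activity event to an xAPI statement;
    # the host, verb IRI, and event fields are assumptions for the example.
    def to_xapi(event):
        return {
            "actor": {"account": {"homePage": "https://sss.example.org",
                                  "name": event["user"]}},
            "verb": {"id": "http://adlnet.gov/expapi/verbs/shared",
                     "display": {"en-US": "shared"}},
            "object": {"id": event["artifact_uri"],
                       "definition": {"name": {"en-US": event["artifact_label"]}}},
            "timestamp": event["timestamp"],
        }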
Declaration of Conflicting Interest
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
This research has been partially funded by the FP7 ICT Workprogramme of the European Community: “Learning Layers —
Scaling up Technologies for Informal Learning in SME Clusters” (grant no. 318209) and by the European Union’s Horizon
2020 research and innovation program: “CEITER — Cross-Border Educational Innovation through Technology-Enhanced
Research” (grant no. 669074) and “AFEL — Analytics for Everyday Learning” (grant no. 687916). It has also been partially
funded by the Know-Center GmbH Graz (Austrian FFG COMET Program).
References
Alario-Hoyos, C., Bote-Lorenzo, M., Gómez-Sánchez, E., Asensio-Pérez, J., Vega-Gorgojo, G., & Ruiz-Calleja, A. (2013). GLUE! An architecture for the integration of external tools in virtual learning environments. Computers & Education, 60(1), 122–137. https://dx.doi.org/10.1016/j.compedu.2012.08.010
Bachl, M., Zaki, D., Schmidt, A. P., & Kunzmann, C. (2014). Living documents as a collaboration and knowledge maturing plat-
form. In Proceedings of the 14th International Conference on Knowledge Technologies and Data-Driven Business (i-KNOW
14), 16–19 September 2014, Graz, Austria (pp. 30:1–30:4). New York: ACM.
https://dx.doi.org/10.1145/2637748.2638437
Bakharia, A., Kitto, K., Pardo, A., Gašević, D., & Dawson, S. (2016). Recipe for success: Lessons learnt from using xAPI within the Connected Learning Analytics toolkit. In Proceedings of the 6th International Conference on Learning Analytics and Knowledge (LAK ’16), 25–29 April 2016, Edinburgh, UK (pp. 378–382). New York: ACM. https://dx.doi.org/10.1145/2883851.2883882
Berendt, B., Vuorikari, R., Littlejohn, A., & Margaryan, A. (2014). Learning analytics and their application in technology-
enhanced professional learning. In A. Littlejohn & A. Margaryan (Eds.), Technology-Enhanced Professional Learning:
Processes, Practices and Tools (pp. 144–157). Abingdon-on-Thames, UK: Routledge.
Buckingham-Shum, S., & Ferguson, R. (2012). Social learning analytics. Journal of Educational Technology & Society,15(3),
3–26.
Cardinali, F. (2015). Towards Learning Analytics Interoperability at the Workplace (LAW Pro-
file). Learning Analytics Review no. 5 (Tech. Rep. No. ISSN: 2057-7494). Retrieved from
http://www.laceproject.eu/learning-analytics-review/law-interoperability/
de Laat, M., & Schreurs, B. (2013). Visualizing informal professional development networks: Building a case for learning analytics in the workplace. American Behavioral Scientist, 57(10), 1421–1438. https://dx.doi.org/10.1177/0002764213479364
Dennerlein, S., Kowald, D., Lex, E., Theiler, D., Lacic, E., Ley, T., . .. Ruiz-Calleja, A. (2015). The Social Semantic Server: A
flexible framework to support informal learning at the workplace. In Proceedings of the 15th International Conference on
Knowledge Technologies and Data-Driven Business (I-KNOW 2015), 21–23 October 2015, Graz, Austria (pp. 26:1–26:8).
New York: ACM. https://dx.doi.org/10.1145/2809563.2809614
Dennerlein, S., Rella, M., Tomberg, V., Theiler, D., Treasure-Jones, T., Kerr, M., .. . Trattner, C. (2014). Making sense of
Bits and Pieces: A sensemaking tool for informal workplace learning. In Proceedings of the 9th European Conference on
Technology Enhanced Learning (EC-TEL 2014), 16–19 September 2014, Graz, Austria (pp. 391–397). Lecture Notes in
Computer Science, Springer. https://dx.doi.org/10.1007/978-3-319-11200-8_31
Dennerlein, S., Seitlinger, P., Lex, E., & Ley, T. (2016). Take up my tags: Exploring benefits of meaning making in a
collaborative learning task at the workplace. In Proceedings of the European Conference on Technology Enhanced Learning
(EC-TEL 2016), 13–16 September 2016, Lyon, France (pp. 377–383). Lecture Notes in Computer Science, Springer.
Dennerlein, S., Theiler, D., Marton, P., Santos Rodriguez, P., Cook, J., Lindstaedt, S., & Lex, E. (2015). KnowBrain: An online
social knowledge repository for informal workplace learning. In Proceedings of the European Conference on Technology
Enhanced Learning (EC-TEL 2015), 15–17 September 2015, Toledo, Spain (pp. 509–512). Lecture Notes in Computer
Science, Springer. https://dx.doi.org/10.1007/978-3-319-24258-3_48
Derntl, M., Günnemann, N., & Klamma, R. (2013). A dynamic topic model of learning analytics research. In Proceedings of the LAK Data Challenge, Held at the 3rd International Conference on Learning Analytics and Knowledge (LAK ’13), 8–12 April 2013, Leuven, Belgium (pp. 1–5). New York: ACM.
Dewan, P. (2001). An integrated approach to designing and evaluating collaborative applications and infrastructures. Computer Supported Cooperative Work, 10(1), 75–111. https://dx.doi.org/10.1023/A:1011266229161
Duval, E. (2011). Attention please! Learning analytics for visualization and recommendation. In Proceedings of the 1st
International Conference on Learning Analytics and Knowledge (LAK ’11), 27 February–1 March 2011, Banff, AB, Canada
(pp. 9–17). New York: ACM.
Erl, T. (2005). Service-Oriented Architecture: Concepts, Technology and Design. Upper Saddle River, NJ, USA: Prentice
Hall PTR.
Eraut, M. (2004). Informal learning in the workplace. Studies in Continuing Education,26(2), 247–273.
https://dx.doi.org/10.1080/158037042000225245
Fidalgo-Blanco, A., Sein-Echaluce, M. L., Garcia-Penalvo, F. J., & Conde, M. A. (2015). Using learning analytics to improve
teamwork assessment. Computers in Human Behavior,47(1), 149–156. https://dx.doi.org/10.1016/j.chb.2014.11.050
Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64–71. https://dx.doi.org/10.1007/s11528-014-0822-x
Kaschig, A., Maier, R., Sandow, A., Brown, A., Ley, T., Magenheim, J., . . . Seitlinger, P. (2012). Technological and
organizational arrangements sparking effects on individual, community and organizational learning. In Proceedings of
the 7th European Conference on Technology-Enhanced Learning (EC-TEL 2012), 18–21 September 2012, Saarbrücken, Germany (pp. 180–193). Lecture Notes in Computer Science, Springer.
Kitto, K., Lupton, M., Davis, K., & Waters, Z. (2017). Designing for student-facing learning analytics. Australasian Journal of
Educational Technology, 33(5), 152–168. https://dx.doi.org/10.14742/ajet.3607
Klamma, R. (2013). Community learning analytics — Challenges and opportunities. In Proceedings of the 12th Interna-
tional Conference on Web-Based Learning (ICWL 2013), 6–9 October 2013, Kenting, Taiwan (pp. 284–293). Springer.
https://dx.doi.org/10.1007/978-3-642-41175-5_29
Kooken, J., Ley, T., & De Hoog, R. (2007). How do people learn at the workplace? Investigating four workplace learning assump-
tions. In Proceedings of the 2nd European Conference on Technology Enhanced Learning (EC-TEL 2007), 17–20 September
2007, Crete, Greece (pp. 158–171). Heidelberg, Germany: Springer. https://dx.doi.org/10.1007/978-3-540-75195-3_12
Kopeinik, S., Kowald, D., Hasani-Mavriqi, I., & Lex, E. (2017). Improving collaborative filtering using a cognitive model of
human category learning. The Journal of Web Science,2(4), 45–61. https://dx.doi.org/10.1561/106.00000007
Kowald, D., Kopeinik, S., & Lex, E. (2017). The TagRec framework as a toolkit for the development of tag-based recommender
systems. In Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization, 9–12 July 2017,
Bratislava, Slovakia (pp. 23–28). New York: ACM. https://dx.doi.org/10.1145/3099023.3099069
Kowald, D., Kopeinik, S., Seitlinger, P., Ley, T., Albert, D., & Trattner, C. (2015). Refining frequency-based tag reuse
predictions by means of time and semantic context. In Mining, Modeling, and Recommending “Things” in Social Media (pp.
55–74). Springer.
Kowald, D., & Lex, E. (2016). The influence of frequency, recency and semantic context on the reuse of tags in social tagging
systems. In Proceedings of the 27th ACM Conference on Hypertext and Social Media, 10–13 July 2016, Halifax, NS, Canada
(pp. 237–242). New York: ACM. https://dx.doi.org/10.1145/2914586.2914617
Kowald, D., Seitlinger, P., Kopeinik, S., Ley, T., & Trattner, C. (2013). Forgetting the words but remembering the meaning:
Modeling forgetting in a verbal and semantic tag recommender. In Mining, Modeling, and Recommending “Things” in
Social Media (pp. 75–95). Springer.
Kowald, D., Seitlinger, P., Trattner, C., & Ley, T. (2014). Long time no see: The probability of reusing tags as a function of
frequency and recency. In Proceedings of the 23rd International Conference on World Wide Web (WWWC 2014), 7–11 April
2014, Seoul, Korea (pp. 463–468). New York: ACM. https://dx.doi.org/10.1145/2567948.2576934
Kravcik, M., & Klamma, R. (2012). Supporting self-regulation by personal learning environments. In Proceedings of the 12th
International Conference on Advanced Learning Technologies (ICALT 2012), 7–10 July 2012, Rome, Italy (pp. 710–711).
IEEE. https://dx.doi.org/10.1109/ICALT.2012.192
Krull, E., & Leijen, A. (2015). Perspectives for defining student teacher performance-based teaching skills indicators to provide formative feedback through learning analytics. Creative Education, 6(10), 914–926. https://dx.doi.org/10.4236/ce.2015.610093
Lacic, E., Traub, M., Kowald, D., & Lex, E. (2015). ScaR: Towards a real-time recommender frame-
work following the microservices architecture. In Proceedings of the Workshop on Large Scale Recommender
Systems (LSRS2015) at RecSys 2015, 16–20 September 2015, Vienna, Austria (Vol. 15). Retrieved from
https://pure.tugraz.at/ws/portalfiles/portal/3521071/RecSysLSRS2015.pdf
Ley, T., Cook, J., Dennerlein, S., Kravcik, M., Kunzmann, C., Pata, K., . .. Trattner, C. (2014). Scaling informal learning at
the workplace: A model and four designs from a large-scale design-based research effort. British Journal of Educational
Technology,45(6), 1036–1048. https://dx.doi.org/10.1111/bjet.12197
Ley, T., & Kump, B. (2013). Which user interactions predict levels of expertise in work-integrated learning? In Proceedings of
the 8th European Conference on Technology-Enhanced Learning (EC-TEL 2013), 17–21 September 2013, Paphos, Cyprus
(pp. 178–190). Lecture Notes in Computer Science, Springer. https://dx.doi.org/10.1007/978-3-642-40814-4_15
Ley, T., & Seitlinger, P. (2015). Dynamics of human categorization in a collaborative tagging system: How so-
cial processes of semantic stabilization shape individual sensemaking. Computers in Human Behavior,51, 140–151.
https://dx.doi.org/10.1016/j.chb.2015.04.053
Mayring, P. (2014). Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software Solution (Tech.
Rep.). Klagenfurt, Austria: SSOAR.
Newman, S. (2015). Building Microservices: Designing Fine-Grained Systems. Sebastopol, CA, USA: O’Reilly Media.
Niemann, K., & Wolpers, M. (2014). Usage-based clustering of learning resources to improve recommendations. In Proceedings
of the 9th European Conference on Technology-Enhanced Learning (EC-TEL 2014), 16–19 September 2014, Graz, Austria
(pp. 317–330). Lecture Notes in Computer Science, Springer. https://dx.doi.org/10.1007/978-3-319-11200-8_24
Nonaka, I. (1994). A dynamic theory of organizational knowledge creation. Organization Science,5(1), 14–37.
Nussbaumer, A., Berthold, M., Dahrendorf, D., Schmitz, H., Kravcik, M., & Albert, D. (2012). A mashup recommender
for creating personal learning environments. In Proceedings of the 11th International Conference on Web-Based Learning
(ICWL 2012), 2–4 September 2012, Sinaia, Romania (pp. 79–88). Lecture Notes in Computer Science, Springer. https://dx.doi.org/10.1007/978-3-642-33642-3_9
Paavola, S., & Hakkarainen, K. (2005). The knowledge creation metaphor — An emergent epistemological approach to
learning. Science & Education, 14(6), 535–557. https://dx.doi.org/10.1007/s11191-004-5157-0
Peschl, M. F., & Fundneider, T. (2014). Designing and enabling spaces for collaborative knowledge creation and innovation:
From managing to enabling innovation as socio-epistemological technology. Computers in Human Behavior,37, 346–359.
https://dx.doi.org/10.1016/j.chb.2012.05.027
Powers, D. M. W. (2011). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. Journal of Machine Learning Technologies, 2(1), 37–63.
Rajagopal, K., van Bruggen, J. M., & Sloep, P. B. (2017). Recommending peers for learning: Matching on dis-
similarity in interpretations to provoke breakdown. British Journal of Educational Technology,48(2), 385–406.
https://dx.doi.org/10.1111/bjet.12366
Ravenscroft, A., Schmidt, A., Cook, J., & Bradley, C. (2012). Designing social media for informal learn-
ing and knowledge maturing in the digital workplace. Journal of Computer Assisted Learning, 28(3), 235–249.
https://dx.doi.org/10.1111/j.1365-2729.2012.00485.x
Renzel, D., & Klamma, R. (2013). From micro to macro: Analyzing activity in the ROLE sandbox. In Proceedings of the
3rd International Conference on Learning Analytics and Knowledge (LAK ’13), 8–12 April 2013, Leuven, Belgium (pp.
250–254). New York: ACM. https://dx.doi.org/10.1145/2460296.2460347
Ruiz-Calleja, A., Dennerlein, S., Ley, T., & Lex, E. (2016). Visualizing workplace learning data with the SSS Dashboard.
In CrossLAK 2016: International Workshop on Learning Analytics across Physical and Digital Spaces, 25 April 2016,
Edinburgh, UK (pp. 79–86). Retrieved from http://ceur-ws.org/Vol-1601/CrossLAK16Paper15.pdf
Ruiz-Calleja, A., Dennerlein, S., Tomberg, V., Ley, T., Theiler, D., & Lex, E. (2015). Integrating data across workplace learning
applications with a social semantic infrastructure. In Proceedings of the 14th International Conference on Web-Based
Learning (ICWL 2015), 5–8 November 2015, Guangzhou, China (pp. 208–217). Lecture Notes in Computer Science,
Springer. https://dx.doi.org/10.1007/978-3-319-25515-6_19
Ruiz-Calleja, A., Dennerlein, S., Tomberg, V., Pata, K., Ley, T., Theiler, D., & Lex, E. (2015). Supporting learning analytics
for informal workplace learning with a social semantic infrastructure. In Proceedings of the 10th European Conference
on Technology Enhanced Learning (EC-TEL ’15), 15–17 September 2015, Toledo, Spain (pp. 634–637). Lecture Notes in Computer Science, Springer. https://dx.doi.org/10.1007/978-3-319-24258-3_76
Ruiz-Calleja, A., Prieto, L., Ley, T., Rodríguez-Triana, M., & Dennerlein, S. (2017). Learning analytics for professional and workplace learning: A literature review. In Proceedings of the European Conference on Technology Enhanced Learning (EC-TEL 2017), 12–15 September 2017, Tallinn, Estonia (pp. 164–178). Lecture Notes in Computer Science, Springer. https://dx.doi.org/10.1007/978-3-319-66610-5_13
Santos, P., Dennerlein, S., Theiler, D., Cook, J., Treasure-Jones, T., Holley, D., . . . Lex, E. (2016). Going be-
yond your personal learning network, using recommendations and trust through a multimedia question-answering ser-
vice for decision-support: A case study in the healthcare. Journal of Universal Computer Science,22(3), 340–359.
https://dx.doi.org/10.3217/jucs-022-03-0340
Schafer, J. B., Frankowski, D., Herlocker, J., & Sen, S. (2007). Collaborative filtering recommender systems. In The Adaptive
Web: Methods and Strategies of Web Personalization (pp. 291–324). Lecture Notes in Computer Science, Springer Berlin
Heidelberg. https://dx.doi.org/10.1007/978-3-540-72079-9_9
Schmidt, A., Hinkelmann, K., Ley, T., Lindstaedt, S., Maier, R., & Riss, U. (2009). Conceptual foundations for a service-oriented
knowledge and learning architecture: Supporting content, process and ontology maturing. In S. Schaffert, K. Tochtermann,
& T. Pellegrini (Eds.), Networked Knowledge — Networked Media: Integrating Knowledge Management, New Media
Technologies and Semantic Systems (pp. 79–94). Springer. https://dx.doi.org/10.1007/978-3-642-02184-8_6
Schmitz, H. C., Wolpers, M., Kirschenmann, U., & Niemann, K. (2011). Contextualized attention meta-
data. In C. Roda (Ed.), Human Attention in Digital Environments (pp. 186–209). Cambridge, UK: Cambridge University Press.
https://dx.doi.org/10.1017/CBO9780511974519.008
Schoefegger, K., Seitlinger, P., & Ley, T. (2010). Towards a user model for personalized recommendations in work-integrated
learning: A report on an experimental study with a collaborative tagging system. In Proceedings of the 1st Workshop on
Recommender Systems for Technology Enhanced Learning (RecSysTEL 2010), 30 September 2010, Barcelona, Spain (Vol. 1,
pp. 2829–2838). Amsterdam, Holland: Elsevier. https://dx.doi.org/10.1016/j.procs.2010.08.008
Seitlinger, P., Kowald, D., Kopeinik, S., Hasani-Mavriqi, I., Lex, E., & Ley, T. (2015). Attention please! A hybrid resource
recommender mimicking attention-interpretation dynamics. In Proceedings of the 24th International Conference on World
Wide Web, 18–22 May 2015, Florence, Italy (pp. 339–345). New York: ACM. https://dx.doi.org/10.1145/2740908.2743057
Seitlinger, P., Kowald, D., Trattner, C., & Ley, T. (2013). Recommending tags with a model of human categorization. In Pro-
ceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM 2013), 27 October–1
November 2013, San Francisco, CA, USA (pp. 2381–2386). New York: ACM.
https://dx.doi.org/10.1145/2505515.2505625
Seitlinger, P., Ley, T., Kowald, D., Theiler, D., Hasani-Mavriqi, I., Dennerlein, S., . . . Albert, D. (2017). Balancing the
fluency-consistency trade-off in collaborative information search with a recommender approach. International Journal of
Human Computer Interaction,34(6), 557–575. https://dx.doi.org/10.1080/10447318.2017.1379240
Siadaty, M., Gašević, D., & Hatala, M. (2016a). Associations between technological scaffolding and micro-level processes of self-regulated learning: A workplace study. Computers in Human Behavior, 55B(1), 1007–1019. https://dx.doi.org/10.1016/j.chb.2015.10.035
Siadaty, M., Gašević, D., & Hatala, M. (2016b). Measuring the impact of technological scaffolding interventions on micro-level processes of self-regulated workplace learning. Computers in Human Behavior, 59(1), 469–482. https://dx.doi.org/10.1016/j.chb.2016.02.025
Siadaty, M., Gašević, D., Jovanović, J., Milikić, N., Jeremić, Z., Ali, L., . . . Hatala, M. (2012). Learn-B: A social analytics-enabled tool for self-regulated workplace learning. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK ’12), 29 April–2 May 2012, Vancouver, Canada (pp. 115–119). New York: ACM. https://dx.doi.org/10.1145/2330601.2330632
Siemens, G., Gašević, D., Haythornthwaite, C., Dawson, S., Buckingham Shum, S., Ferguson, R., . . . Baker, R. (2011). Open Learning Analytics: An Integrated and Modularized Platform (Tech. Rep.). Society for Learning Analytics Research (SOLAR). Retrieved from https://solaresearch.org/wp-content/uploads/2011/12/OpenLearningAnalytics.pdf
Southavilay, V., Yacef, K., Reimann, P., & Calvo, R. A. (2013). Analysis of collaborative writing processes using revision maps
and probabilistic topic models. In Proceedings of the 3rd International Conference on Learning Analytics and Knowledge
(LAK ’13), 8–12 April 2013, Leuven, Belgium (pp. 38–47). New York: ACM. https://dx.doi.org/10.1145/2460296.2460307
Suthers, D., & Dwyer, N. (2014). Multilevel analysis of uptake, sessions, and key actors in a socio-technical network. In
Proceedings of the Computational Approaches to Connecting Levels of Analysis in Networked Learning Communities
Workshop at Learning Analytics and Knowledge, 24–28 March 2014, Indianapolis, IN, USA (pp. 1–6). New York: ACM.
Retrieved from http://ceur-ws.org/Vol-1137/LAK14CLAsubmission1.pdf
Thüs, H., Chatti, M. A., Brandt, R., & Schroeder, U. (2015). Evolution of interests in the learning context data model. In Proceedings of the 10th European Conference on Technology Enhanced Learning (EC-TEL 2015), 15–17 September 2015, Toledo, Spain (pp. 479–484). Lecture Notes in Computer Science, Springer. https://dx.doi.org/10.1007/978-3-319-24258-3_43