Conference Paper
and Open Learner Model: Confusing?
S. Bull, M. Kickmeier-Rust, R. Vatrapu, M.D. Johnson, K. Hammermueller,
W. Byrne, L. Hernandez-Munoz, F. Giorgini and G. Meissl-Egghart
University of Birmingham, UK
Technical University of Graz, Austria
Copenhagen Business School, Denmark
Talkademy, Austria
Lattanzio Learning, Italy
Abstract. This paper draws on visualisation approaches in learning analytics,
considering how classroom visualisations can come together in practice. We
suggest an open learner model in situations where many tools and activity
visualisations produce more visual information than can be readily interpreted.
Keywords: Learning, learning analytics, open learner model, visualisations
1 Introduction
There is much attention on visualisation and learning analytics, based on large
amounts of data for a variety of purposes. Learning data is our focus here. Various
visualisations can be used, including dashboards (see [2]). We illustrate how
visualisations may be used, and unite them with an open learner model (OLM) to help
teachers and students interpret the range of visual analytics that may be produced.
OLMs are concerned with visualising the learner model (an individual's knowledge,
competencies, etc.). OLMs commonly aim to promote metacognitive behaviours
(reflection, self-monitoring, planning) [3]. So, while learning analytics often show
activity data (interaction time in discussion; links in social networks or collaboration
tasks; performance data), OLMs use inferences drawn from interaction to produce
visualisations of the current state of the learner's understanding, competencies, and so on.
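This distinction can be illustrated with a minimal sketch. The event format and the simple running-estimate update rule below are hypothetical, not the inference actually used in any OLM system: activity analytics report what happened, while a learner model infers an estimate of understanding from it.

```python
# Hypothetical contrast between activity analytics and an inferred learner model.
# The update rule is an illustrative running estimate, not a published algorithm.

def activity_metrics(events):
    """Raw activity data: what happened (counts, time), with no inference."""
    return {
        "posts": sum(1 for e in events if e["type"] == "post"),
        "minutes": sum(e.get("minutes", 0) for e in events),
    }

def infer_mastery(events, rate=0.3):
    """Inferred learner-model value: a 0..1 estimate of understanding,
    nudged towards 1 on correct attempts and towards 0 on incorrect ones."""
    mastery = 0.5  # neutral prior
    for e in events:
        if e["type"] == "attempt":
            target = 1.0 if e["correct"] else 0.0
            mastery += rate * (target - mastery)
    return mastery

events = [
    {"type": "post", "minutes": 4},
    {"type": "attempt", "correct": True},
    {"type": "attempt", "correct": True},
    {"type": "attempt", "correct": False},
]
```

The same event log thus yields two different displays: a count of actions for an activity visualisation, and a single evolving estimate for an OLM view such as a skill meter.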
2 Next-TELL Visualisations
Next-TELL comprises many tools and visualisations to
support learning, and can also visualise activities outside Next-TELL tools, such as
Moodle discussions or Google Docs. Many visualisations display similar information
in a similar form to other approaches (e.g. [2]). Figure 1 shows visualisations from a
Moodle discussion to illustrate the type of information and visualisation available for
user interpretation.

Fig. 1. Discussion visualisations: network graph, word cloud, word count, thread plot
Fig. 2. Google Docs visualisations (revisions; semantic analyses between revisions; word frequency)
Fig. 3. Visualisations for materials used and time spent using them
Fig. 4. Visualisations from a repertory grid exercise: response time; content word cloud
Fig. 5. Transforming detailed conceptual data for user interpretation
Fig. 6. OLM views: treemap; skill meters; word cloud

As in other learning analytics solutions, they indicate what is happening (here:
interactions between learners, topics of discussion, word counts and splitting of
threads). Their benefits are correspondingly similar: users can see "what is
going on". The same visualisations can be reused, allowing flexibility across
activities, e.g. a teacher may wish to view data about a subject across different
activity types, such as discussion forums, Google Docs, etc. Figure 2 gives additional
examples for writing in Google Docs: revisions (including semantic analyses between
revisions), and word frequency. As in Figure 1, these can also offer useful
human-interpretable information about actions and progress. Figure 3 offers a
different type of visualisation: learning materials used (left), with the dark line
showing the number of interactions with each material, and the coloured blocks the
duration of interaction. The closer the block to the line, the closer the student is to
completing the time for that topic (in a specific tool and context). Time spent is
shown in the pie chart and calendar (right), allowing teachers to see whether students
are interacting as expected.
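The kind of data behind a discussion visualisation such as Figure 1 can be sketched as follows. The post format here is hypothetical, not Next-TELL's actual log schema; the sketch derives the word counts and the reply edges that a network graph would draw from raw discussion posts.

```python
from collections import Counter

# Hypothetical discussion log: each post records its author and, for replies,
# the author being replied to. The format is illustrative only.
posts = [
    {"author": "ana", "reply_to": None,  "text": "photosynthesis needs light"},
    {"author": "ben", "reply_to": "ana", "text": "and chlorophyll to capture light"},
    {"author": "ana", "reply_to": "ben", "text": "chlorophyll absorbs red and blue light"},
]

def word_counts(posts):
    """Words contributed per author (the word-count bar-chart data)."""
    counts = Counter()
    for p in posts:
        counts[p["author"]] += len(p["text"].split())
    return counts

def reply_edges(posts):
    """Directed replier -> replied-to edges with weights (the network-graph data)."""
    edges = Counter()
    for p in posts:
        if p["reply_to"]:
            edges[(p["author"], p["reply_to"])] += 1
    return edges
```

A word cloud would use the same token counts aggregated over all posts rather than per author; the thread plot would use the `reply_to` chains.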
Other approaches are more conceptually oriented. The repertory grid asks students
to distinguish similar and different items from a group of three words/concepts. This
generates activity data, but can additionally help teachers better understand their
students, e.g. by identifying misconceptions if these are included in the word groups
presented in the exercise. Figure 4 shows response time and a conceptual word cloud.
Each of the above, in or outside Next-TELL, can be effective. However, with
increasing use of technologies in learning and increasing use of learning analytics
come questions of how to use the visualisation tools. With more visualisation it may
become less easy to gauge learning. Approaches that model knowledge at a detailed
level can further enhance teacher understanding of students' knowledge or
conceptualisations. The Next-TELL ProNIFA tool is based on Competence-based
Knowledge Space Theory [4], which enables structuring and representation of a
domain based on prerequisite relations using a set-theoretic framework (relations
amongst problems such as test items). This can result in detailed domain
representations and routes that an individual might take through a domain (see the
Hasse diagram, left of Figure 5). This can be translated for teachers, and a variety of
learning analytics produced (centre and right of Figure 5: from activity and chat logs
in a virtual world). This interpretation requires pre-specified and defined conditions,
heuristics and rules, but is a powerful approach when such specifications are
available, or if a teacher wishes to define rules. In the case of the virtual world, it is
difficult to get the data straight from the activity into a form where competencies can
be easily recognised.
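The set-theoretic idea behind Competence-based Knowledge Space Theory can be sketched with a toy prerequisite relation (hypothetical competencies, not ProNIFA's actual model): admissible competence states are those that contain the prerequisites of every competency they include, and the resulting family of states is closed under union, which is the structure a Hasse diagram such as the one in Figure 5 (left) displays.

```python
from itertools import combinations

# Toy domain: competency 'c' requires 'b', and 'b' requires 'a' (hypothetical).
competencies = ["a", "b", "c"]
prereqs = {"a": set(), "b": {"a"}, "c": {"b"}}

def admissible(state):
    """A state is admissible if it contains every prerequisite of each member."""
    return all(prereqs[q] <= state for q in state)

def knowledge_states(items):
    """Enumerate all admissible states by filtering every subset of the domain."""
    states = []
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if admissible(set(subset)):
                states.append(frozenset(subset))
    return states

states = knowledge_states(competencies)
# With this linear prerequisite chain, the admissible states are the chain
# {}, {a}, {a,b}, {a,b,c}; each edge of that chain is a possible learning step.
```

The learning routes mentioned above correspond to paths through this state structure in which each step adds competencies consistent with the prerequisites.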
Through its rules, ProNIFA allows activity and performance data to be displayed
(Figure 5), but can also send it to the OLM. The OLM takes data from various sources
(e.g. specific tools, discussion/chat interactions, Google Docs activity) and manual
self, peer and teacher assessments. The data may come to the OLM through the API,
if suitably structured, or it may be transformed using ProNIFA. The OLM is based on
competency frameworks: teachers can select from existing frameworks (e.g. the
Common European Framework of Reference for Languages [5]), or can build their
own, linking competencies with activities (e.g. competencies for meeting planning
and facilitation in different activities: virtual world, chat, Skype, face-to-face). Such a
framework may span several subjects, e.g. 21st Century Skills, English, Business.
Figure 6 shows the treemap OLM view (of one of the sub-groups of a meeting
competency framework), and corresponding information in skill meters and a word
cloud. This can be displayed to: the student that is modelled; peers, if the model has
been released; and teachers, with reference to individuals or the group. Teachers may
use this information in classroom orchestration if data comes from several sources, or
for subsequent planning. Students may use the information primarily for
metacognitive behaviours such as planning, self-monitoring, self-assessment and
reflection, helping them take responsibility for some of the decisions in their learning.
Users may drill down to reach evidence for the learner model from a competency or
an activity perspective. (Evidence may be quantitative; artefacts (e.g. an essay, or a
screenshot of learning analytics data); self or peer assessments; or automated data
through ProNIFA or directly from another online activity.) By default, each activity
contributing to the OLM is weighted equally, but more recent data has a higher
weighting. Teachers may alter these weightings. The OLM can therefore draw
together data from various sources when the need is to focus on learner competencies,
while other learning analytics visualisations are used to the extent that they best suit a
specific purpose.
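The weighting described above can be sketched as follows. The scheme shown (equal base weights discounted exponentially by age, with teacher overrides as per-source multipliers) is a hypothetical illustration; the paper does not specify the OLM's actual formula.

```python
# Illustrative evidence aggregation: recency-weighted mean with teacher overrides.
# The decay constant and override mechanism are assumptions for this sketch.

def olm_score(evidence, decay=0.8, overrides=None):
    """Combine evidence items (each a 0..1 score with an age in days) into one
    competency estimate. Newer items weigh more; teachers may rescale a source."""
    overrides = overrides or {}
    weighted = total = 0.0
    for item in evidence:
        w = decay ** item["age_days"]            # more recent => higher weight
        w *= overrides.get(item["source"], 1.0)  # teacher adjustment
        weighted += w * item["score"]
        total += w
    return weighted / total if total else 0.0

evidence = [
    {"source": "chat",        "score": 0.4, "age_days": 10},
    {"source": "google_docs", "score": 0.8, "age_days": 1},
    {"source": "self_assess", "score": 0.6, "age_days": 0},
]
```

For example, a teacher who distrusts self-assessment for a given class could pass `overrides={"self_assess": 0.5}` to halve that source's contribution without discarding its evidence.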
In summary: the challenge arises when visualisations in different tools show different
information in similar forms or, conversely, the same information in different forms.
If teachers wish to benefit from the range of learning tools and activity tracking
possibilities available, it is inevitable that they will encounter this. Even our own
visualisations illustrate this (e.g. bar graphs for activities look similar to the OLM
skill meters; word clouds show concepts in a repertory grid exercise and competencies
in the OLM; pie charts show speed of response and materials used). Next-TELL does
not expect all tools or visualisations to be used by a single teacher. Nevertheless,
several tools may be selected as applicable, and we expect that a teacher may also use
other tools in addition to those offered by Next-TELL. Apparent 'clashes' will
therefore likely occur. Given this, we suggest teachers use the tools and learning
analytics most suitable for the purpose at the time (essay revision visualisation when
considering writing strategies; network graph for peer discussion; use of materials to
indicate progress through tasks). The OLM can also take this data, and can be used on
its own for recognising competencies with reference to individual activities as well as
across activities. In some cases these visualisations may be used reflectively; in other
cases they may be used to support on-the-spot decision-making (by learner or teacher).
Acknowledgements. This project is supported by the European Commission (EC) under
the Information Society Technology priority FP7 for R&D, contract 258114
NEXT-TELL. This document does not represent the opinion of the EC, and the EC is
not responsible for any use that might be made of its content.
References

1. Long, P., Siemens, G.: Penetrating the Fog: Analytics in Learning and Education,
EDUCAUSE Review, Sept/Oct, 31-40 (2011)
2. Verbert, K., Duval, E., Klerkx, J., Govaerts, S., Santos, J.L.: Learning Analytics Dashboard
Applications, American Behavioral Scientist, DOI: 10.1177/0002764213479363 (2013)
3. Bull, S., Kay, J.: Open Learner Models as Drivers for Metacognitive Processes, in R.
Azevedo, V. Aleven (eds.), International Handbook of Metacognition and Learning
Technologies (2013)
4. Doignon, J.-P., Falmagne, J.-C.: Knowledge Spaces, Springer-Verlag, Berlin (1999)
5. Council of Europe (not dated). The Common European Framework of Reference for
Languages, Accessed 18 March 2013.