Contextualising Learning Analytics with Classroom
Observations: a Case Study
Maka Eradze1,2 [0000-0003-0723-1955], María Jesús Rodríguez-Triana1 [0000-0001-8639-1257],
Nikola Milikic3 [0000-0002-3976-5647], Mart Laanpere1 [0000-0002-9853-9965]
and Kairit Tammets1 [0000-003-2065-6552]
1 School of Digital Technologies, Tallinn University, Tallinn 10120, Estonia
2 Department of Education and Human Sciences, University of Modena and Reggio Emilia,
Italy
3 University of Belgrade, Studentski trg 1, Belgrade, Serbia
maka@tlu.ee
Abstract. Educational processes take place in physical and digital places. To
analyse educational processes, Learning Analytics (LA) enables data collection
from the digital learning context. At the same time, to gain more insights, LA
data can be complemented with data coming from physical spaces, enabling
Multimodal Learning Analytics (MMLA). To interpret this data, theoretical
grounding or contextual information is needed. Learning designs (LDs) can be
used for contextualisation; however, in authentic scenarios the availability of
machine-readable LDs is scarce. We argue that Classroom Observations (COs),
traditionally used to understand educational processes taking place in physical
space, can provide the missing context and complement the data from
co-located classrooms. This paper reports on a co-design case study from an
authentic scenario that used CO to make sense of digital traces. We posit that
the development of MMLA approaches can benefit from co-design
methodologies; through the involvement of the end-users (project managers) in
the loop, we illustrate how these data sources can be systematically integrated
and analysed to better understand the use of digital resources. Results indicate
that CO can drive sense-making of LA data where a predefined LD is not
available. Furthermore, CO can support layered contextualisation depending on
research design, rigour and systematic documentation/data collection efforts.
Also, co-designing the MMLA solution with the end-users proved to be a
useful approach.
Keywords: Classroom Observations, Learning Analytics, Multimodal Learning
Analytics, Blended Learning, Co-located Classrooms, Contextualisation,
Learning Design
Interaction Design and Architecture(s) Journal - IxD&A, N.44, 2020, pp. 71 - 95

1 Introduction

Teaching and learning processes increasingly take place in blended learning settings,
in both physical and digital spaces. While Learning Analytics (LA) solutions offer
automated means to collect and analyse digital traces, they only provide a partial
view of the whole picture. To cover this gap, the subfield of Multimodal Learning
Analytics (MMLA) integrates evidence from the physical spaces using other
automated means such as sensors, EEG devices, eye tracking, etc. Despite this, to make sense of
those datasets, pedagogical grounding and/or contextual information may still be
needed [1]. Researchers suggest using learning design (LD) to contextualise the
analysis [2]. However, practitioners do not always produce digital versions of the
scripts or LD that can be automatically interpreted, due to technological or LD
adoption challenges [3]. Alternatively, classroom observations have been used in
authentic scenarios to understand educational practices taking place in the physical
space, providing additional and highly contextual information alongside other data
sources [4][5][6]. Aside from the abovementioned issues, the complex process of
embedding innovation in authentic contexts faces challenges related to human
factors [7], and a co-design methodology that involves the user in the development
of LA solutions is one way to respond to adoption challenges [8].
This paper reports on a case study in which researchers and end-users co-designed
an MMLA solution where classroom observations were used in combination with
digital traces to better understand the adoption of digital learning resources in
authentic learning scenarios. We argue that, in co-located classrooms, systematic CO
can help to understand the context where the digital traces took place in authentic,
real-life scenarios. Moreover, a co-design methodology can help address the
adoption issues referred to in previous research, by co-designing the MMLA
solution with end-users.
2 Making Sense of Learning Analytics: context and design-aware
observations
LA is a rapidly developing field of research and practice that seeks to analyse learning
processes and their context to optimize, support, challenge and reshape educational
practices [9]. Inherently, it focuses mainly on the data collected through digital
means, providing a strategic way to understand how digital tools are used. However,
in blended learning, without knowing the context where the digital artefacts were
used, it is sometimes difficult to make sense of the available data [2]. To contribute to
LA sense-making, different solutions have been proposed in the literature: when the
learning theories or the pedagogical approach are known, some authors have
suggested adopting theory-driven approaches to obtain meaningful analytics [10, 11].
However, this does not guarantee that the interpretation of the data fits the reality of
the learning context.
Other researchers have proposed that the use of LDs can contribute to the
contextualisation of data analysis [2][12]. While the benefits of using the LD to guide
data analyses have been reported by many authors, access to such a design represents
one of the main challenges [13]. Frequently, due to time constraints, practitioners may
not even document their lesson plans [14]. In other cases, the LD may be collected in
a format that is not automatically interpretable (e.g., hand-written diagrams, schemes,
or lists of steps). In the optimal but less frequent scenario [2, 15], the practitioners
may have registered their designs in an authoring tool. However, even in this case,
interoperability with the tool is not guaranteed since there is no single data format to
represent the LD [16].
A different method used to understand learning processes or situations is
classroom observation [17]. While some data collection methods (such as surveys or
interviews) target participant views, classroom observations can provide a
non-judgmental description of learning events [18]. CO can gather data on individual
behaviours, interactions, or the physical setting by watching behaviour, events and
artefacts or noting physical characteristics [17]. Observation types vary on a
continuum from unstructured, through semi-structured, to structured (systematic):
unstructured observations produce qualitative data and structured observations
quantitative data [19]. Some authors argue that CO benefits from qualitative and
unstructured data gathering [17]; others advise against it since it may result in large
volumes of unstructured data [20]. In contrast, while reducing expressivity,
systematic (structured) observations allow for more efficient analysis and data
processing [21]. Therefore, systematic observations are especially suitable to be
combined with digital traces, enriching each other to understand learning processes
and contexts with the help of multimodal learning analytics [22].
Traditional classroom observations require human inference and are highly
contextual; human-mediated labelling is often used in MMLA to relate raw data to
more abstract constructs [23][24]. Observation data can be integrated with LA for
triangulation purposes [25], for observing technology-enhanced learning [26], and for
inferring meaningful learning interaction data through annotations of direct
observations [27]; video annotation to triangulate multimodal datasets, extract the
learning context and segment it into time intervals has also been suggested [24].
Computer-assisted observation can support the observation process by enforcing
specific coding schemes and preventing missing data, speeding up observations [28]
and enhancing the validity and reliability of the data [29]. Computer-assisted
systematic observation tools have been suggested for recording interactions to study
social dynamics at work [30], to annotate emotions from audio and video for
multimodal analysis [31], to study student emotion and behaviour [29], etc. Most of
the abovementioned tools are based on specific coding protocols, specific dimensions
of data (for instance, emotions) or theories (social dynamics), with little flexibility for
developing one's own coding schemes; hence they may not cater for different
research needs, cannot be guided by LD, and may not be useful for the
contextualisation of data analysis.
Some authors [32] classify data according to whether collection and interpretation
require human involvement or not. While digital traces can be easily collected
through automatic means, higher-level interactions taking place in the physical space
may be more challenging to detect and record in a computational format. Thus,
observers can contribute to sense-making, especially when data comes totally or
partially from physical spaces [33].
Considering the aforementioned information, and based on the lessons learned from
previous studies [12][22], we have proposed the Context-aware Multimodal Learning
Analytics Taxonomy (Fig. 1) [34]. The taxonomy classifies different research designs
depending on how systematic the documentation of the learning design and the data
collection have been:
Ideal - Systematic documentation and data collection: In the most desirable case,
the learning design (including actors, roles, resources, activities, timeline, and
learning objectives) is set up-front and documented in an authoring tool (e.g.,
LePlanner (https://leplanner.ee) or WebCollage
(https://www.gsic.uva.es/webcollage/)). Then, during the enactment, logs are
collected automatically from the digital space and systematic observations from the
physical one. During the enactment, the lesson structure is also inferred through
observations. To ensure interoperability, actors and objects need to be identifiable
(across the learning design, logs and observations) and timestamps for each event
need to be registered [35]. Once the data is aggregated in a multimodal dataset,
further analysis can be executed.
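The interoperability requirement above (shared identifiers plus a timestamp per event) can be sketched as a simple merge of the two event streams into one time-ordered multimodal dataset. This is an illustrative sketch under those assumptions, not the project's actual pipeline; all field names and values are invented.

```python
from datetime import datetime

def merge_streams(log_events, observation_events):
    """Merge digital-trace events and observation events into one
    time-ordered multimodal dataset. Assumes both streams use the
    same actor/object identifiers and ISO-8601 timestamps."""
    merged = [dict(e, source="log") for e in log_events]
    merged += [dict(e, source="observation") for e in observation_events]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["timestamp"]))

# Invented example events: one log entry and one observation.
logs = [{"actor": "student-07", "verb": "answered",
         "object": "quiz-1", "timestamp": "2018-05-14T10:12:30"}]
obs = [{"actor": "student-07", "verb": "raised-hand",
        "object": "teacher", "timestamp": "2018-05-14T10:10:05"}]

dataset = merge_streams(logs, obs)
# The observation precedes the log event in the merged timeline.
```

Because both events share the actor identifier `student-07`, the merged sequence lets an analyst follow one individual across the physical and digital spaces.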
Fig. 1. Context-aware MMLA taxonomy
Authentic (baseline) - Non-systematic documentation but systematic data
collection: We regard this level as a compromise given the limitations of authentic
settings, while still being rich in terms of data. Here, the predefined learning design
cannot be automatically used to guide the analysis (either because of its format or
because it is not available); however, the timestamped lesson structure is inferred by
the observer. Moreover, the actors are not identifiable across observations and digital
traces. Nevertheless, both structured observations and logs are systematically
gathered and collected in the Learning Record Store using a common format (e.g.,
xAPI). These conditions enable the application of contextualised analysis at a more
baseline level, using multimodal analytics.
Limited - Non-systematic documentation or data collection: Data collection
happens non-systematically. As in the previous case, no information about the
learning design is available (i.e., actors are not known). In terms of the design of the
data collection, the protocol with corresponding codes may not be predefined, and
semi-structured (non-systematic) observations are used. Thus, even if logs are
systematically gathered, the lack of systematisation of the observations hinders the
application of multimodal data analysis. Although this is not an advisable scenario,
logs and observations can be analysed independently and still provide an overview of
what happened in the physical and digital planes. Besides, even if observations are
done systematically, if the vocabulary (actors, objects and actions) is not agreed
across datasets, then the potential of the multimodal analysis could be limited.
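One way to read the vocabulary-agreement condition is as a mapping from dataset-specific verbs onto a shared vocabulary before joint analysis. The mapping below is invented for illustration (including the Estonian observation verbs); it is not the coding scheme used in the study.

```python
# Map dataset-specific verbs onto a shared vocabulary so observations
# and logs can be analysed jointly. All mappings here are hypothetical.
SHARED_VERBS = {
    "vastas": "answered",     # observation verb (Estonian: "answered")
    "submitted": "answered",  # log verb
    "klõpsas": "opened",      # observation verb (Estonian: "clicked")
    "launched": "opened",     # log verb
}

def normalise(statement):
    """Return a copy of the statement with its verb mapped onto the
    shared vocabulary; unknown verbs are kept as-is."""
    verb = statement["verb"]
    return dict(statement, verb=SHARED_VERBS.get(verb, verb))
```

With such a table agreed up-front, a log event `submitted` and an observed `vastas` both normalise to `answered` and can be counted together.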
According to some authors, in many fields the design of the data collection tools
is not discussed; this is especially true in the field of observations [36]. Bearing in
mind the constraints that LD-aware analysis may entail, we hypothesize that focusing
on the baseline scenario will help us to study and better understand authentic
scenarios in non-experimental settings, without ad-hoc tools, where such innovations
will most probably be applied. We argue that the development of such innovations
through the involvement of the "user in the loop" and a research-based design process
is important. In the following sections, through a case study involving a participatory
approach, we illustrate the feasibility of using observations to contextualise the data
analysis in an authentic scenario, involving the users in the analysis and
interpretation of data. We argue that, by providing the alternative of using
observations when the design is not available, more authentic scenarios will benefit
from contextualised MMLA solutions. Moreover, through the suggested user
involvement in authentic settings, we extract recommendations for the future
development of MMLA solutions.
3 Research methodology and research questions
The overarching methodology of this research is a research-based design process that
relies on the co-design of innovation through participatory approaches and stems
from design-based research [37]. The stages of research are as follows: contextual
inquiry, participatory design, product design, and production of a software prototype
as a hypothesis. These stages are not strictly separated, and the research methodology
suggests iteratively alternating between stages. Three stages were covered in previous
works: contextual inquiry, participatory design, and product design [12, 38-41]. This
phase partly goes back to contextual inquiry and product design while also presenting
the software prototype as a hypothesis.
The main goal of this research is to better understand how MMLA can benefit from
classroom observations and what value observations may have for the sense-making
of digital traces gathered from an authentic context across physical and digital
spaces. Therefore, the main research questions addressed in the study are:
RQ1: Which aspects of digital-trace-based LA could benefit from observations?
RQ2: What is the added value that observations offer to the user in terms of
meaning, context and quality?
The development and adoption of MMLA solutions that can be used in real-life
situations is a highly complex process, and human factors are to be taken into
account [42]. To explore the feasibility of using observations for the
contextualisation of data analysis in authentic settings, as well as to gain a deeper
understanding of sense-making processes and alleviate adoption issues, we employ
the case study methodology "to examine the instance in action" [43] by
progressively involving users in a co-design process. To reach this goal we followed
a specifically developed method for the design of MMLA solutions that entails
involving the end-users in the loop [8]. This method defines four steps for the
co-design of MMLA solutions: a) understanding the MMLA solution; b) definition of
the questions to be asked of the MMLA solution; c) reflection about the contextual
constraints and the MMLA affordances; d) refinement of the scenario and
customisation of the MMLA solution.
Two project managers were involved in the co-design and evaluation of an
MMLA solution. The study is framed within the Digiõpevaramu
(https://vara.e-koolikott.ee/) project, whose main goal was to better understand how
digital learning resources were used in the classroom. To achieve this goal,
observations and logs from five lessons were analysed, also involving visualisation
techniques. The study spanned two iterations. The first iteration was mainly
exploratory: focusing on a single lesson, exploratory data analysis was carried out to
identify indicators and visualisations that could be of interest to the project managers.
Based on the lessons learnt, in the second iteration the analysis of all five lessons was
presented to the project managers to gain further insights about the customisation of
the MMLA solution. During this process, mediated through data analysis,
semi-structured questionnaires and interviews (one interview per iteration) helped us
gather feedback from the users on the further customisation of the MMLA solution.
Questionnaire and interview data were analysed with the content analysis method
and are presented in Section 4.4.
4 Case study
4.1 Context of the study
The study was conducted within the project Digiõpevaramu. Task-based [44] digital
materials were co-developed by teachers and university experts, and 6000 digital
learning resources were made available through an Estonian national-level
aggregator. Teachers could re-use the resources and mix different tasks into a
collection to be used in the classroom. Materials were piloted in spring 2018 with 50
teachers and 1200 students from different types of Estonian secondary schools. While
the project collects logs about the usage of the digital materials, this information was
insufficient to understand how those materials were integrated into teaching practice.
Therefore, observers attended several lessons to collect evidence about classroom
practice.
The case study involved two managers of the project who wanted to understand
how the digital materials were used in the pilots. The participants of the study
designed the observation protocol which was used in the different pilots. This paper
focuses on the iterative, exploratory data analysis of 1+5 lessons of these
observations. After the analysis of one specific lesson, we analysed five more lessons
through the involvement of stakeholders, by introducing different types of data in the
dataset.
4.2 Observational Data Collection Instrument - Observata
A classroom observation app, Observata (https://observata.leplanner.ee) [41], was
used to design and systematically observe the lessons where the digital resources
were used. Apart from supporting unstructured observations, this tool enables
collecting data through systematic observations based on learning interactions (the
learning event is the unit of analysis). While the tool enables the connection with a
predefined LD (automatically imported from LePlanner [45]), this is not compulsory.
The tool also allows for inferring learning activities (emerging plan/observed lesson
structure) from the lesson implementation and collecting field notes (unstructured
observations) and photos.
Fig. 2. Observata screens (from left to right): observation view to collect data in xAPI format,
data visualised on the timeline, data visualised on the dashboards.
To aid the observation, the tool enables the user to define the foci of interest,
subjects and objects up-front, speeding up the systematic observations. Observations
are modelled as xAPI statements. xAPI is a specification that enables the collection
of digital traces in the form of statements with a subject, verb, object structure
similar to an English-language sentence (https://experienceapi.com/overview) (see
Fig. 2, left). Data can be stored and downloaded, but also visualised on a timeline in
xAPI format right after data collection (middle), and analytics with the structured
observations is provided on a dashboard (right). Aside from this, Observata allows
for an open coding protocol while still enabling systematic data collection.
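As an illustration of the statement structure just described, a minimal xAPI-style statement can be modelled as a nested dictionary. The identifiers, IRIs and names below are invented placeholders, not the values emitted by Observata.

```python
# A minimal xAPI-style statement (subject-verb-object plus timestamp).
# All names and IRIs are illustrative placeholders.
statement = {
    "actor": {"name": "student-03"},
    "verb": {"id": "https://example.org/verbs/asked",
             "display": {"en-US": "asked"}},
    "object": {"id": "https://example.org/objects/teacher",
               "definition": {"name": {"en-US": "teacher"}}},
    "timestamp": "2018-05-14T10:15:00Z",
}

def as_sentence(stmt):
    """Render a statement as the English-like sentence it encodes."""
    return " ".join([stmt["actor"]["name"],
                     stmt["verb"]["display"]["en-US"],
                     stmt["object"]["definition"]["name"]["en-US"]])
```

Rendered with `as_sentence`, the example reads "student-03 asked teacher", mirroring the subject, verb, object sentence structure the specification is built on.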
4.3 Process: Involving users in the design of MMLA solutions
To better understand the added value of combining observations and digital traces to
contextualise the analysis at an early stage, we followed a method to progressively
involve end-users in the design of MMLA solutions [8]. While this process has only
four steps (a. understanding the MMLA solution; b. defining the questions to be
answered by the MMLA solution; c. reflection on contextual constraints and
affordances; d. refinement of the scenario and customisation of the MMLA solution),
we added an extra iteration of the last two steps. This method allowed us to
iteratively analyse the data and co-design the MMLA solution, identifying indicators
and visualisations that better fit the stakeholders' needs.
In the first iteration, we analysed a history lesson that took place in May 2018,
lasting 40 minutes, taught by one teacher to 15 students. One observer observed the
lesson. According to the data collected by the observer, the teacher followed a
sequence of 6 activities, namely: 1. Introduction to the lesson. 2. Presentation of a
new topic. 3. Independent work with digital learning. 4. Feedback on independent
work. 5. A new presentation. 6. Quiz. Since the learning design was not formalised in
advance by the teacher, this inferred structure of the lesson provided us with
contextual information to understand what happened during the lesson.
Iteration 1. Step 1. Understanding the MMLA solution: Student interactions with the
digital resources were collected in the form of anonymized xAPI statements. Aware
of the limitations of the log analysis, the participants of the study planned
observations to gather evidence about how the materials were integrated into the
classroom. Also, to support the systematic collection of observations in a format
compatible with MMLA analysis (xAPI statements stored in a Learning Record
Store (LRS)), the project managers provided observers with Observata (Section 4.2).
Table 1. Relation of needs posed by the project managers, extracted topics of interest, and
allocation per co-design iteration

Participant 1's needs:
- Overall question: how are resources used?
- "What happened between the subjects when one of the activities started?" (TI1)
- "Categorize situations that happened in the classroom, using them as a context for
log data" (TI1, TI2)
- Differences in implementation patterns and in using the digital learning resources (TI3)
Topics of interest addressed at the lesson level (iteration 1):
- TI1. How was the interaction between the actors according to different activities?
- TI2. How were the interactions with digital resources according to different
activities?

Participant 2's needs:
- Understand how teachers integrate new resources into their pedagogical practices:
do they use them traditionally to replace textbooks, more for individual work, or to
enhance new learning paradigms? (TI3)
Topics of interest addressed at the project level (iteration 2):
- TI3. What are the patterns of usage of digital learning resources?
Iteration 1. Step 2. Define the questions to be answered by the MMLA solution: The
main goal of the project managers was to better understand actual practices and
patterns of using digital learning resources in co-located classrooms and to spot what
obstacles teachers face. To this aim, several lessons were studied through systematic
coding of interactions and inference of the lesson structure. In this step, the project
managers posed the main questions they wanted to answer with the MMLA solution
(see Table 1), taking into account the affordances and contextual constraints (Step 3)
of the MMLA solution. Since these questions were of different granularity, in the
first iteration we focused on lesson-level questions. Once we had clarified how to
study individual lessons, in the second iteration we also addressed those questions
that entailed analysing multiple lessons to extract patterns.
Iteration 1, Step 3. Reflection on contextual constraints and the MMLA affordances:
The participants were informed about the limitations and affordances imposed by the
observation design and the technological infrastructure. On the one hand, several
constraints were hindering the multimodal analysis. First, the actors were not
identifiable across datasets, hindering the possibility of merging the data and
following individuals across spaces. Nevertheless, an independent analysis of each
dataset was done and then presented together to provide a more holistic view.
Second, the resources used during the session were not known. Thus, the traces
stored in the LRS were manually selected based on the timeframe and the topic of
the session. However, there was no way to differentiate them, as the same digital
resources were used in another classroom at the same time. Third, the observation
statements were originally in Estonian and translated into English for the analysis,
introducing potential noise in the data. Fourth, each dataset used different data values
(i.e., different types of actors, verbs, and objects/artefacts). Therefore, as mentioned
in the first point, this aspect did not allow us to run the analyses of both datasets
together in a meaningful way.
Fig. 3. Top: overview of the amount and type of interactions in the physical (left) and in the
digital space (right). Bottom: the frequency of each (inter)action type or verb in observations
(physical interactions) and logs (digital interactions). Note the difference in scale between the
graphs.
On the other hand, the multimodal dataset offered multiple opportunities. First,
observations and logs complement each other, offering a more holistic picture of the
learning activity. Second, it is possible to analyse data within the context of the
emerging, observed lesson structure during the implementation of a lesson
(visualised in Figure 4). Finally, observation data includes different types of physical
artefacts and different levels of interactions (student-teacher, teacher-student,
student-student, teacher-artefact). Figure 3 provides an overview of the data collected
through observations and logs, as well as the type and frequency of the interactions
registered.
Fig. 4. Timeline representation of the interactions registered in observations (physical
interactions, top) and digital traces (digital interactions, bottom). The vertical lines represent
the limits of the activities (observed lesson structure) in which the interactions took place.
The data were analysed within the context of learning activities and visualised by
plotting the interactions in the sequence of activities inferred by the observer. The
plots were placed on top of each other. The metrics used in the analysis were chosen
to meet the questions posed by the project managers: the frequency of interactions of
participants contextualised within the activities, and the types of interactions
contextualised within the activities, across the two datasets. Figure 4 illustrates the
outcomes obtained from the analysis.
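The contextualisation step, assigning each interaction to the observer-inferred activity whose interval contains it, can be sketched as interval bucketing by timestamp. The activity names and times below are invented; this is a sketch, not the analysis code used in the study.

```python
from bisect import bisect_right

def contextualise(events, activities):
    """Attach to each event the observer-inferred activity whose time
    interval contains it. `activities` is a list of (start, name) pairs
    sorted by start time; starts and event times share one clock
    (here, seconds from lesson start)."""
    starts = [start for start, _ in activities]
    out = []
    for event in events:
        i = bisect_right(starts, event["t"]) - 1
        out.append(dict(event, activity=activities[i][1] if i >= 0 else None))
    return out

# Hypothetical lesson structure and two events.
acts = [(0, "Introduction"), (300, "Presentation"), (900, "Independent work")]
evs = [{"t": 120, "verb": "asked"}, {"t": 950, "verb": "opened"}]
tagged = contextualise(evs, acts)
```

Once every event carries an `activity` label, per-activity frequencies of interactions follow from a simple group-by over that label.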
We also applied Social Network Analysis (SNA) to both datasets (eigenvector
centrality measures, betweenness, page-rank, degree, in-degree, and overall network
statistics). To transform the xAPI data from observations and digital interactions into
graph data, actors and objects (resources, in the case of digital traces) were defined
as nodes, and interactions (i.e., verbs) as edges, which could be bidirectional
(subjects interacting with objects and vice versa) or unidirectional (actors interacting
with digital objects). Only one SNA graph is used to illustrate the results obtained
through this kind of analysis (see Figure 5).
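The graph construction described above can be sketched with plain counts standing in for the simplest centrality measures. The statements are invented, and a real analysis would use a graph library for page-rank and eigenvector centrality rather than the degree counts shown here.

```python
from collections import Counter

def build_edges(statements):
    """Turn subject-verb-object statements into directed actor->object
    edges, as in the SNA transformation described above."""
    return [(s["actor"], s["object"]) for s in statements]

def degrees(edges):
    """In-degree and out-degree, the simplest centrality measures."""
    out_deg = Counter(src for src, _ in edges)
    in_deg = Counter(dst for _, dst in edges)
    return out_deg, in_deg

# Hypothetical statements mixing physical and digital interactions.
stmts = [
    {"actor": "teacher", "verb": "asked", "object": "student-1"},
    {"actor": "student-1", "verb": "answered", "object": "teacher"},
    {"actor": "student-2", "verb": "opened", "object": "resource-9"},
]
out_deg, in_deg = degrees(build_edges(stmts))
# resource-9 only receives interactions: a unidirectional edge from an
# actor to a digital object, as in the log-based graph.
```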
Fig. 5. SNA of the logs: node colour encodes the page-rank (the greener the node, the higher
its page-rank, hence its relative importance) and node size encodes the eigenvector centrality
(the bigger the circle, the more influential the node).
User feedback. The participants (i.e., the project managers) received a report
including the main visualisations and brief introductions to the concepts or metrics
used (for instance, SNA terminology). Based on this report, they filled out a
questionnaire (which also includes the visualisations:
http://bit.ly/MMLAstudyquestionnaire) to collect specific feedback on indicators for
further analysis, as well as general feedback on the study and datasets based on the
analysed lesson. Most of the time, the two participants thought it was useful to see
both datasets separately and together to understand the adoption of digital resources.
They thought it was somehow useful or very useful (on a scale of very useful,
somehow useful, not useful at all) to have data from physical and digital spaces to
understand the adoption of digital resources, including not only the systematic
observations and the logs but also the lesson plan inferred by the observer. SNA was
not considered useful since neither actors nor resources could be identified across
observations and logs, and this kind of analysis did not establish a connection to the
timeline or the inferred lesson plan. The first-iteration results and data challenges
(also defined in the constraints of Iteration 1, Step 3) are reported below; these
informed the analysis of the next iteration.
Table 2. List of visualisations and analysis carried out in iteration 1. For each of them, per-
ceived added value and detected challenges are listed.
After the questionnaire, unstructured interviews were also scheduled. The results
from this questionnaire and interview are summarized in Section 5.
Iteration 1. Step 4. Refinement of the scenario and customisation of the MMLA solu-
tion. The feedback obtained from Iteration 1 (see Table 2) informed step 4 and further
analysis. While both participants acknowledged the added value of using observations to make sense of what happened in the classroom at the physical and digital levels, several ideas emerged to improve the MMLA solution. Apart from the integration of MMLA dashboards with the observation tool, new relevant data sources that could contribute to the contextualisation were mentioned. These include teachers' reflections and observations (even if they are not systematic), the LD inferred by the observers, or LD provided a posteriori. Presenting the visualisations together with explanations, in a storytelling manner, was well appreciated by the participants of the study. Based on the study, the project managers would like to explore which (novel) learning activities were designed around the usage of digital learning resources to support different learning paradigms.
Visualisation | Analysis | Feedback and value | Challenge
Plot, time-based | Separate plots (placed on top of each other) of (inter)actions according to participants, within the context of the observed lesson structure | Somehow useful / useful, but only observations allow for distinguishing actor roles | Student IDs missing for joint analysis; actors' roles not distinguishable in digital logs
Plot, time-based | Separate plots (placed on top of each other), plotting (inter)actions within the context of the observed lesson structure | Somehow useful / useful; verbs and actions complement each other; the main value is the observed lesson structure and xAPI | Student IDs missing for joint analysis
SNA graphs | Two graphs side by side, different analyses (eigenvector centrality measures, betweenness, PageRank, degree, in-degree and overall statistics) | Not useful or somehow useful; no value at this stage | Missing IDs; no context was given, so SNA graphs are disconnected; actor roles are not distinguishable
Fig. 6. Examples of visualisations generated during the second iteration: on top, interactions in the digital space; in the middle, interactions from the physical space together with field notes and logs (the black boxes on the plot describe additional information noted by the observer, e.g., the last comment reads: the teacher announces that those who left earlier will be graded afterwards; normally, in Observata this is visualised timestamped on the timeline); at the bottom, the number of observed actions per actor (the teacher is in dark pink), contextualised in the observed lesson structure. Link to the analysis and questions: http://bit.ly/MMLA5morelessons
Iteration 2. Step 3. Reflection on contextual constraints and the MMLA affordances.
To answer the project-level questions defined in Iteration 1 (see Table 1), we extracted the main constraints and affordances of each data analysis and chose metrics and indicators that were meaningful for the stakeholders (Table 2). Five more lessons were analysed, taking into account the lessons learnt from the previous iteration. As SNA was not regarded as useful, we omitted it this time. In some cases, together with the xAPI statements from observations, the logs from the LRS and the emerging lesson structure, we used observer field notes and teacher reflections.
User feedback: A semi-structured interview was carried out after providing participants with a report containing the analysis of the five lessons. The goal of the interview was threefold: to evaluate to what extent the MMLA solution helped them answer their project-level questions; to understand the value of combining different data sources and the added value of each data source; and, finally, to identify further needs in terms of data collection or analysis to understand patterns of use. The interview data is analysed and reported in the results section.
As in the first iteration, the participants highlighted the added value that having an LD could bring. However, in this second iteration, they also acknowledged that teachers did not always agree to document and share their LDs. Moreover, participants indicated the importance of having two types of contextual information that can be layered: the predefined LD and the observed lesson structure inferred from the lesson enactment. It was suggested to use dashboard capabilities for the sensemaking of data. Other data sources could help fill in missing information, for instance, videos that can later be coded and structured. This raises data privacy issues that are sometimes difficult to manage (as in the case of this particular project).
Iteration 2. Step 4. Customisation of an MMLA solution: Several ideas emerged to improve the MMLA solution. While separate datasets without a predefined LD are still informative enough to answer the project-level question, a predefined LD is necessary for richer analysis. Actual implementation patterns extracted through the observed lesson structure can further enrich the data and contextualise its analysis. It is desirable to include other kinds of data, among them qualitative data that could be further quantified as the MMLA solution develops, for instance, short videos for later annotation or post-editing of unstructured field notes. Since several qualitative and quantitative data sources are regarded as useful by the stakeholders, the solution will need MMLA dashboard development to enable further sense-making of the data.
d. Results and discussion
This section presents the results of the questionnaire and interview data analysis from both iterations. The qualitative feedback from the participants in both iterations is reported together and was analysed following the research questions of the paper: Table 3 (see Appendix 1) summarizes the findings and brings evidence from the questionnaires and semi-structured interviews in Iterations 1 and 2. Based on the main findings of the research, we interpret the results following the two main research questions:
RQ1: Which aspects of digital-trace based LA could benefit from the observations?
Following the method, the feedback received from the users led us to the design
ideas for the next version of the MMLA solution. Additionally, the lessons learnt also
helped the project managers to consider the constraints of the context and the af-
fordances of the MMLA solutions, guiding the design of future studies.
Structured Observations: According to the participants, the main benefit of the observations for the MMLA solution was structured observation data in the form of xAPI statements, which bring different dimensions to the data analysis.
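As an illustration, one observed classroom event could be encoded as an xAPI statement (actor, verb, object, timestamp). The IRIs, the anonymised account identifier and the lesson-phase context extension below are hypothetical sketches following the xAPI statement structure, not Observata's actual vocabulary.

```python
import json

# Hypothetical xAPI statement for one observed classroom event.
# All IRIs, IDs and extension keys are illustrative assumptions.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "teacher",  # role label, not a personal identifier
        "account": {"homePage": "https://example.org/school", "name": "T-01"},
    },
    "verb": {
        "id": "https://example.org/verbs/explained",
        "display": {"en-US": "explained"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/activities/digital-textbook-ch3",
        "definition": {"name": {"en-US": "Digital textbook, chapter 3"}},
    },
    "context": {
        "extensions": {
            # Observer-inferred lesson phase: one way to layer the
            # observed lesson structure onto individual events.
            "https://example.org/ext/lesson-phase": "independent work"
        }
    },
    "timestamp": "2020-02-12T10:14:00+02:00",
}

# Statements are exchanged as JSON, e.g. when sent to an LRS.
payload = json.dumps(statement)
```

Encoding both the digital logs and the human observations in this shared actor-verb-object form is what makes them comparable along the same dimensions, even when their semantics differ.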
Semantics: Participants noted that data from the two realms introduce different semantics: while it may be useful to see the same taxonomy in both datasets (xAPI statements in the logs and observations), it is not an absolute solution because these two data streams represent different semantics.
Inclusion of other qualitative data sources: According to the participants, aside from structured observation data that can easily be created by annotating learning events, multimodal analysis can also benefit from unstructured observations (field notes, observed lesson structure). While unstructured observations present more integration challenges than structured ones, they could be of great value to interpret the quantitative results as well as to triangulate and validate the findings. For instance, timestamped field notes, photos and videos may provide further qualitative context. Also, teacher reflections may be used to partly replace a missing predefined LD to understand teacher intentions. These can also be timestamped photos or videos that can be coded later. Using storytelling approaches to present quantitative and qualitative data could be a promising solution. In this case, the quantitative data analysis could help to contextualise what was happening when the qualitative evidence was gathered.
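As a sketch of how such timestamped qualitative evidence could sit next to the digital traces, each field note can be attached to the log event nearest in time. The data below is invented for illustration; real alignment would use the shared timestamps from Observata and the LRS.

```python
import bisect
from datetime import datetime

# Hypothetical data: digital log events (sorted by time) and one field note.
logs = [
    (datetime(2020, 2, 12, 10, 5), "student opened exercise"),
    (datetime(2020, 2, 12, 10, 14), "student submitted answer"),
    (datetime(2020, 2, 12, 10, 21), "teacher shared resource"),
]
notes = [
    (datetime(2020, 2, 12, 10, 15), "some students discuss task in pairs"),
]

log_times = [t for t, _ in logs]

def nearest_log(note_time):
    """Return the log entry closest in time to the given note timestamp."""
    i = bisect.bisect_left(log_times, note_time)
    # Compare the neighbours around the insertion point.
    candidates = logs[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda entry: abs(entry[0] - note_time))

# Each qualitative note inherits the quantitative context of its nearest event.
aligned = [(note, nearest_log(t)) for t, note in notes]
```

This nearest-in-time pairing is one simple way to let the quantitative stream contextualise the moment when the qualitative evidence was gathered, without forcing the two datasets into a single schema.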
Data analysis, sensemaking and multimodal dashboards: According to the participants, the data collection, analysis and sensemaking of data can be contextualised within the planned LD. The emergent, observed lesson structure can add another layer of contextual information. Codification (annotating interactions) gives context to the log data. Even if observations are useful for contextualisation, they do not replace the LD. Having both the original teacher design and the emerging one inferred from the observations would add value to the data analysis, enabling the comparison between plan and implementation, as well as detecting regulation decisions. As qualitative data was regarded as useful and important, some of it can be post-edited and structured, while other qualitative data (with different semantics) can also be visualised on the dashboards, where sensemaking can be aided through filtering.
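A minimal sketch of the kind of faceted filtering such a dashboard could offer over a mixed stream of quantitative and qualitative records; the record fields and values below are hypothetical, not the actual Observata data model.

```python
# Hypothetical mixed stream: log events, coded observations, field notes.
events = [
    {"source": "log", "actor_role": "student", "text": "opened exercise"},
    {"source": "observation", "actor_role": "teacher", "text": "gives feedback"},
    {"source": "field_note", "actor_role": None, "text": "noise in classroom"},
]

def filter_events(records, source=None, actor_role=None):
    """Keep records matching the selected facets; None means 'any'."""
    return [r for r in records
            if (source is None or r["source"] == source)
            and (actor_role is None or r["actor_role"] == actor_role)]

# A dashboard view focused on the teacher's actions across all sources.
teacher_view = filter_events(events, actor_role="teacher")
```

Filtering by source and role like this is what lets qualitative records (field notes, coded observations) be explored alongside the logs rather than in a separate tool.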
RQ2: What is the added value that observations offer to the user in terms of meaning,
context and quality?
Meaning and complementarity. According to the participants, observations add
value through incorporating additional data on actor roles, actions (verbs) and arte-
facts (objects): it is not possible to make sense of the data without putting logs and
structured observation datasets together. Only the combination of the two contrib-
utes to sensemaking. Data coming from the different spaces complement each other
and are only useful if put together. The different semantics of cross-space data also bring complementary information.
Context/theoretical grounding. According to the participants, the contextualisation
of digital data is the main value of classroom observations. This contextualisation can
happen through: unstructured observations (observed lesson structure), coded (in-
ter)actions aggregated through structured, semi-structured xAPI statements or un-
structured field notes later coded/edited and systematized. Participants stressed the
importance of theory-driven coding: theoretical (learning) constructs [32] can be in-
troduced through the pre-defined codes, aligning theory with data to enable confirma-
tory analysis.
Quality. According to the participants, most of the quality issues were related to the constraints posed by the actual research design, that is, an authentic, typical scenario. At the same time, they relate to the privacy issues mentioned by the stakeholders. Therefore, the actual data was puzzling, exploratory and incomplete. While it was possible to gather multimodal data from the digital and the physical space, a joint analysis was not possible in some cases (actors could not be identified across datasets) and not meaningful in others. Observations represent small data; nevertheless, they bring different semantics and context into the dataset, which is an important issue in LA.
Based on the feedback from the questionnaires and interviews, we have gathered insights about the value that classroom observations add to the data analysis. Regarding the value of observations, several dimensions were highlighted. First, context on the implicit lesson structure can come from unstructured observations, derived from the enactment of the lesson and inferred by the observer. This reinforces the need for a connection to the planned LD, which shall be made available through technical means. In this case, it would be advisable to further contextualise the data collection and analysis within the planned LD while not excluding, but complementing it with unplanned, implicit design decisions through observer-inferred patterns. Second, theoretical constructs can be introduced through the structured codification of observable learning events for richer data analysis. Third, the availability of information on different kinds of artefacts from physical settings enriches the digital data. Fourth, observations can provide more detailed information on actor roles and their actions in the real world. Fifth, at this stage, the two datasets were presented separately to look for the value of each one and to help define further requirements for the data analysis. The aim of alignment should not be complete integration, as the two datasets represent two different realms; rather, they should complement each other, gathering complementary insights, in this case, about the learning context. At a technological level, depending on the analysis or sensemaking aims and methods, the alignment between semantics may or may not be needed. Nevertheless, learner-level analysis can be accomplished by developing compatible coding schemes for MMLA observations that can introduce theory-based, confirmatory analysis.
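The first dimension, contextualising log events within the observer-inferred lesson structure, can be sketched as a simple timestamp-to-phase mapping. Phase names and times below are invented for illustration, not taken from the observed lessons.

```python
from datetime import datetime

# Hypothetical observed lesson structure: (start, end, phase) intervals
# inferred by the observer during enactment.
phases = [
    (datetime(2020, 2, 12, 10, 0), datetime(2020, 2, 12, 10, 10), "introduction"),
    (datetime(2020, 2, 12, 10, 10), datetime(2020, 2, 12, 10, 30), "independent work"),
    (datetime(2020, 2, 12, 10, 30), datetime(2020, 2, 12, 10, 45), "discussion"),
]

def phase_of(ts):
    """Return the observed lesson phase a timestamp falls into, if any."""
    for start, end, name in phases:
        if start <= ts < end:
            return name
    return "outside observed lesson"

# Each digital log event is labelled with its lesson phase, so counts and
# patterns can be read per phase instead of over an undifferentiated stream.
log_events = [(datetime(2020, 2, 12, 10, 14), "student submitted answer")]
labelled = [(event, phase_of(ts)) for ts, event in log_events]
```

The same mapping could later be run against the predefined LD instead of the inferred structure, enabling the plan-versus-enactment comparison discussed above.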
First of all, according to the participants, systematic or structured observations allow for quantitative analysis of data while still offering richer context derived through non-automated means. xAPI statements from observations can potentially be used for MMLA analysis. Results show that participants also saw value in qualitative observations, provided that they can later be structured and coded, or recoded to ensure reliability. Other qualitative data sources, such as teacher reflections, can provide additional contextual information where this context is missing: qualitative data validates and triangulates data gathered through automated means and contextualises it.
Additional findings: Going back to the suggested Context-aware MMLA Taxonomy, based on the results of the study, a balance is needed between user needs and data affordances, and the needs for contextualisation for analysis and sensemaking. Depending on these needs, data can be further structured; for instance, field notes and photos can be timestamped and coded later. Different data sources can be further included to enrich the evidence, validate and triangulate findings, or contextualise the data. Automated and human-mediated data bring different semantics and meaning into the datasets. Each level of the taxonomy can be used for different types of research designs [22]; for instance, the use of highly structured observations based on predefined coding can contribute to confirmatory research and the creation of a hypothesis space through the labelling of learning constructs within MMLA, as indicated by other researchers [32]. Overall, based on the feedback of the users in ideal, authentic or limited scenarios of data collection and analysis, the benefit of contextualisation for data analysis and sense-making is evident. However, taking a step further towards an ideal case, we can envision that structured data gathering can contribute to a three-level contextualisation of data through predefined design, observed lesson structure, and structured observations. Additionally, according to the participants, sense-making can be further supported by the introduction of multimodal dashboards that make data-source manipulation possible, where even qualitative information can be timestamped and visualised.
Overall, our findings indicate the importance of guided data collection and analysis [25] and the contextualisation of LA data [1] on different levels. At the same time, participants reported that the need for compliance with data privacy regulations is pushing the providers of educational technologies to anonymize digital traces by default. This design issue introduces an extra level of complexity, since it is not possible to identify users across datasets, which is essential for MMLA purposes.
According to the participants' views, CO can support different layers of contextualisation (collected with the help of Observata). The figure below (Fig. 7) sums up the contextualisation needs highlighted by the participants, supported by our approach and afforded by Observata, ranging from limited to ideal scenarios. Several levels of contextual information can be layered and obtained from, first, the predefined LD; second, the observed lesson structure; and third, systematic observations (MMLA, LA and CO within the LD; HMO within the LD and/or the inferred lesson structure; AO within structured observations). In ideal scenarios, all of them can be layered to augment the contextualisation efforts. An additional layer of contextualisation (Fig. 7, in blue) can come from other qualitative data which, while supported by Observata, goes beyond the scope of this research and its claims: such data can still be collected qualitatively (photos or field notes) and later structured using Observata's post-editing feature for learning events.
Fig. 7. Layered contextualisation levels supported and afforded by Observata
Reflecting on the methodological approach followed in the study, the co-design method [8] allowed us to take a closer look at the value of the datasets and to customize the MMLA solution iteratively, which was the direct aim of the study. Through iterative, exploratory approaches we have been able to evaluate and explore the challenges and opportunities of the MMLA solution. Even though involving participants across the different iterations and steps was tedious and time-consuming, it allowed us to better understand the needs of the participants, address the challenges they face while using MMLA solutions, and help them better understand the affordances that these solutions may bring into their practices. At the same time, their involvement in the data analysis in the context of the authentic scenario created new avenues for the design of the MMLA solution.
5 Conclusions and future research
In this paper, we sought to understand the feasibility and added value of contextualis-
ing the analysis of digital traces with classroom observations. To accomplish this aim,
we have presented a case study from an authentic, baseline scenario using data col-
lected from structured and unstructured observations, interaction logs, field notes and
teacher reflections. According to the participants' feedback, observations contribute contextual information for the analysis and sensemaking of digital traces. Case study results show that both systematic and unstructured classroom observations contribute to the contextualisation of the analysis of automatically-collected data (i.e., logs from the digital learning resources), which represents their main value. While the observations and the observed lesson structure can be useful to contextualise both datasets, this does not make the LD less valuable for higher-level analysis [12]. In the participants' view, the combination of both predefined and observed designs is an ideal scenario for more thorough reflections. Also, enabling actor identification, or at least differentiating roles across datasets, would make the analysis more meaningful. According to the participants, distinguishing between the different taxonomies (verbs) used in observations and digital data may be interesting due to the different semantics the digital and physical realms entail, but in some cases it might also be useful to align them.
As already acknowledged in the MMLA context-aware taxonomy, authentic stud-
ies, such as the one presented in this paper, pose multiple limitations in terms of the
data available and its quality. Also, it should be noted that the low number of partici-
pants involved in our case study prevents us from generalizing the results.
To bring authentic scenarios closer to the ideal case, in the future it would be recommended to include more systematically collected data. Also, for further contextualisation of the MMLA data for analysis, some methodological, technological and research needs are to be addressed. To reach those goals, the observation tool used in our study, Observata, will be further developed according to the findings of the study. In addition, aspects such as data reliability and validity, as well as data privacy issues, should be addressed in the future at both the technological and methodological level.
Acknowledgments. This study has been partially funded by Horizon 2020 Research
and Innovation Programme under Grant Agreement No. 731685 (Project CEITER)
and project Digiõpevaramu funded by the Estonian Ministry of Education.
References
1. Gašević, D., Dawson, S., Siemens, G.: Let's not forget: Learning analytics are about learning. TechTrends. 59, 64–71 (2014). https://doi.org/10.1007/s11528-014-0822-x.
2. Lockyer, L., Heathcote, E., Dawson, S.: Informing pedagogical action: Aligning learning analytics with learning design. Am. Behav. Sci. 57, 1439–1459 (2013).
3. Ochoa, X., Worsley, M.: Augmenting Learning Analytics with Multimodal Sensory Data. J. Learn. Anal. 3, 213–219 (2016).
4. Wragg, T.: An Introduction to Classroom Observation (Classic Edition). Routledge (2013).
5. Cohen, L., Manion, L., Morrison, K.: Research methods in education. Routledge (2013).
6. Alison Bryant, J., Liebeskind, K., Gestin, R.: Observational Methods. In: The International Encyclopedia of Communication Research Methods. pp. 1–10. John Wiley & Sons, Inc., Hoboken, NJ, USA (2017). https://doi.org/10.1002/9781118901731.iecrm0171.
7. Buckingham Shum, S., Ferguson, R., Martinez-Maldonado, R.: Human-Centred Learning Analytics. J. Learn. Anal. 6, 1–9 (2019). https://doi.org/10.18608/jla.2019.62.1.
8. Rodríguez-Triana, M.J., Prieto, L.P., Martínez-Monés, A., Asensio-Pérez, J.I., Dimitriadis, Y.: The teacher in the loop: Customizing multimodal learning analytics for blended learning. In: ACM International Conference Proceeding Series. pp. 417–426. ACM, New York, NY, USA (2018). https://doi.org/10.1145/3170358.3170364.
9. Knight, S., Buckingham Shum, S.: Theory and Learning Analytics. In: Lang, C., Siemens, G., Wise, A.F., Gašević, D. (eds.) Handbook of Learning Analytics. pp. 17–22. Society for Learning Analytics Research (SoLAR), Alberta, Canada (2017). https://doi.org/10.18608/hla17.001.
10. Rodríguez-Triana, M.J., Martínez-Monés, A., Asensio-Pérez, J.I., Dimitriadis, Y.: Towards a script-aware monitoring process of computer-supported collaborative learning scenarios. Int. J. Technol. Enhanc. Learn. 5, 151–167 (2013). https://doi.org/10.1504/IJTEL.2013.059082.
11. Gašević, D., Dawson, S., Siemens, G.: Let's not forget: Learning analytics are about learning. TechTrends. 59, 64–71 (2015). https://doi.org/10.1007/s11528-014-0822-x.
12. Eradze, M., Rodríguez-Triana, M.J., Laanpere, M.: Semantically Annotated Lesson Observation Data in Learning Analytics Datasets: a Reference Model. Interact. Des. Archit. J. 33, 75–91 (2017).
13. Asensio-Pérez, J.I., Dimitriadis, Y., Pozzi, F., Hernández-Leo, D., Prieto, L.P., Persico, D., Villagrá-Sobrino, S.L.: Towards teaching as design: Exploring the interplay between full-lifecycle learning design tooling and Teacher Professional Development. Comput. Educ. (2017). https://doi.org/10.1016/j.compedu.2017.06.011.
14. Dagnino, F.M., Dimitriadis, Y.A., Pozzi, F., Asensio-Pérez, J.I., Rubia-Avi, B.: Exploring teachers' needs and the existing barriers to the adoption of Learning Design methods and tools: A literature survey. Br. J. Educ. Technol. 49, 998–1013 (2018). https://doi.org/10.1111/bjet.12695.
15. Mangaroska, K., Giannakos, M.N.: Learning analytics for learning design: A systematic literature review of analytics-driven design to enhance learning. IEEE Trans. Learn. Technol. 11 (2018). https://doi.org/10.1109/TLT.2018.2868673.
16. Hernández-Leo, D., Asensio-Pérez, J.I., Derntl, M., Pozzi, F., Chacón, J., Prieto, L.P., Persico, D.: An Integrated Environment for Learning Design. Front. ICT. 5 (2018). https://doi.org/10.3389/fict.2018.00009.
17. Marshall, C., Rossman, G.B.: Designing qualitative research. Sage Publications (2014).
18. Moses, S.: Language Teaching Awareness. J. English Linguist. 29, 285–288 (2001). https://doi.org/10.1177/00754240122005396.
19. Navarro Sada, A., Maldonado, A.: Research Methods in Education. Sixth Edition - by Louis Cohen, Lawrence Manion and Keith Morrison (2007). https://doi.org/10.1111/j.1467-8527.2007.00388_4.x.
20. Gruba, P., Cárdenas-Claros, M.S., Suvorov, R., Rick, K.: Blended Language Program Evaluation. Palgrave Macmillan UK, London (2016). https://doi.org/10.1057/9781137514370_3.
21. Bakeman, R., Gottman, J.M.: Observing interaction (1997). https://doi.org/10.1017/CBO9780511527685.
22. Eradze, M., Rodríguez-Triana, M.J., Laanpere, M.: A Conversation between Learning Design and Classroom Observations: A Systematic Literature Review. Educ. Sci. 9, 91 (2019).
23. Worsley, M., Abrahamson, D., Blikstein, P., Grover, S., Schneider, B., Tissenbaum, M.: Situating multimodal learning analytics. In: 12th International Conference of the Learning Sciences: Transforming Learning, Empowering Learners, ICLS 2016. pp. 1346–1349. International Society of the Learning Sciences (ISLS) (2016).
24. Di Mitri, D., Schneider, J., Klemke, R., Specht, M., Drachsler, H.: Read Between the Lines: An Annotation Tool for Multimodal Data for Learning. In: Proceedings of the 9th International Conference on Learning Analytics & Knowledge. pp. 51–60. ACM (2019).
25. Rodríguez-Triana, M.J., Martínez-Monés, A., Asensio-Pérez, J.I., Dimitriadis, Y.: Scripting and monitoring meet each other: Aligning learning analytics and learning design to support teachers in orchestrating CSCL situations. Br. J. Educ. Technol. 46, 330–343 (2015). https://doi.org/10.1111/bjet.12198.
26. Howard, S.K., Yang, J., Ma, J., Ritz, C., Zhao, J., Wynne, K.: Using Data Mining and Machine Learning Approaches to Observe Technology-Enhanced Learning. In: 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). pp. 788–793. IEEE (2018). https://doi.org/10.1109/TALE.2018.8615443.
27. James, A., Kashyap, M., Chua, Y.H.V., Maszczyk, T., Nunez, A.M., Bull, R., Dauwels, J.: Inferring the Climate in Classrooms from Audio and Video Recordings: A Machine Learning Approach. In: Proceedings of 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering, TALE 2018. pp. 983–988. IEEE (2019). https://doi.org/10.1109/TALE.2018.8615327.
28. Kahng, S., Iwata, B.A.: Computerized systems for collecting real-time observational data. J. Appl. Behav. Anal. 31, 253–261 (1998).
29. Ocumpaugh, J., Baker, R.S., Rodrigo, M.M., Salvi, A., van Velsen, M., Aghababyan, A., Martin, T.: HART. In: Proceedings of the 33rd Annual International Conference on the Design of Communication - SIGDOC '15. pp. 1–6. ACM Press, New York, New York, USA (2015). https://doi.org/10.1145/2775441.2775480.
30. Klonek, F., Hay, G., Parker, S.: The Big Data of Social Dynamics at Work: A Technology-based Application. Acad. Manag. Glob. Proc. 185 (2018).
31. Böck, R., Siegert, I., Haase, M., Lange, J., Wendemuth, A.: ikannotate - a tool for labelling, transcription, and annotation of emotionally coloured speech. In: International Conference on Affective Computing and Intelligent Interaction. pp. 25–34. Springer (2011).
32. Di Mitri, D., Schneider, J., Specht, M., Drachsler, H.: From signals to knowledge: A conceptual model for multimodal learning analytics. J. Comput. Assist. Learn. 34, 338–349 (2018). https://doi.org/10.1111/jcal.12288.
33. Rodríguez-Medina, J., Rodríguez-Triana, M.J., Eradze, M., García-Sastre, S.: Observational Scaffolding for Learning Analytics: A Methodological Proposal. In: Pammer-Schindler, V., Pérez-Sanagustín, M., Drachsler, H., Elferink, R., Scheffel, M. (eds.) Lifelong Technology-Enhanced Learning. EC-TEL 2018. Lecture Notes in Computer Science, vol. 11082. pp. 617–621. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-319-98572-5_58.
34. Eradze, M., Rodriguez Triana, M.J., Laanpere, M.: Context-aware Multimodal Learning Analytics Taxonomy. In: Companion Proceedings of the 10th International Conference on Learning Analytics & Knowledge (LAK20), CEUR Workshop Proceedings (2020).
35. Shankar, S.K., Prieto, L.P., Rodríguez-Triana, M.J., Ruiz-Calleja, A.: A Review of Multimodal Learning Analytics Architectures. In: 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT). pp. 212–214. IEEE (2018).
36. Ocumpaugh, J., Baker, R.S., Rodrigo, M.M., Salvi, A., van Velsen, M., Aghababyan, A., Martin, T.: HART: The human affect recording tool. In: Proceedings of the 33rd Annual International Conference on the Design of Communication. p. 24. ACM, New York, New York, USA (2015). https://doi.org/10.1145/2775441.2775480.
37. Leinonen, T., Toikkanen, T., Silfvast, K.: Software as hypothesis: research-based design methodology. In: Proceedings of the Tenth Anniversary Conference on Participatory Design 2008. pp. 61–70. Indiana University (2008).
38. Eradze, M., Pata, K., Laanpere, M.: Analyzing learning flows in digital learning ecosystems. In: Cao, Y., Väljataga, T., Tang, J., Leung, H., Laanpere, M. (eds.) New Horizons in Web Based Learning. ICWL 2014. Lecture Notes in Computer Science, vol. 8699. pp. 63–72. Springer, Cham (2015). https://doi.org/10.1007/978-3-662-46315-4_7.
39. Eradze, M., Väljataga, T., Laanpere, M.: Observing the use of e-textbooks in the classroom: Towards "offline" learning analytics. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). pp. 254–263 (2014). https://doi.org/10.1007/978-3-319-13296-9_28.
40. Eradze, M., Rodríguez Triana, M.J., Laanpere, M.: How to aggregate lesson observation data into learning analytics datasets? In: Joint Proceedings of the 6th Multimodal Learning Analytics (MMLA) Workshop and the 2nd Cross-LAK Workshop co-located with the 7th International Learning Analytics and Knowledge Conference (LAK 2017). CEUR Workshop Proceedings, vol. 1828. pp. 74–81. CEUR (2017).
41. Eradze, M., Laanpere, M.: Lesson observation data in learning analytics datasets: Observata. In: Lavoué, É., Drachsler, H., Verbert, K., Broisin, J., Pérez-Sanagustín, M. (eds.) Data Driven Approaches in Digital Education. EC-TEL 2017. pp. 504–508. Springer (2017). https://doi.org/10.1007/978-3-319-66610-5_50.
42. Buckingham Shum, S., Ferguson, R., Martinez-Maldonado, R.: Human-Centred Learning Analytics. J. Learn. Anal. 6, 1–9 (2019). https://doi.org/10.18608/jla.2019.62.1.
43. MacDonald, B., Walker, R.: Case-study and the social philosophy of educational research. Cambridge J. Educ. 5, 2–11 (1975).
44. Merrill, M.D.: First principles of instruction. Educ. Technol. Res. Dev. 50, 43–59 (2006). https://doi.org/10.1007/bf02505024.
45. Pata, K., Beliaev, A., Rõbtšenkov, R., Laanpere, M.: Affordances of the LePlanner for Sharing Digitally Enhanced Learning Scenarios. In: Advanced Learning Technologies (ICALT), 2017 IEEE 17th International Conference on. pp. 8–12. IEEE (2017).
Appendix 1
Table 3: Summary of findings mapped onto the evidence from the two iterations of co-design and data analysis. The statements included in the evidence summarize its key messages; italics indicate quotes from participants.
Qualitative evidence (2 respondents, 2 iterations)
Patterns of usage by using two-datasets:
“Yes, more or less I am able to do it.”
“Yes, Patterns seen on didactical use and some unex-
pected patterns can be definitely seen and guessed from
this data”
Extracting knowledge only based on one data-source:
“No, certainly not.”
“No, definitely no”
“There is definitely an added value here.”
Two data sets complementing each other:
“Observations help me also to see what activities were happening at the same time in the classroom”.
“Only observations plot made me think about what happened during the minutes 14-19, but logs data made me understand that it was independent work probably with DÕV… probably it was teacher-centred activities”.
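The alignment behind this quote (reading log events against observer-coded lesson intervals such as minutes 14-19) can be sketched as follows; the activity labels, actors and timings are invented for illustration and are not taken from the study's dataset:

```python
# Hypothetical observer-coded intervals (in lesson minutes) and digital log events.
observations = [
    {"activity": "teacher presentation", "start": 0, "end": 14},
    {"activity": "independent work", "start": 14, "end": 19},
]
log_events = [
    {"actor": "student-03", "verb": "opened", "object": "exercise-1", "minute": 15},
    {"actor": "student-07", "verb": "answered", "object": "quiz-2", "minute": 16},
    {"actor": "student-03", "verb": "answered", "object": "quiz-2", "minute": 18},
]

def contextualise(events, intervals):
    """Attach the observed classroom activity to each digital log event."""
    enriched = []
    for event in events:
        for interval in intervals:
            if interval["start"] <= event["minute"] < interval["end"]:
                enriched.append({**event, "context": interval["activity"]})
                break
        else:
            # No observation covers this minute: keep the event, flag the gap.
            enriched.append({**event, "context": None})
    return enriched

for row in contextualise(log_events, observations):
    print(row["minute"], row["verb"], row["context"])
```

Joining the two streams on a common timeline is what lets a burst of log activity be read as, for example, independent work rather than an unexplained spike.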
It can be used for exploratory and confirmatory purposes:
“Only when I see both together. With only one, there is no question even raised.” “Visual cues that raise more questions, questioning each data set”. “If the questions were asked before then we would have theory-based coding and it would have been more confirmatory”.
Contextualisation and analysis based on observer-inferred learning activities:
It is useful to “see interactions per actor in different phases of a lesson (learning activities that have been coded by an observer)”.
“For me, it is not important if the homeworks were checked, but rather how it was checked, did it support students’ SRL, did they take some responsibility in the process”.
Predefined LD and observed lesson structure:
“It gives two layers of contextual information - planned design vs actual, enacted design, not only in terms of planned vs real duration but in terms of implicit vs explicit design, emerging design decisions etc. This should be fed back to the lesson scenario digital representation to understand the patterns of actual enactment”. “LD creates the loop to actual activities and implementation, and learner actions answer to the why dimension”.
Coded actions (observations):
Observed and coded (inter)actions represent valuable information explaining digital interactions: “physical interaction data gives context to the digital interactions, without this context 450 digital interactions data have no value”. According to the participants, “observations in physical space enhance the context of digital interactions”.
Connection to theory:
“Observations allow for analysis of social negotiation of meaning in the classroom and intentionality behind pedagogical decisions of the teacher while online (automatically harvested) traces only a fact of interaction.”
“While it is important to link activities with lesson goals/tasks, their duration and curriculum objectives, sometimes it is useful to link them with some theoretical constructs (e.g., communication acts or taxonomy of objectives/adoption/acceptance), aligning learning theories with data”.
Qualitative, unstructured data:
“It [unstructured observations, field notes] enriches the context remarkably, I understand better some levels of interactions.”
“Field notes in our case contain spatial information (potentially can contain notes on discipline), photos also help, they have a timestamp, so they can help you make sense in case of missing information”.
Structured observations are preferable:
“Unstructured observations can be used for emerging patterns, to post-edit it and code them to make them structured.”
“Data can come as unstructured and then coded and structured in xAPI statements.”
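The idea of coding an observation into an xAPI statement can be illustrated with a minimal sketch. The verb and extension IRIs, account details and activity identifiers below are hypothetical placeholders for illustration, not the Observata vocabulary or the study's actual data:

```python
import json
from datetime import datetime, timezone

# A human-coded classroom observation expressed as an xAPI statement.
# All IRIs and identifiers here are invented placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "account": {"homePage": "https://school.example.org", "name": "teacher-01"},
    },
    "verb": {
        "id": "https://example.org/xapi/verbs/explained",
        "display": {"en": "explained"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/lessons/maths-5b/homework-review",
        "definition": {"name": {"en": "homework review"}},
    },
    "context": {
        "extensions": {
            # Observer-coded lesson phase attached as a context extension
            "https://example.org/xapi/ext/lesson-phase": "whole-class discussion"
        }
    },
    "timestamp": datetime(2019, 10, 3, 9, 17, tzinfo=timezone.utc).isoformat(),
}

print(json.dumps(statement, indent=2))
```

Coding manual observations into the same statement format as the system logs is what makes the two trace sets queryable side by side.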
Other data sources such as teacher reflections or field notes (unstructured observations) add more to the context and validate and triangulate the data:
“It gives the final touch what happened in the classroom and why”.
“Two datasets together - logs and observations. It helps you to raise questions but does not validate. Validated by reflections, or field notes. Triangulation of data”.
Need for more data sources:
“Easily captured data, for instance, noise to give more contextual information”.
“Video that may be related to legal issues, can be solved by recording only audio. Automatically generated events on interactions in the classroom media use. Completeness of data from online settings is necessary”.
“Photos and videos to be later coded and integrated”.
Sensemaking and analysis level:
“LD and data in a way I could understand if it was more student-centred or teacher-led”.
“Dashboards with different data streams customizable by the user for sensemaking.”
Data integration and semantics:
“It was very interesting to see this figure where xAPI verbs and the Observata taxonomy were demonstrated together - seeing them based on one lesson would be extremely interesting”.
“It is obvious that two realms bring on different semantics; in some cases, it may be useful to see the same taxonomy in both datasets”; in some cases, “it would be confusing”.
Data can be puzzling and incomplete:
“The amount of coinciding physical vs digital interactions is puzzling”. “I would expect the digital interactions increase when physical interactions decrease (teacher stops talking), but according to this graph, this is not always the case”.
“The records of actions in physical space are clearly incomplete due to time constraints to annotate the within-group and between-group activities”.
Learner identification is important in enabling learner-level analysis:
“Usefulness increases significantly when learners are identified across both physical and digital spaces”.
“The quantity and variety of traces are significantly smaller in physical space”.
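Identification across both spaces amounts to joining the digital and physical trace sets on a shared pseudonymous learner ID. A minimal sketch with invented IDs and actions (in practice such a mapping is subject to consent and anonymisation requirements):

```python
from collections import defaultdict

# Illustrative traces keyed by an invented pseudonymous learner ID.
digital = [
    ("st-01", "submitted quiz"),
    ("st-02", "opened video"),
    ("st-01", "posted comment"),
]
physical = [
    ("st-01", "asked teacher a question"),
    ("st-02", "worked in group A"),
]

def per_learner(digital_traces, physical_traces):
    """Merge digital and physical traces into one profile per learner."""
    merged = defaultdict(lambda: {"digital": [], "physical": []})
    for learner, action in digital_traces:
        merged[learner]["digital"].append(action)
    for learner, action in physical_traces:
        merged[learner]["physical"].append(action)
    return dict(merged)

profile = per_learner(digital, physical)
print(profile["st-01"])
```

As the participants note, the physical side of each profile will typically hold far fewer entries than the digital side, since manual annotation cannot match the volume of automatic logging.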