Contextualising Learning Analytics with Classroom
Observations: a Case Study
Maka Eradze1,2 [0000-0003-0723-1955], María Jesús Rodríguez-Triana1 [0000-0001-8639-1257],
Nikola Milikic3 [0000-0002-3976-5647], Mart Laanpere1 [0000-0002-9853-9965]
and Kairit Tammets1 [0000-0003-2065-6552]
1 School of Digital Technologies, Tallinn University, Tallinn 10120, Estonia
2 Department of Education and Human Sciences, University of Modena and Reggio Emilia,
Italy
3 University of Belgrade, Studentski trg 1, Belgrade, Serbia
maka@tlu.ee
Abstract. Educational processes take place in physical and digital spaces. To
analyse educational processes, Learning Analytics (LA) enable data collection
from the digital learning context. At the same time, to gain more insights, LA
data can be complemented with data coming from physical spaces, enabling
Multimodal Learning Analytics (MMLA). To interpret these data, theoretical
grounding or contextual information is needed. Learning designs (LDs) can be
used for contextualisation; however, in authentic scenarios the availability of
machine-readable LDs is scarce. We argue that Classroom Observations (COs),
traditionally used to understand educational processes taking place in the
physical space, can provide the missing context and complement the data from
co-located classrooms. This paper reports on a co-design case study from an
authentic scenario that used CO to make sense of the digital traces. We posit
that the development of MMLA approaches can benefit from co-design
methodologies; through the involvement of the end-users (project managers) in
the loop, we illustrate how these data sources can be systematically integrated
and analysed to better understand the use of digital resources. The results
indicate that CO can drive the sense-making of LA data where a predefined LD
is not available. Furthermore, CO can support layered contextualisation
depending on the research design, rigour, and systematic documentation and
data collection efforts. Also, co-designing the MMLA solution with the
end-users proved to be a useful approach.
Keywords: Classroom Observations, Learning Analytics, Multimodal Learning
Analytics, Blended Learning, Co-located Classrooms, Contextualisation,
Learning Design
1 Introduction
Teaching and learning processes increasingly take place in blended learning settings,
in both physical and digital spaces. While Learning Analytics (LA) solutions
offer automated means to collect and analyse digital traces, they only provide a partial
view of the whole picture. To cover this gap, the subfield of Multimodal Learning
Analytics (MMLA) integrates evidence from the physical spaces using other automated
means such as sensors, EEG devices, eye tracking, etc. Nevertheless, to make sense of
those datasets, pedagogical grounding and/or contextual information may still be
needed [1]. Researchers suggest using the learning design (LD) to contextualise the
analysis [2]. However, practitioners do not always produce digital versions of the
scripts or LDs that can be automatically interpreted, due to technological or LD
adoption challenges [3]. Alternatively, classroom observations have been used in
authentic scenarios to understand educational practices taking place in the physical
space, providing additional and highly contextual information alongside other data
sources [4][5][6]. Moreover, the complex process of embedding innovation in
authentic contexts has been linked to challenges related to human factors [7];
co-design methodologies that involve the user in the development of LA solutions are
one way to respond to such adoption challenges [8].
This paper reports on a case study in which researchers and end-users co-designed
an MMLA solution where classroom observations were used in combination with
digital traces to better understand the adoption of digital learning resources in
authentic learning scenarios. We argue that, in co-located classrooms, systematic CO
can help to understand the context in which the digital traces were generated in
authentic, real-life scenarios. Moreover, a co-design methodology can help address
the adoption issues referred to in previous research, by co-designing the MMLA
solution with the end-users.
2 Making Sense of Learning Analytics: context and design-aware observations
LA is a rapidly developing field of research and practice that seeks to analyse learning
processes and their context to optimize, support, challenge and reshape educational
practices [9]. Inherently, it focuses mainly on data collected through digital means,
providing a strategic way to understand how digital tools are used. However, in
blended learning, without knowing the context where the digital artefacts were used,
it is sometimes difficult to make sense of the available data [2]. To contribute to LA
sense-making, different solutions have been proposed in the literature. When the
learning theories or the pedagogical approach are known, some authors have
suggested adopting theory-driven approaches to obtain meaningful analytics [10, 11].
However, this does not guarantee that the interpretation of the data fits the reality of
the learning context.
Other researchers have proposed that the use of LDs can contribute to the
contextualisation of the data analysis [2][12]. While the benefits of using the LD to
guide the data analyses have been reported by many authors, access to such a design
represents one of the main challenges [13]. Frequently, due to time constraints,
practitioners may not even document their lesson plans [14]. In other cases, the LD
may be collected in a format that is not automatically interpretable (e.g., using
hand-written diagrams, schemes, or lists of steps). In the optimal but less frequent
scenario [2, 15], the practitioners may have registered their designs in an authoring
tool. However, even in this case, interoperability with the tool is not guaranteed since
there is no single data format to represent the LD [16].
A different method used to understand learning processes or situations is classroom
observation [17]. While some data collection methods (such as surveys or interviews)
target participant views, classroom observations can provide a non-judgmental
description of learning events [18]. CO can gather data on individual behaviours,
interactions, or the physical setting by watching behaviour, events and artefacts, or by
noting physical characteristics [17]. Observation types vary on a continuum from
unstructured, through semi-structured, to structured (systematic): broadly,
unstructured observations produce qualitative data and structured observations
quantitative data [19]. Some authors argue that CO benefits from qualitative and
unstructured data gathering [17]; others advise against it since it may result in big
volumes of unstructured data [20]. In contrast, while reducing expressivity,
systematic (structured) observations allow for more efficient analysis and data
processing [21]. Therefore, systematic observations are especially suitable for
combination with digital traces, each enriching the other to understand learning
processes and contexts with the help of multimodal learning analytics [22].
Traditional classroom observations require human inference and are highly
contextual; human-mediated labelling is often used in MMLA to relate raw data to
more abstract constructs [23][24]. Observation data has been integrated with LA for
triangulation purposes [25], for observing technology-enhanced learning [26], and for
inferring meaningful learning interaction data through annotations of direct
observations [27]; video annotation has also been suggested to triangulate multimodal
datasets, extract the learning context, and segment data into time intervals [24].
Computer-assisted observation can support the observation process by enforcing
specific coding schemes and preventing missing data, speeding up the process [28]
and enhancing the validity and reliability of the data [29]. Computer-assisted
systematic observation tools have been suggested for recording interactions to study
social dynamics at work [30], to annotate emotions from audio and video for
multimodal analysis [31], and to study student emotion and behaviour [29]. Most of
the abovementioned tools are based on specific coding protocols, specific data
dimensions (for instance, emotions), or specific theories (social dynamics); they offer
little flexibility for developing one's own coding schemes, may not cater to different
research needs, cannot be guided by an LD, and/or may not be useful for the
contextualisation of data analysis.
Some authors [32] classify data according to whether its collection and interpretation
require human involvement or not. While digital traces can easily be collected
through automatic means, higher-level interactions taking place in the physical space
may be more challenging to detect and record in a computational format. Thus,
observers can contribute to sense-making, especially when the data comes totally or
partially from physical spaces [33].
Considering the aforementioned information, and based on the lessons learned from
previous studies [12][22], we have proposed the Context-aware Multimodal Learning
Analytics Taxonomy (Fig. 1) [34]. The taxonomy classifies different research designs
depending on how systematic the documentation of the learning design and the data
collection have been:
Ideal - Systematic documentation and data collection: In the most desirable case,
the learning design (including actors, roles, resources, activities, timeline, and
learning objectives) is set up-front and documented in an authoring tool (e.g.,
LePlanner1 or WebCollage2). Then, during the enactment, logs are collected
automatically from the digital space and systematic observations from the physical
one; in parallel, the lesson structure is also inferred through the observations. To
ensure interoperability, actors and objects need to be identifiable (across the learning
design, logs and observations) and timestamps for each event need to be registered
[35] (see the aggregation sketch after the taxonomy descriptions below). Once the
data is aggregated in a multimodal dataset, further analysis can be executed.

1 https://leplanner.ee
2 https://www.gsic.uva.es/webcollage/
Fig. 1. Context-aware MMLA taxonomy
Authentic (baseline) - Non-systematic documentation but systematic data collection:
We regard this level as a compromise: limited by the constraints of authentic settings,
but still rich in terms of data. Here, the predefined learning design cannot be
automatically used to guide the analysis (either because of its format or because it is
not available). However, the timestamped lesson structure is inferred by the observer.
Moreover, the actors are not identifiable across observations and digital traces.
Nevertheless, both structured observations and logs are systematically gathered and
collected in the Learning Record Store using a common format (e.g., xAPI). These
conditions enable the application of contextualised analysis at a more baseline level,
using multimodal analytics.
Limited - Non-systematic documentation or data collection: Data collection happens
non-systematically. As in the previous case, no information about the learning design
is available (i.e., the actors are not known). In terms of the design of the data
collection, the protocol with its corresponding codes may not be predefined, and
semi-structured (non-systematic) observations are used. Thus, even if logs are
systematically gathered, the lack of systematisation of the observations hinders the
application of multimodal data analysis. Although this is not an advisable scenario,
logs and observations can be analysed independently and still provide an overview of
what happened in the physical and digital planes. Besides, even if the observations
are done systematically, if the vocabulary (actors, objects and actions) is not agreed
across datasets, then the potential of the multimodal analysis could be limited.
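To make the interoperability requirement of the ideal scenario concrete, the following minimal Python sketch (our illustration, not part of the project's tooling; the statement field names are assumptions) normalises log and observation statements that share actor and object identifiers into one timestamp-ordered multimodal dataset:

```python
# Minimal sketch of the "ideal" aggregation step: both streams use shared
# actor/object identifiers and registered timestamps, so they can be merged.
# The field names below are assumptions, not Observata's or an LRS's schema.
from datetime import datetime

def to_event(statement, source):
    """Normalise an xAPI-like statement into a common event record."""
    return {
        "timestamp": datetime.fromisoformat(statement["timestamp"]),
        "actor": statement["actor"]["name"],        # shared ID across datasets
        "verb": statement["verb"]["display"]["en-US"],
        "object": statement["object"]["id"],
        "source": source,                           # "log" or "observation"
    }

def build_multimodal_dataset(logs, observations):
    """Merge both streams into one timestamp-ordered multimodal dataset."""
    events = [to_event(s, "log") for s in logs]
    events += [to_event(s, "observation") for s in observations]
    return sorted(events, key=lambda e: e["timestamp"])
```

Once the records share identifiers and timestamps in this way, activity-level segmentation and joint analysis become straightforward; without shared identifiers (the authentic and limited cases), only the independent analysis of each stream remains possible.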
According to some authors, in many fields the design of the data collection tools is
not discussed; this is especially true in the field of observations [36]. Bearing in mind
the constraints that LD-aware analysis may entail, we hypothesize that focusing on
the baseline scenario will help us to study and better understand authentic scenarios
in non-experimental settings, without ad-hoc tools, where such innovations will most
probably be applied. We argue that developing such innovations through the
involvement of the "user in the loop" and a research-based design process is
important. In the following sections, through a case study involving a participatory
approach, we illustrate the feasibility of using observations to contextualise the data
analysis in an authentic scenario, involving the users in the analysis and interpretation
of the data. We argue that, by providing the alternative of using observations when
the design is not available, more authentic scenarios will benefit from contextualised
MMLA solutions. Moreover, through the suggested user involvement in authentic
settings, we extract recommendations for the future development of MMLA solutions.
3 Research methodology and research questions
The overarching methodology of this research is a research-based design process that
relies on the co-design of innovation through participatory approaches and stems from
design-based research [37]. The stages of the research are as follows: contextual
inquiry, participatory design, product design, and production of a software prototype
as a hypothesis. These stages are not strictly separated, and the methodology suggests
iteratively alternating between them. Three stages were covered in previous works:
contextual inquiry, participatory design, and product design [12, 38–41]. This phase
partly goes back to contextual inquiry and product design while also presenting the
software prototype as a hypothesis.
The main goal of this research is to better understand how MMLA can benefit from
classroom observations and what value observations may have for the sense-making
of digital traces gathered from an authentic context across physical and digital spaces.
Therefore, the main research questions addressed in the study are:
RQ1: Which aspects of digital-trace-based LA could benefit from observations?
RQ2: What is the added value that observations offer to the user in terms of
meaning, context and quality?
The development and adoption of MMLA solutions that can be used in real-life
situations is a highly complex process, and human factors are to be taken into account
[42]. To explore the feasibility of using observations for the contextualisation of data
analysis in authentic settings, as well as to gain a deeper understanding of
sense-making processes and alleviate adoption issues, we employ the case study
methodology "to examine the instance in action" [43] by progressively involving
users in a co-design process. To reach this goal we followed a specifically developed
method for
the design of MMLA solutions that entails involving the end-users in the loop [8].
This method defines four steps for the co-design of MMLA solutions: a)
understanding the MMLA solution; b) definition of the questions to be asked by the
MMLA solution; c) reflection on the contextual constraints and the MMLA
affordances; d) refinement of the scenario and customisation of the MMLA solution.
Two project managers were involved in the co-design and evaluation of an
MMLA solution. The study is framed within the Digiõpevaramu3 project, whose
main goal was to better understand how digital learning resources were used in the
classroom. To achieve this goal, observations and logs from five lessons were
analysed, also involving visualisation techniques. The study spanned two iterations.
The first iteration was mainly exploratory: focusing on a single lesson, exploratory
data analysis was carried out to identify indicators and visualisations that could be of
interest to the project managers. Based on the lessons learnt, in the second iteration,
the analysis of all five lessons was presented to the project managers to gain further
insights about the customisation of the MMLA solution. During this process,
mediated through data analysis, semi-structured questionnaires and interviews (one
interview per iteration) helped us gather feedback from the users on the further
customisation of the MMLA solution. Questionnaire and interview data were
analysed with the content analysis method and are presented in section 4.4.
4 Case study
4.1 Context of the study
The study was conducted within the project Digiõpevaramu. Task-based [44] digital
materials were co-developed by teachers and university experts, and 6000 digital
learning resources were made available through an Estonian national-level
aggregator. Teachers could re-use the resources and mix different tasks into a
collection to be used in the classroom. The materials were piloted in spring 2018 with
50 teachers and 1200 students from different types of Estonian secondary schools.
While the project collected logs about the usage of the digital materials, this
information was insufficient to understand how those materials were integrated into
teaching practice. Therefore, observers attended several lessons to collect evidence
about classroom practice.
The case study involved the two managers of the project, who wanted to understand
how the digital materials were used in the pilots. The participants of the study
designed the observation protocol, which was used in the different pilots. This paper
focuses on the iterative, exploratory data analysis of 1+5 lessons from these
observations. After the analysis of one specific lesson, we analysed five more lessons
with the involvement of the stakeholders, introducing different types of data into the
dataset.
3 https://vara.e-koolikott.ee/
4.2 Observational Data Collection Instrument - Observata
A classroom observation app, Observata (https://observata.leplanner.ee) [41], was
used to design and systematically observe the lessons where the digital resources were
used. Apart from supporting unstructured observations, this tool enables collecting
data through systematic observations based on learning interactions (the learning
event is the unit of analysis). While the tool enables connection with a predefined LD
(automatically imported from LePlanner [45]), this is not compulsory. The tool also
allows for inferring learning activities (the emerging plan/observed lesson structure)
from the lesson implementation, and for collecting field notes (unstructured
observations) and photos.
Fig. 2. Observata screens (from left to right): observation view to collect data in xAPI format,
data visualised on the timeline, data visualised on the dashboards.
To aid the observation, the tool enables the user to define the foci of interest, subjects
and objects up-front, speeding up the systematic observations. Observations are
modelled as xAPI statements. xAPI is a specification that enables the collection of
digital traces in the form of statements with a subject, verb, object structure that is
similar to an English sentence4 (see Fig. 2, left). Data can be stored and downloaded,
but also visualised on the timeline in xAPI format right after data collection (middle),
while analytics based on the structured observations are provided on a dashboard
(right). Aside from this, Observata allows for an open coding protocol while still
enabling systematic data collection.

4 https://experienceapi.com/overview
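For illustration only, an observed interaction coded in this subject-verb-object style could be represented as the following Python dictionary; all names, IRIs and the timestamp are invented for the example, and the exact fields Observata emits may differ:

```python
# Hypothetical example of a single observation as an xAPI-style statement:
# "student-07 asked the teacher". All identifiers below are invented.
observation_statement = {
    "actor": {"objectType": "Agent", "name": "student-07"},
    "verb": {
        "id": "http://example.org/verbs/asked",
        "display": {"en-US": "asked"},
    },
    "object": {
        "id": "http://example.org/actors/teacher",
        "definition": {"name": {"en-US": "teacher"}},
    },
    "timestamp": "2018-05-14T10:12:33+03:00",
}
```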
4.3 Process: Involving users in the design of MMLA solutions
To better understand the added value of combining observations and digital traces to
contextualise the analysis at an early stage, we followed a method to progressively
involve end-users in the design of MMLA solutions [8]. While this process has only
four steps (a. understanding the MMLA solution; b. definition of the questions to be
answered by the MMLA solution; c. reflection on contextual constraints and
affordances; d. refinement of the scenario and customisation of the MMLA solution),
we added an extra iteration of the last two steps. This method allowed us to iteratively
analyse the data and co-design the MMLA solution, identifying indicators and
visualisations that better fit the stakeholders' needs.
In the first iteration, we analysed a history lesson that took place in May 2018,
lasting 40 minutes, taught by one teacher to 15 students. One observer observed the
lesson. According to the data collected by the observer, the teacher followed a
sequence of six activities, namely: 1. Introduction to the lesson. 2. Presentation of a
new topic. 3. Independent work with digital learning resources. 4. Feedback on
independent work. 5. A new presentation. 6. Quiz. Since the learning design was not
formalised in advance by the teacher, this inferred structure of the lesson provided us
with contextual information to understand what happened during the lesson.
Iteration 1, Step 1. Understanding the MMLA solution: Student interactions with the
digital resources were collected in the form of anonymized xAPI statements. Aware
of the limitations of the log analysis, the participants of the study planned
observations to gather evidence about how the materials were integrated into the
classroom. Also, to support the systematic collection of observations in a format
compatible with MMLA analysis (xAPI statements stored in a Learning Record Store
(LRS)), the project managers provided observers with Observata (section 4.2).
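As a sketch of how such statements reach an LRS, the xAPI specification defines a statements resource to which clients POST JSON statements with a version header; the endpoint URL and credentials below are placeholders, not the project's actual LRS:

```python
# Hedged sketch: sending an xAPI statement to a Learning Record Store over
# the xAPI REST interface. The URL and credentials are placeholders.
import requests

LRS_STATEMENTS_URL = "https://lrs.example.org/xapi/statements"

response = requests.post(
    LRS_STATEMENTS_URL,
    json=observation_statement,                # dict like the one shown earlier
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs-user", "lrs-password"),         # HTTP Basic auth placeholder
)
response.raise_for_status()                    # the LRS returns statement ID(s)
```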
Table 1. Relation of the needs posed by the project managers, the extracted topics of interest,
and their allocation per co-design iteration

Participants' needs | Topics of interest addressed per iteration
Participant 1. Overall question: how are resources used? "What happened between the subjects when one of the activities started?" (TI1); "Categorize situations that happened in the classroom, using them as a context for log data" (TI1, TI2); "Differences of implementation patterns and using the digital learning resources" (TI3) | Lesson level (iteration 1). TI1: How was the interaction between the actors according to different activities? TI2: How were the interactions with digital resources according to different activities?
Participant 2. "Understand how teachers integrate new resources into their pedagogical practices: do they use them traditionally to replace textbooks, more for individual work, or to enhance new learning paradigms" (TI3) | Project level (iteration 2). TI3: What are the patterns of usage of digital learning resources?
Iteration 1, Step 2. Define the questions to be answered by the MMLA solution: The
main goal of the project managers was to better understand actual practices and
patterns of using digital learning resources in co-located classrooms and to spot what
obstacles teachers face. To this aim, several lessons were studied through the
systematic coding of interactions and the inference of the lesson structure. In this
step, the project managers posed the main questions they wanted to answer with the
MMLA solution (see Table 1), taking into account the affordances and contextual
constraints (step 3) of the MMLA solution. Since these questions were of different
granularity, in the first iteration we focused on the lesson-level questions. Once we
clarified how to study individual lessons, in the second iteration we also addressed
those questions that entailed analysing multiple lessons to extract patterns.
Iteration 1, Step 3. Reflection on contextual constraints and the MMLA affordances:
The participants were informed about the limitations and affordances imposed by the
observation design and the technological infrastructure. On the one hand, several
constraints hindered the multimodal analysis. First, the actors were not identifiable
across datasets, preventing us from merging the data and following individuals across
spaces. Nevertheless, an independent analysis of each dataset was done and the results
were then presented together to provide a more holistic view. Second, the resources
used during the session were not known. Thus, the traces stored in the LRS were
manually selected based on the timeframe and the topic of the session. However,
there was no way to differentiate the traces, as these digital resources were used in
another classroom at the same time. Third, the observation statements were originally
in Estonian and translated into English for the analysis, introducing potential noise in
the data. Fourth, each dataset used different data values (i.e., different types of actors,
verbs, and objects/artefacts). Therefore, it was not possible to run the analyses of both
datasets together in a meaningful way, as mentioned in the first point.
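The manual selection of traces by timeframe described above amounts to a simple filter; a minimal sketch (with an invented lesson window and an assumed list of previously fetched statements) could look like this:

```python
# Sketch of selecting LRS traces by the lesson's time window. The window and
# the statement list are assumed placeholders, not the project's real data.
from datetime import datetime

lrs_statements = []  # assumed: statements previously fetched from the LRS

lesson_start = datetime.fromisoformat("2018-05-14T10:00:00+03:00")
lesson_end = datetime.fromisoformat("2018-05-14T10:40:00+03:00")

def within_lesson(statement):
    """True if the statement's timestamp falls inside the lesson window."""
    ts = datetime.fromisoformat(statement["timestamp"])
    return lesson_start <= ts <= lesson_end

lesson_traces = [s for s in lrs_statements if within_lesson(s)]
```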
Fig. 3. Top: overview of the amount and type of interactions in the physical (left) and in the
digital space (right). Bottom: the frequency of each (inter)action type or verb in observations
(physical interactions) and logs (digital interactions). Note the difference in scale of each
graph.
On the other hand, the multimodal dataset offered multiple opportunities. First,
observations and logs complement each other, offering a more holistic picture of the
learning activity. Second, it is possible to analyse the data within the context of the
emerging, observed lesson structure during the implementation of a lesson
(visualised in Figure 4). Finally, observation data includes different types of physical
artefacts and different levels of interactions (student-teacher, teacher-student,
student-student, teacher-artefact). Figure 3 provides an overview of the data collected
through observations and logs, as well as of the type and frequency of the
interactions registered.
Fig. 4. Timeline representation of the interactions registered in observations (physical
interactions, first) and digital traces (digital interactions, second). The vertical lines represent
the limits of the activities (observed lesson structure) where the interactions took place.
The data were analysed within the context of the learning activities and visualised by
plotting the interactions along the sequence of activities inferred by the observer. The
plots were placed on top of each other. The metrics used in the analysis were chosen
to meet the questions posed by the project managers: the frequency of interactions of
participants contextualised within the activities, and the types of interactions
contextualised within the activities, across the two datasets. Figure 4 illustrates the
outcomes obtained from the analysis.
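A visualisation of this kind can be produced with a few lines of matplotlib; the sketch below (all event times and activity boundaries are invented placeholders) stacks the two interaction streams and draws vertical lines at the observed activity boundaries, in the spirit of Fig. 4:

```python
# Minimal sketch of a Fig. 4-style timeline: two stacked event plots with
# vertical lines at the inferred activity boundaries. All values are invented.
import matplotlib.pyplot as plt

physical = [2, 5, 9, 14, 21, 30, 38]          # minutes of observed interactions
digital = [10, 11, 12, 15, 16, 24, 25]        # minutes of logged interactions
activity_bounds = [0, 4, 8, 20, 26, 33, 40]   # observed lesson structure

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.eventplot(physical)
ax1.set_ylabel("physical")
ax2.eventplot(digital)
ax2.set_ylabel("digital")
for ax in (ax1, ax2):
    for bound in activity_bounds:             # activity limits as dashed lines
        ax.axvline(bound, color="grey", linestyle="--", linewidth=0.5)
ax2.set_xlabel("minutes into the lesson")
plt.show()
```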
We also applied Social Network Analysis (SNA) to both datasets (eigenvector
centrality measures, betweenness, page-rank, degree, in-degree, and overall network
statistics). To transform the xAPI data from observations and digital interactions into
graph data, actors and objects (resources, in the case of digital traces) were defined as
nodes, and interactions (i.e., verbs) as edges, which could be bidirectional (subjects
interacting with objects and vice versa) or unidirectional (actors interacting with
digital objects). Only one SNA graph is used to illustrate the results obtained through
this kind of analysis (see Figure 5).
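The graph construction and centrality measures used here are standard; a minimal sketch with networkx (the events below are invented placeholders in a normalised actor-verb-object form) could look like this:

```python
# Sketch of the SNA step: actors/objects as nodes, coded interactions (verbs)
# as directed edges, then centrality measures. All events are invented.
import networkx as nx

events = [
    {"actor": "student-01", "verb": "answered", "object": "quiz-3"},
    {"actor": "student-02", "verb": "viewed", "object": "resource-7"},
    {"actor": "teacher", "verb": "asked", "object": "student-01"},
]

g = nx.DiGraph()
for e in events:
    g.add_edge(e["actor"], e["object"], verb=e["verb"])

pagerank = nx.pagerank(g)                      # relative node importance
betweenness = nx.betweenness_centrality(g)     # brokerage between nodes
# Eigenvector centrality is computed on the undirected view for stability.
eigenvector = nx.eigenvector_centrality(g.to_undirected(), max_iter=1000)
```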
Fig. 5. SNA of the logs: page-rank is colour-coded (the greener, the higher the page-rank,
hence the relative importance) and eigenvector centrality is size-coded (the bigger the circle,
the more influential the node).
User feedback: The participants (i.e., the project managers) received a report
including the main visualisations and brief introductions to the concepts or metrics
used (for instance, the SNA terminology). Based on this report, they filled out a
questionnaire5 to collect specific feedback on indicators for further analysis, as well
as general feedback on the study and datasets based on the analysed lesson. For most
items, the two participants thought it was useful to see the datasets both separately
and together to understand the adoption of digital resources. They found it somewhat
useful or very useful (on a scale of very useful, somewhat useful, not useful at all) to
have data from physical and digital spaces to understand the adoption of digital
resources, including not only the systematic observations and the logs but also the
lesson plan inferred by the observer. SNA was not considered useful since neither
actors nor resources could be identified across observations and logs, and this kind of
analysis did not establish a connection to the timeline or the inferred lesson plan.
First-iteration results and data challenges (also listed as constraints in Iteration 1,
Step 3) are reported below; these informed the analysis of the next iteration.

5 Link to the questionnaire, which also includes the visualisations:
http://bit.ly/MMLAstudyquestionnaire
Table 2. List of the visualisations and analyses carried out in iteration 1. For each of them, the
perceived added value and detected challenges are listed.

Visualisation | Analysis | Feedback and value | Challenge
Plot, time-based | Separate plots (placed on top of each other) of (inter)actions according to participants, within the context of the observed lesson structure | Somewhat useful to useful, but only observations allow for distinguishing actor roles | Student IDs missing for joint analysis; actors' roles not distinguishable in digital logs
Plot, time-based | Separate plots (placed on top of each other), plotting (inter)actions within the context of the observed lesson structure | Somewhat useful to useful; verbs and actions complement each other; the main value is the observed lesson structure and xAPI | Student IDs missing for joint analysis
SNA graphs | Two graphs side by side, different analyses (eigenvector centrality measures, betweenness, page-rank, degree, in-degree and overall statistics) | Not useful or somewhat useful; no value at this stage | Missing IDs; no context was given, so the SNA graphs are disconnected; actor roles are not distinguishable
After the questionnaire, unstructured interviews were also scheduled. The results
from this questionnaire and interview are summarized in Section 4.4.
Iteration 1, Step 4. Refinement of the scenario and customisation of the MMLA
solution: The feedback obtained from Iteration 1 (see Table 2) informed step 4 and
the further analysis. While both participants acknowledged the added value of using
observations to make sense of what happened in the classroom at the physical and
digital level, several ideas emerged to improve the MMLA solution. Apart from the
mere integration of MMLA dashboards with the observation tool, new relevant data
sources that could contribute to the contextualisation were mentioned. These include:
teachers' reflections and observations (even if they are not systematic), the LD
inferred by the observers, or an LD provided a posteriori. Presenting the
visualisations together with explanations, in a storytelling manner, was well
appreciated by the participants of the study. Based on the study, the project managers
would like to explore which (novel) learning activities were designed around the
usage of digital learning resources to support different learning paradigms.
Fig. 6. Examples of visualisations generated during the second iteration: top - interactions in
the digital space; middle - interactions from the physical space with field notes6 and logs;
bottom - the number of observed actions per actor (the teacher is dark pink), contextualised in
the observed lesson structure.7

6 The black boxes on the plot mainly describe additional information noted by the observer;
e.g., the last comment reads: the teacher announces that those who left earlier will be graded
afterwards. Normally, in Observata this is visualised on the timeline, timestamped.
7 Link to the analysis and questions http://bit.ly/MMLA5morelessons
Iteration 2, Step 3. Reflection on contextual constraints and the MMLA affordances:
To answer the project-level questions defined in iteration 1 (see Table 1), we
extracted the main constraints and affordances of each data analysis and chose
metrics and indicators that were meaningful for the stakeholders (Table 2). Five more
lessons were analysed, taking into account the lessons learnt from the previous
iteration. As SNA was not regarded as useful, we omitted it this time. In some cases,
together with the xAPI statements from observations, the logs from the LRS and the
emerging lesson structure, we used observer field notes and teacher reflections.
User feedback: A semi-structured interview was carried out after providing the
participants with a report containing the analysis of the five lessons. The goal of the
interview was threefold: to evaluate to what extent the MMLA solution helped them
answer their project-level questions; to understand the value of combining different
data sources and the added value of each data source; and, finally, to identify further
needs in terms of data collection or analysis to understand patterns of use. The
interview data is analysed and reported in the results section.
As in the first iteration, the participants highlighted the added value that having an
LD could bring. However, in this second iteration, they also acknowledged that
teachers did not always agree to document and share their LDs. Moreover, the
participants indicated the importance of having two types of contextual information:
the observed lesson structure inferred from the lesson enactment can be layered
together with the predefined LD. It was suggested to use dashboard capabilities for
the sensemaking of the data. Various other data sources could help fill in missing
information, for instance videos that can later be coded and structured. This raises
data privacy issues that are sometimes difficult to manage (as in the case of this
particular project).
Iteration 2, Step 4. Customisation of the MMLA solution: Several ideas emerged to
improve the MMLA solution. While separate datasets without a predefined LD are
still informative enough to answer the project-level question, a predefined LD is
necessary for a richer analysis. Actual implementation patterns extracted through the
observed lesson structure can only enrich the data and further contextualise its
analysis. It is desirable to include different kinds of data, among them qualitative data
that, through the development of the MMLA solution, could be further quantified; for
instance, short videos for later annotation, or the post-editing of unstructured field
notes. The solution will need MMLA dashboard development to enable further
sense-making of the data, since several qualitative and quantitative data sources are
regarded as useful by the stakeholders.
4.4 Results and discussion
This section presents the results of the questionnaire and interview data analysis from
both iterations. The qualitative feedback from the participants in both iterations is
reported together and was analysed following the research questions of the paper:
Table 3 (see Appendix 1) summarizes the findings and brings together evidence from
the questionnaires and semi-structured interviews in iterations 1 and 2. Based on the
main findings of the research, we interpret the results following the two main
research questions:
RQ1: Which aspects of digital-trace-based LA could benefit from observations?
Following the method, the feedback received from the users led us to design ideas
for the next version of the MMLA solution. Additionally, the lessons learnt also
helped the project managers to consider the constraints of the context and the
affordances of the MMLA solutions, guiding the design of future studies.
Structured observations: According to the participants, the main benefit of the
observations for the MMLA solution was the structured observation data in the form
of xAPI statements, which bring different dimensions to the data analysis.
Semantics: Participants noted that data from the two realms introduce different
semantics: while it may be useful to see the same taxonomy in both datasets (xAPI
statements in the logs and observations), this is not an absolute solution because the
two data streams represent different semantics.
Inclusion of other qualitative data sources: According to the participants, aside from
structured observation data, which can easily be created by annotating learning
events, multimodal analysis can also benefit from unstructured observations (field
notes, observed lesson structure). While unstructured observations present more
integration challenges than structured ones, they could be of great value to interpret
the quantitative results as well as to triangulate and validate the findings. For
instance, timestamped field notes, photos and videos may provide further qualitative
context. Also, teacher reflections may be used to partly replace a missing predefined
LD to understand teacher intentions. These can also be timestamped photos or videos
that can be coded later. Using storytelling approaches to present quantitative and
qualitative data could be a promising solution. In this case, the quantitative data
analysis could help to contextualise what was happening when the qualitative
evidence was gathered.
Data analysis, sensemaking and multimodal dashboards: According to the
participants, the data collection, analysis and sensemaking can be contextualised
within the planned LD. The emergent, observed lesson structure can add another
layer of contextual information. Codification (annotating interactions) gives context
to the log data. Even if observations are useful for contextualisation, they do not
replace the LD. Having both the original teacher design and the emerging one
inferred from the observations would add value to the data analysis, enabling the
comparison between plan and implementation, as well as the detection of regulation
decisions. As qualitative data was regarded as useful and important, some of it can be
post-edited and structured, while other qualitative data (with different semantics) can
be visualised on the dashboards, and the sensemaking of the data can be aided
through filtering.
RQ2: What is the added value that observations offer to the user in terms of meaning,
context and quality?
Meaning and complementarity: According to the participants, observations add
value by incorporating additional data on actor roles, actions (verbs) and artefacts
(objects): it is not possible to make sense of the data without putting the logs and the
structured observation datasets together. Only the combination of the two contributes
to sensemaking. Data coming from the different spaces complement each other and
are only useful if put together. The different semantics of the across-space data also
bring complementary information.
Context/theoretical grounding: According to the participants, the contextualisation of
digital data is the main value of classroom observations. This contextualisation can
happen through unstructured observations (observed lesson structure), coded
(inter)actions aggregated through structured or semi-structured xAPI statements, or
unstructured field notes that are later coded/edited and systematized. Participants
stressed the importance of theory-driven coding: theoretical (learning) constructs [32]
can be introduced through the predefined codes, aligning theory with data to enable
confirmatory analysis.
Quality: According to the participants, most of the quality issues were related to the
constraints posed by the actual research design, that is, an authentic, typical scenario.
At the same time, they relate to the privacy issues mentioned by the stakeholders. As
a result, the actual data was puzzling, exploratory and incomplete. While it was
possible to gather multimodal data from the digital and the physical space, a joint
analysis was not possible in some cases (actors could not be identified across
datasets) and not meaningful in others. Observations represent small data;
nevertheless, they bring different semantics and context into the dataset, which is an
important issue in LA.
Based on the feedback from the questionnaires and interviews, we have gathered
insights about the value that classroom observations add to the data analysis.
Regarding the value of observations, several dimensions were highlighted. First,
context on the implicit lesson structure can come from unstructured observations,
derived from the enactment of the lesson and inferred by the observer. This reinforces
the need for a connection to the planned LD, which should be made available through
technical means. In this case, it would be advisable to further contextualise the data
collection and analysis within the planned LD while not excluding, but
complementing it with, the unplanned, implicit design decisions captured through
observer-inferred patterns. Second, theoretical constructs can be introduced through
the structured codification of observable learning events for richer data analysis.
Third, the availability of information on different kinds of artefacts from the physical
settings enriches the digital data. Fourth, observations can provide more detailed
information on actor roles and their actions in the real world. Fifth, at this stage, the
two datasets were presented separately to look for the value of each one and to help
define further requirements for the data analysis. The aim of alignment should not be
complete integration, as the two datasets represent two different realms; rather, the
datasets should be complementary, gathering complementary insights, in this case
about the learning context. At a technological level, depending on the analysis or
sensemaking aims and methods, the alignment between semantics may or may not be
needed. Nevertheless, learner-level analysis can be accomplished by developing
compatible coding schemes for MMLA observations that can introduce theory-based,
confirmatory analysis.
First of all, according to the participants, systematic or structured observations allow
for the quantitative analysis of data while still offering richer context derived through
non-automated means. xAPI statements from observations can potentially be used for
MMLA analysis. The results show that the participants also saw value in qualitative
observations, provided that they can later be structured and coded, or recoded to
ensure reliability. Other qualitative data sources such as teacher reflections can
provide increased contextual information where this context is missing: qualitative
data validates and triangulates the data gathered through automated means and
contextualises it.
Additional findings: Going back to the suggested Context-aware MMLA Taxonomy,
the results of the study indicate that a balance is needed between user needs and data
affordances, and the contextualisation needs for analysis and sensemaking.
Depending on these needs, the data can be further structured; for instance, field notes
and photos can be timestamped and coded later. Different data sources can be further
included to enrich the evidence, validate and triangulate the findings, or contextualise
the data. Automated and human-mediated data bring different semantics and meaning
into the datasets. Each level of the taxonomy can be used for different types of
research designs [22]; e.g., the use of highly structured observations based on
predefined coding can contribute to confirmatory research and the creation of a
hypothesis space through the labelling of learning constructs within MMLA, as
indicated by other researchers [32]. Overall, based on the feedback of the users on the
ideal, authentic or limited scenarios of data collection and analysis, the benefit of
contextualisation for data analysis and sense-making is evident. Taking a step further
towards the ideal case, we can envision that structured data gathering can contribute
to a three-level contextualisation of the data through the predefined design, the
observed lesson structure, and the structured observations. Additionally, according to
the participants, sense-making can be further supported by the introduction of
multimodal dashboards that make the manipulation of the data sources possible,
where even qualitative information can be timestamped and visualised.
Overall, our findings indicate the importance of guided data collection and analysis
[25] and the contextualisation of LA data [1] on different levels. At the same time,
the participants reported that the need for compliance with data privacy regulations is
pushing the providers of educational technologies to anonymize digital traces by
default. This design issue introduces an extra level of complexity since it is not
possible to identify users across datasets, which is essential for MMLA purposes.
According to the participants' views, CO can support different layers of
contextualisation (collected with the help of Observata). The figure below (Fig. 7)
sums up the contextualisation needs highlighted by the participants, supported by our
approach and afforded by Observata, ranging from limited to ideal scenarios. Several
levels of contextual information can be layered and obtained from: first, the
predefined LD; second, the observed lesson structure; and third, the systematic
observations (MMLA, LA and CO within the LD; human-mediated observations
within the LD and/or the inferred lesson structure; automated observations within
structured observations). In ideal scenarios, all of them can be layered to augment the
contextualisation efforts. An additional layer of contextualisation (Fig. 7, in blue) can
come from other qualitative data which, while supported by Observata, goes beyond
the scope of this research and its claims: such data can still be collected qualitatively
(photos or field notes) and later structured using Observata's post-editing feature for
learning events.
Fig. 7. Layered contextualisation levels supported by the approach and afforded by Observata
Reflecting on the methodological approach followed in the study, the co-design
method [8] allowed us to take a closer look at the value of the datasets and to
customize the MMLA solution iteratively, which was the direct aim of the study.
Through iterative, exploratory approaches we have been able to evaluate and explore
the challenges and opportunities of the MMLA solution. Even though involving the
participants across the different iterations and steps was tedious and time-consuming,
it allowed us to better understand the needs of the participants, address the challenges
they face while using MMLA solutions, and help them better understand the
affordances that these solutions may bring into their practices. At the same time, their
involvement in the data analysis in the context of the authentic scenario created new
avenues for the design of the MMLA solution.
5 Conclusions and future research
In this paper, we sought to understand the feasibility and added value of
contextualising the analysis of digital traces with classroom observations. To
accomplish this aim, we have presented a case study from an authentic, baseline
scenario using data collected from structured and unstructured observations,
interaction logs, field notes and teacher reflections. According to the participants'
feedback, observations contribute contextual information for the analysis and
sensemaking of digital traces. The case study results show that both systematic and
unstructured classroom observations contribute to the contextualisation of the
analysis of automatically-collected data (i.e., logs from the digital learning
resources), which represents their main value. While the observations and the
observed lesson structure can be useful to contextualise both datasets, this does not
make the LD less valuable for higher-level analysis [12]. In the participants' view,
the combination of both predefined and observed designs is an ideal scenario for
more thorough reflection. Also, enabling actor identification, or at least
differentiating roles across datasets, would make the analysis more meaningful.
According to the participants, distinguishing between the different taxonomies
(verbs) used in observations and digital data may be interesting due to the different
semantics the digital and physical realms entail, but in some cases it might also be
useful to align them.
As already acknowledged in the MMLA context-aware taxonomy, authentic studies,
such as the one presented in this paper, pose multiple limitations in terms of the data
available and its quality. Also, it should be noted that the low number of participants
involved in our case study prevents us from generalizing the results.
To bring authentic scenarios closer to the ideal case, in the future it would be
recommended to include more systematically collected data. Also, for the further
contextualisation of the MMLA data for analysis, some methodological,
technological and research needs are to be addressed. To reach those goals, the
observation tool used in our study, Observata, will be further developed according to
the findings of the study. In addition, aspects such as data reliability and validity, as
well as data privacy issues, should be addressed in the future at both the
technological and methodological level.
Acknowledgments. This study has been partially funded by Horizon 2020 Research
and Innovation Programme under Grant Agreement No. 731685 (Project CEITER)
and project Digiõpevaramu funded by the Estonian Ministry of Education.
References
1. Gašević, D., Dawson, S., Siemens, G.: Let’s not forget: Learning analytics are about
learning. TechTrends. 59, 64–71 (2015). https://doi.org/10.1007/s11528-014-0822-x.
2. Lockyer, L., Heathcote, E., Dawson, S.: Informing pedagogical action: Aligning learning
analytics with learning design. Am. Behav. Sci. 57, 1439–1459 (2013).
3. Ochoa, X., Worsley, M.: Augmenting Learning Analytics with Multimodal Sensory Data.
J. Learn. Anal. 3, 213–219 (2016).
4. Wragg, T.: An Introduction to Classroom Observation (Classic Edition). Routledge
(2013).
5. Cohen, L., Manion, L., Morrison, K.: Research methods in education. Routledge (2013).
6. Alison Bryant, J., Liebeskind, K., Gestin, R.: Observational Methods. In: The
International Encyclopedia of Communication Research Methods. pp. 1–10. John Wiley
& Sons, Inc., Hoboken, NJ, USA (2017).
https://doi.org/10.1002/9781118901731.iecrm0171.
7. Buckingham Shum, S., Ferguson, R., Martinez-Maldonaldo, R.: Human-Centred
Learning Analytics. J. Learn. Anal. 6, 1–9 (2019). https://doi.org/10.18608/jla.2019.62.1.
8. Rodríguez-Triana, M.J., Prieto, L.P., Martínez-Monés, A., Asensio-Pérez, J.I.,
Dimitriadis, Y.: The teacher in the loop: Customizing multimodal learning analytics for
blended learning. In: ACM International Conference Proceeding Series. pp. 417–426.
ACM, New York, NY, USA (2018). https://doi.org/10.1145/3170358.3170364.
9. Knight, S., Buckingham Shum, S.: Theory and Learning Analytics. In: Lang, C.,
Siemens, G., Wise, A.F., and Gaševic, D. (eds.) Handbook of Learning Analytics. pp. 17–
22. Society for Learning Analytics Research (SoLAR), Alberta, Canada (2017).
https://doi.org/10.18608/hla17.001.
10. Rodríguez-Triana, M.J., Martínez-Monés, A., Asensio-Pérez, J.I., Dimitriadis, Y.:
Towards a script-aware monitoring process of computer-supported collaborative learning
scenarios. Int. J. Technol. Enhanc. Learn. 5, 151–167 (2013).
https://doi.org/10.1504/IJTEL.2013.059082.
11. Gašević, D., Dawson, S., Siemens, G.: Let’s not forget: Learning analytics are about
learning. TechTrends. 59, 64–71 (2015). https://doi.org/10.1007/s11528-014-0822-x.
12. Eradze, M., Rodríguez-Triana, M.J., Laanpere, M.: Semantically Annotated Lesson
Observation Data in Learning Analytics Datasets: a Reference Model. Interact. Des.
Archit. J. 33, 75–91 (2017).
13. Asensio-Pérez, J.I., Dimitriadis, Y., Pozzi, F., Hernández-Leo, D., Prieto, L.P., Persico,
D., Villagrá-Sobrino, S.L.: Towards teaching as design: Exploring the interplay between
full-lifecycle learning design tooling and Teacher Professional Development. Comput.
Educ. (2017). https://doi.org/10.1016/j.compedu.2017.06.011.
14. Dagnino, F.M., Dimitriadis, Y.A., Pozzi, F., Asensio-Pérez, J.I., Rubia-Avi, B.:
Exploring teachers’ needs and the existing barriers to the adoption of Learning Design
methods and tools: A literature survey. Br. J. Educ. Technol. 49, 998–1013 (2018).
https://doi.org/10.1111/bjet.12695.
15. Mangaroska, K., Giannakos, M.N.: Learning analytics for learning design: A systematic
literature review of analytics-driven design to enhance learning. IEEE Trans. Learn.
Technol. 1–1 (2018). https://doi.org/10.1109/TLT.2018.2868673.
16. Hernández-Leo, D., Asensio-Pérez, J.I., Derntl, M., Pozzi, F., Chacón, J., Prieto, L.P.,
Persico, D.: An Integrated Environment for Learning Design. Front. ICT. 5, (2018).
https://doi.org/10.3389/fict.2018.00009.
17. Marshall, C., Rossman, G.B.: Designing qualitative research. Sage publications (2014).
18. Moses, S.: Language Teaching Awareness. J. English Linguist. 29, 285–288 (2001).
https://doi.org/10.1177/00754240122005396.
19. Navarro Sada, A., Maldonado, A.: Research Methods in Education. Sixth Edition - by
Louis Cohen, Lawrence Manion and Keith Morrison. (2007).
https://doi.org/10.1111/j.1467-8527.2007.00388_4.x.
20. Gruba, P., Cárdenas-Claros, M.S., Suvorov, R., Rick, K.: Blended Language Program
Evaluation. Palgrave Macmillan UK, London (2016).
https://doi.org/10.1057/9781137514370_3.
21. Bakeman, R., Gottman, J.M.: Observing interaction. (1997).
https://doi.org/10.1017/CBO9780511527685.
22. Eradze, M., Rodríguez-Triana, M.J., Laanpere, M.: A Conversation between Learning
Design and Classroom Observations: A Systematic Literature Review. Educ. Sci. 9, 91
(2019).
23. Worsley, M., Abrahamson, D., Blikstein, P., Grover, S., Schneider, B., Tissenbaum, M.:
Situating multimodal learning analytics. In: 12th International Conference of the
Learning Sciences: Transforming Learning, Empowering Learners, ICLS 2016. pp.
1346–1349. International Society of the Learning Sciences (ISLS) (2016).
24. Di Mitri, D., Schneider, J., Klemke, R., Specht, M., Drachsler, H.: Read Between the
Lines: An Annotation Tool for Multimodal Data for Learning. In: Proceedings of the 9th
International Conference on Learning Analytics & Knowledge. pp. 51–60. ACM (2019).
25. Rodríguez-Triana, M.J., Martínez-Monés, A., Asensio-Pérez, J.I., Dimitriadis, Y.:
Scripting and monitoring meet each other: Aligning learning analytics and learning
design to support teachers in orchestrating CSCL situations. Br. J. Educ. Technol. 46,
330–343 (2015). https://doi.org/10.1111/bjet.12198.
26. Howard, S.K., Yang, J., Ma, J., Ritz, C., Zhao, J., Wynne, K.: Using Data Mining and
Machine Learning Approaches to Observe Technology-Enhanced Learning. In: 2018
IEEE International Conference on Teaching, Assessment, and Learning for Engineering
(TALE). pp. 788–793. IEEE (2018). https://doi.org/10.1109/TALE.2018.8615443.
27. James, A., Kashyap, M., Chua, Y.H.V., Maszczyk, T., Nunez, A.M., Bull, R., Dauwels,
J.: Inferring the Climate in Classrooms from Audio and Video Recordings: A Machine
Learning Approach. In: Proceedings of 2018 IEEE International Conference on Teaching,
Assessment, and Learning for Engineering, TALE 2018. pp. 983–988. IEEE (2019).
https://doi.org/10.1109/TALE.2018.8615327.
28. Kahng, S., Iwata, B.A.: Computerized systems for collecting real-time observational data.
J. Appl. Behav. Anal. 31, 253–261 (1998).
29. Ocumpaugh, J., Baker, R.S., Rodrigo, M.M., Salvi, A., van Velsen, M., Aghababyan, A.,
Martin, T.: HART: The Human Affect Recording Tool. In: Proceedings of the 33rd Annual
International Conference on the Design of Communication - SIGDOC ’15. pp. 1–6. ACM
Press, New York, NY, USA (2015). https://doi.org/10.1145/2775441.2775480.
30. Klonek, F., Hay, G., Parker, S.: The Big Data of Social Dynamics at Work: A
Technology-based Application. Acad. Manag. Glob. Proc. 185 (2018).
31. Böck, R., Siegert, I., Haase, M., Lange, J., Wendemuth, A.: ikannotate–a tool for
labelling, transcription, and annotation of emotionally coloured speech. In: International
Conference on Affective Computing and Intelligent Interaction. pp. 25–34. Springer
(2011).
32. Di Mitri, D., Schneider, J., Specht, M., Drachsler, H.: From signals to knowledge: A
conceptual model for multimodal learning analytics. J. Comput. Assist. Learn. 34, 338–
349 (2018). https://doi.org/10.1111/jcal.12288.
33. Rodríguez-Medina, J., Rodríguez-Triana, M.J., Eradze, M., García-Sastre, S.:
Observational Scaffolding for Learning Analytics: A Methodological Proposal. In:
Pammer-Schindler, V., Pérez-Sanagustín, M., Drachsler, H., Elferink, R., Scheffel, M.
(eds.) Lifelong Technology-Enhanced Learning. EC-TEL 2018. Lecture Notes in Computer
Science, vol. 11082, pp. 617–621. Springer International Publishing, Cham (2018).
https://doi.org/10.1007/978-3-319-98572-5_58.
34. Eradze, M., Rodriguez Triana, M.J., Laanpere, M.: Context-aware Multimodal Learning
Analytics Taxonomy. In: Companion Proceedings 10th International Conference on
Learning Analytics & Knowledge (LAK20), CEUR Workshop Proceedings (2020).
35. Shankar, S.K., Prieto, L.P., Rodríguez-Triana, M.J., Ruiz-Calleja, A.: A Review of
Multimodal Learning Analytics Architectures. In: 2018 IEEE 18th International
Conference on Advanced Learning Technologies (ICALT). pp. 212–214. IEEE (2018).
36. Ocumpaugh, J., Baker, R.S., Rodrigo, M.M., Salvi, A., van Velsen, M., Aghababyan, A.,
Martin, T.: HART: The human affect recording tool. In: Proceedings of the 33rd Annual
International Conference on the Design of Communication. p. 24. ACM, New York, NY,
USA (2015). https://doi.org/10.1145/2775441.2775480.
37. Leinonen, T., Toikkanen, T., Silfvast, K.: Software as hypothesis: research-based design
methodology. In: Proceedings of the Tenth Anniversary Conference on Participatory
Design 2008. pp. 61–70. Indiana University (2008).
38. Eradze, M., Pata, K., Laanpere, M.: Analyzing learning flows in digital learning
ecosystems. In: Cao, Y., Väljataga, T., Tang, J., Leung, H., Laanpere, M. (eds.) New
Horizons in Web Based Learning. ICWL 2014. Lecture Notes in Computer Science, vol.
8699, pp. 63–72. Springer, Cham (2015). https://doi.org/10.1007/978-3-662-46315-4_7.
39. Eradze, M., Väljataga, T., Laanpere, M.: Observing the use of e-textbooks in the
classroom: Towards “offline” learning analytics. In: Lecture Notes in Computer Science
(including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics). pp. 254–263 (2014). https://doi.org/10.1007/978-3-319-13296-9_28.
40. Eradze, M., Rodríguez Triana, M.J., Laanpere, M.: How to aggregate lesson observation
data into learning analytics dataset? In: Joint Proceedings of the 6th Multimodal Learning
Analytics (MMLA) Workshop and the 2nd Cross-LAK Workshop co-located with the 7th
International Learning Analytics and Knowledge Conference (LAK 2017), CEUR Workshop
Proceedings, vol. 1828, pp. 74–81. CEUR (2017).
41. Eradze, M., Laanpere, M.: Lesson observation data in learning analytics datasets:
Observata. In: Lavoué, É., Drachsler, H., Verbert, K., Broisin, J., Pérez-Sanagustín, M.
(eds.) Data Driven Approaches in Digital Education. EC-TEL 2017. pp. 504–508. Springer
(2017). https://doi.org/10.1007/978-3-319-66610-5_50.
42. Buckingham Shum, S., Ferguson, R., Martinez-Maldonado, R.: Human-Centred
Learning Analytics. J. Learn. Anal. 6, 1–9 (2019). https://doi.org/10.18608/jla.2019.62.1.
43. MacDonald, B., Walker, R.: Case-study and the social philosophy of educational
research. Cambridge J. Educ. 5, 2–11 (1975).
44. Merrill, M.D.: First principles of instruction. Educ. Technol. Res. Dev. 50, 43–59 (2002).
https://doi.org/10.1007/bf02505024.
45. Pata, K., Beliaev, A., Rõbtšenkov, R., Laanpere, M.: Affordances of the LePlanner for
Sharing Digitally Enhanced Learning Scenarios. In: 2017 IEEE 17th International
Conference on Advanced Learning Technologies (ICALT). pp. 8–12. IEEE (2017).
Appendix 1
Table 3: Summary of the findings mapped onto the evidence from the two iterations of co-design
and data analysis. Each finding is followed by the corresponding qualitative evidence
(2 respondents, 2 iterations); the evidence entries summarise the key messages, and
participants' quotes are reproduced verbatim in quotation marks.
Finding: It is possible to extract knowledge from the two datasets (classroom observations and digital logs).
Evidence: Patterns of usage can be identified by using the two datasets together: "Yes, more or less I am able to do it." "Yes, Patterns seen on didactical use and some unexpected patterns can be definitely seen and guessed from this data".

Finding: The complementarity of the physical and digital traces was considered an added value.
Evidence: On extracting knowledge from only one data source: "No, certainly not." "No, definitely no". "There is definitely an added value here." On the two datasets complementing each other: "Observations help me also to see what activities were happening at the same time in the classroom". "Only observations plot made me think about what happened during the minutes 14-19, but logs data made me understand that it was independent work probably with DÕV… probably it was teacher-centred activities".

Finding: Both exploratory and confirmatory analysis are possible.
Evidence: The combined data can be used for exploratory and confirmatory purposes: "Only when I see both together. With only one, there is no question even raised." "Visual cues that raise more questions, questioning each data set". "If the questions were asked before then we would have theory-based coding and it would have been more confirmatory".

Finding: Observations enable contextualisation, while the connection to theory is equally important. (1) The emergent/observed lesson structure fills the gaps left by missing predefined LD information when contextualising the data, and makes the differences between implicit and explicit LD evident by providing two layers of contextual information: the predefined LD and the observed lesson structure. (2) The coded (inter)actions themselves explain the digital interactions and, at the same time, bring another layer of context through theoretical concepts.
Evidence: Contextualisation and analysis based on observer-inferred learning activities: it is useful to "see interactions per actor in different phases of a lesson (learning activities that have been coded by an observer)". "For me, it is not important if the homework's were checked, but rather how it was checked, did it support students' SRL, did they take some responsibility in the process". Predefined LD and observed lesson structure: it "give[s] two layers of contextual information - planned design vs actual, enacted design not only in terms of planned vs real duration but in terms of implicit vs explicit design, emerging design decisions etc. This should be fed back to the lesson scenario digital representation to understand the patterns of actual enactment". "LD creates the loop to actual activities and implementation, and learner actions answer to why dimension". Coded actions (observations): "observed and coded (inter)actions represent valuable information explaining digital interactions: physical interaction data gives context to the digital interactions, without this context 450 digital interactions data have no value". According to the participants, "observations in physical space enhance the context of digital interactions" (a sketch of such time-based alignment of observation codes with digital logs is given after the table). Connection to theory: "Observations allow for analysis of social negotiation of meaning in the classroom and intentionality behind pedagogical decisions of the teacher while online (automatically harvested) traces only a fact of interaction." "While it is important to link activities with lesson goals/tasks, their duration and curriculum objectives, sometimes it is useful to link them with some theoretical constructs (e.g., communication acts or taxonomy of objectives/adoption/acceptance), aligning learning theories with data".

Finding: Unstructured, qualitative data such as field notes or teacher reflections further enrich the dataset with context. Structured data is preferred for the analysis: all observations should be systematised (structured), edited and merged.
Evidence: Qualitative, unstructured data: "It [unstructured observations, field notes] enriches the context remarkably, I understand better some levels of interactions." "Field notes in our case contain spatial information (potentially can contain notes on discipline), photos also help, they have a timestamp, so they can help you make sense in case of missing information". Structured observations are preferable: "Unstructured observations can be used for emerging patterns, to post-edit it and code them to make them structured." "Data can come as unstructured and then coded and structured in xAPI statements" (a sketch of this coding step is given after the table).

Finding: There is a need for further validation and triangulation.
Evidence: Other data sources, such as teacher reflections or field notes (unstructured observations), add more context and serve to validate and triangulate the data: "It gives the final touch what happened in the classroom and why". "Two datasets together - logs and observations. It helps you to raise questions but does not validate. Validated by reflections, or field notes. Triangulation of data".

Finding: There is a further need for data collection and analysis, for instance: easy-to-capture data such as short videos (replaceable by audio where privacy is an issue), automated data on classroom media usage, reliable and complete online interaction data, predefined LD, and data visualisation techniques such as dashboards.
Evidence: Need for more data sources: "Easily captured data, for instance, noise to give more contextual information". "Video that may be related to legal issues, can be solved by recording only audio. Automatically generated events on interactions in the classroom media use. Completeness of data from online settings is necessary". "Photos and videos to be later coded and integrated". Sensemaking and analysis level: "LD and data in a way I could understand if it was more student-centred or teacher-led". "dashboards with different data streams customizable by the user for sensemaking".

Finding: The two datasets bring different semantics from different realms and dimensions.
Evidence: Data integration and semantics: "it was very interesting to see this figure where xAPI verbs and Observata 'taxonomy' were demonstrated together - seeing them based on one lesson would be extremely interesting". "It is obvious that two realms bring on different semantics, in some cases, it may be useful to see the same taxonomy in both datasets", while in some cases "it would be confusing".

Finding: Quality issues at the data collection and analysis levels: some information is missing.
Evidence: The data can be puzzling and incomplete: "The amount of coinciding physical vs digital interactions is puzzling". "I would expect the digital interactions increase when physical interactions decrease (teacher stops talking), but according to this graph, this is not always the case". "The records of actions in physical space are clearly incomplete due to time constraints to annotate the within-group and between-group activities". Learner identification is important for enabling learner-level analysis: "Usefulness increases significantly when learners are identified across both physical and digital spaces". "the quantity and variety of traces are significantly smaller in physical space".
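
The participants' suggestion that observation data "can come as unstructured and then coded and structured in xAPI statements" can be illustrated with a minimal sketch. Python is used here; the verb and activity URIs, the account home page, and the field names are hypothetical placeholders rather than the project's actual vocabulary, and the statement structure follows the general xAPI pattern (actor, verb, object, context, timestamp) rather than a recipe taken from the study.

```python
from datetime import datetime, timezone

def observation_to_xapi(observer, learner_id, coded_action, lesson_phase, timestamp):
    """Wrap one coded classroom observation as an xAPI-style statement (dict).

    All URIs below are illustrative placeholders, not a real vocabulary.
    """
    return {
        # The observed learner, identified so that physical and digital
        # traces can later be linked at the learner level.
        "actor": {
            "account": {"homePage": "https://example.org/school", "name": learner_id}
        },
        # The coded (inter)action observed in the physical space.
        "verb": {
            "id": f"https://example.org/verbs/{coded_action}",
            "display": {"en": coded_action},
        },
        # The observer-inferred learning activity (lesson phase) as the object.
        "object": {
            "id": f"https://example.org/activities/{lesson_phase}",
            "definition": {"name": {"en": lesson_phase}},
        },
        # Provenance: who observed, and that this trace comes from the
        # physical space - useful for the triangulation discussed above.
        "context": {
            "extensions": {
                "https://example.org/ext/observer": observer,
                "https://example.org/ext/space": "physical",
            }
        },
        "timestamp": timestamp.isoformat(),
    }

stmt = observation_to_xapi("observer-1", "student-07", "asks-question",
                           "independent-work", datetime.now(timezone.utc))
```

Once both realms are expressed in the same statement format, the two vocabularies (e.g., xAPI verbs and the Observata taxonomy) can be inspected side by side, as the participants wished.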
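
The time-based reading of the two datasets (e.g., the "minutes 14-19" episode, or the puzzling amount of coinciding physical and digital interactions) can likewise be sketched as a nearest-timestamp join. The sketch below assumes pandas and two timestamped exports with hypothetical file and column names; the 60-second tolerance and the backward join direction are arbitrary illustrative choices, not the analysis procedure used in the study.

```python
import pandas as pd

# Hypothetical exports: digital logs (e.g., xAPI events) and coded observations.
logs = pd.read_csv("digital_logs.csv", parse_dates=["timestamp"])
obs = pd.read_csv("coded_observations.csv", parse_dates=["timestamp"])

# merge_asof requires both frames to be sorted on the join key.
logs = logs.sort_values("timestamp")
obs = obs.sort_values("timestamp")

# Attach the most recent observation code (at most 60 s older) to every
# digital event; events with no nearby observation keep NaN, flagging the
# incompleteness of the physical-space records noted by the participants.
contextualised = pd.merge_asof(
    logs,
    obs[["timestamp", "lesson_phase", "coded_action"]],
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("60s"),
)

# How do digital interactions distribute over the observed lesson phases?
print(contextualised.groupby("lesson_phase").size())
```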