Context-aware Multimodal Learning Analytics Taxonomy
Maka Eradze
School of Digital Technologies, Tallinn University, Estonia
Faculty of Engineering, University of Naples Federico II, Italy
maka@tlu.ee
María Jesús Rodríguez Triana, Mart Laanpere
School of Digital Technologies, Tallinn University, Estonia
mjrt@tlu.ee, martlaa@tlu.ee
ABSTRACT: Analysis of learning interactions can happen for different purposes. As
educational practices increasingly take place in hybrid settings, data from both spaces are
needed. At the same time, to analyse and make sense of machine aggregated data afforded
by Technology-Enhanced Learning (TEL) environments, contextual information is needed. We
posit that human labelled (classroom observations) and automated observations
(multimodal learning data) can enrich each other. Researchers have suggested learning
design (LD) for contextualisation, the availability of which is often limited in authentic
settings. This paper proposes a Context-aware MMLA Taxonomy, where we categorize
systematic documentation and data collection within different research designs and
scenarios, paying special attention to authentic classroom contexts. Finally, we discuss
further research directions and challenges.
Keywords: multimodal learning analytics, human-labelled observations, automated
observations, classroom observations, technology-enhanced classrooms, learning design,
context
1 INTRODUCTION AND BACKGROUND
As teaching and learning processes most often take place in blended learning settings, different data sources and collection methods are needed to create a holistic picture of the educational context and to analyse these processes for different purposes. Learning interaction (between people and/or with artefacts) has long been an important object of educational research. While some decades ago researchers focused on face-to-face interactions and used traditional data-collection techniques such as observations, technological advancements drew the attention of Technology-Enhanced Learning (TEL) researchers towards digital interactions, as illustrated by the emergence of the Learning Analytics (LA) community. Thus, both research trends often cover only one part of the educational process due to the data available. The Multimodal Learning Analytics (MMLA) field emerged in response to this limitation, combining different data sources from different spaces, e.g., with the help of
sensors, EEG devices etc. At the same time, to guide the data collection and analysis process, human
inference and contextual information (such as learning designs where teachers report about their
intentions, actors, roles, media use and other information about the learning context) are often
needed (Hernández-Leo, Rodriguez Triana, Inventado, & Mor, 2017). To address this need, previous
research proposes to benefit from the synergistic LD and LA relationship, where LD contextualizes
data analysis and LA informs LD.
The Learning Analytics (LA) community emerged with the widespread adoption of digital learning platforms, mainly focusing on the analysis of digital interactions (Ochoa & Worsley, 2016). However, depending on the learning activity, meaningful interactions may not be tracked in these spaces; narrowing the interaction analysis down to the data available in the digital platforms can lead to a street-light effect (Freedman, 2010). To respond to this limitation, the emerging Multimodal Learning Analytics (MMLA) community promotes the collection and analysis of different data sources across spaces (Blikstein & Worsley, 2016). Typically, MMLA datasets include not only log data, but also data generated by sensors located in mobile and wearable devices (Ochoa & Worsley, 2016). To make sense of MMLA data, human input is often needed; human-mediated labelling is used to relate raw data to more abstract constructs (Worsley et al., 2016; Di Mitri, Schneider, Klemke, Specht, & Drachsler, 2019). At the same time, analytics approaches need theory (Joksimović, Kovanović, & Dawson, 2019) to create a hypothesis space (Di Mitri, Schneider, Specht, & Drachsler, 2018). Moreover, contextual information such as the learning design can guide the data collection and interpretation (Lockyer & Dawson, 2011; Rodríguez-Triana, Martínez-Monés, Asensio-Pérez, & Dimitriadis, 2013). However, it is worth noting that in authentic settings LD may not be available due to different limitations and adoption problems (Dagnino, Dimitriadis, Pozzi, Asensio-Pérez, & Rubia-Avi, 2018; Lockyer, Heathcote, & Dawson, 2013; Mangaroska & Giannakos, 2018).
Traditional human-mediated data collection methods, such as observations, can also respond to the aforementioned need for contextual information, as they are inherently highly contextual. Through observational methods, quantitative and qualitative data can be systematically collected and analysed (Cohen, Manion, & Morrison, 2018; Eradze, Rodríguez Triana, & Laanpere, 2017). However, despite the richness of observational data, several constraints prevent researchers and practitioners from applying them (e.g., time-consuming data collection and analysis, intrusiveness, difficulties registering fine-grained events or multiple simultaneous events, etc.). Therefore, educational research and practice may benefit from aligning traditional (human-labelled) and modern (automated) classroom observations; thanks to the evidence collected from the physical space, they can support the triangulation, contextualization and sensemaking of MMLA data. On the one hand, observations could address the contextual and methodological needs of MMLA; on the other hand, MMLA could alleviate the complexity and workload of human-driven observations by enriching the data, speeding up the observation process through automation, or gathering evidence on indicators unobservable to the human eye, as already indicated by previous authors (Anguera, Portell, Chacón-Moscoso, & Sanduvete-Chaves, 2018; Bryant et al., 2017). Furthermore, technological solutions may further reinforce the use of specific coding schemas, contributing to the quality and availability of the data; speed up the observation process (Kahng & Iwata, 1998); and enhance the validity and reliability of the data (Ocumpaugh et al., 2015).
Building on the community challenges and concerns outlined above and rooted in previous research, and aiming to provide both a holistic picture of teaching and learning processes and a systematic view of how MMLA is used in different scenarios, this research connects two research paradigms (traditional and modern) based on systematic, human-labelled and automated observations. More concretely, we explore synergies between these two approaches in authentic, blended, TEL classroom settings. Also, to reinforce the contextualization, we propose to use the LD whenever available, as it reflects the pedagogical grounding and the teacher intentions behind the activity. Connecting these three factors (human-mediated observations, automated observations and LD contextualization) is not a trivial task, and special attention needs to be paid to the specificities, meaning, affordances, constraints and quality of the data sources, as well as to LD availability challenges.
To envision the data collection and documentation process, we propose a Context-aware MMLA Taxonomy. The taxonomy classifies different research designs depending on how systematic the documentation of the learning design and the data collection have been. The following section overviews the taxonomy, and the paper closes with a discussion detailing further research directions and challenges.
2 CONTEXT-AWARE MULTIMODAL LEARNING ANALYTICS TAXONOMY
To provide a contextualized and holistic view of the teaching and learning activities taking place in TEL classrooms, connecting two research paradigms (Daniel, 2019), this paper proposes a Context-aware MMLA Taxonomy to support the alignment of LD, human-labelled and automated observations (MMLA). In this taxonomy, in line with previous research pointing to LA adoption challenges (Buckingham Shum, Ferguson, & Martinez-Maldonado, 2019), we regard authentic learning contexts as the baseline, anchoring scenario. The taxonomy (Figure 1) classifies human-labelled and automated data collection along two axes: systematic documentation and systematic data collection, viewing authentic cases as the baseline for data collection and analysis. These two axes represent context-awareness (systematic documentation) and rigorous quantitative classroom observation data collection (systematic data collection), enabling the alignment of data sources and rich MMLA analyses.
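To illustrate how the two axes could be operationalised, the following minimal sketch (in Python; the class, field names and mapping are our own simplification, not part of the taxonomy's formal definition) maps a research design onto the three levels described below:

```python
from dataclasses import dataclass

@dataclass
class ResearchDesign:
    """Illustrative description of a study along the two taxonomy axes."""
    machine_readable_ld: bool      # systematic documentation (LD set up-front in an authoring tool)
    structured_observations: bool  # systematic, protocol-based classroom observations
    logs_in_common_format: bool    # digital traces gathered in a shared format (e.g., xAPI)

def taxonomy_level(design: ResearchDesign) -> str:
    """Map a research design onto the Context-aware MMLA Taxonomy levels (simplified)."""
    systematic_documentation = design.machine_readable_ld
    systematic_data_collection = design.structured_observations and design.logs_in_common_format
    if systematic_documentation and systematic_data_collection:
        return "Ideal"
    if systematic_data_collection:
        return "Authentic (baseline)"
    return "Limited"

# Example: an authentic classroom without a machine-readable learning design
print(taxonomy_level(ResearchDesign(False, True, True)))  # -> "Authentic (baseline)"
```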
Ideal - Systematic documentation and data collection: In the most desirable case, the learning design (including actors, roles, resources, activities, timeline, and learning objectives) is set up-front and documented in an authoring tool. Then, during the enactment, logs are collected automatically from the digital space and systematic observations from the physical one. In addition, a further layer, the enacted lesson structure, is inferred through unstructured observations. To ensure interoperability, actors and objects should be identifiable (across the learning design, logs and observations) and timestamps for each event should be registered. Once the data is aggregated in a multimodal dataset, further analysis can be executed.
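As an illustration of this interoperability requirement, the sketch below (using pandas; all identifiers, labels and timestamps are hypothetical) aligns timestamped platform logs and coded observations that share actor identifiers into one multimodal dataset:

```python
import pandas as pd

# Hypothetical records: in the ideal scenario, actors share identifiers across the
# learning design, the platform logs and the observations, and every event is timestamped.
logs = pd.DataFrame([
    {"timestamp": "2020-03-02T09:05:00", "actor": "student_07", "verb": "submitted", "object": "quiz_1"},
])
observations = pd.DataFrame([
    {"timestamp": "2020-03-02T09:04:30", "actor": "student_07", "code": "asks_peer_for_help", "phase": "group_work"},
])
for df in (logs, observations):
    df["timestamp"] = pd.to_datetime(df["timestamp"])

# Align physical and digital events per actor, tolerating small clock differences.
aligned = pd.merge_asof(
    observations.sort_values("timestamp"),
    logs.sort_values("timestamp"),
    on="timestamp", by="actor",
    direction="nearest", tolerance=pd.Timedelta("2min"),
)
print(aligned)
```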
Figure 1: Context-Aware MMLA Taxonomy
Authentic (baseline) - Non-systematic documentation but systematic data collection: We regard this level as a compromise: it accepts the limitations of authentic settings while remaining rich in terms of data. Here, the predefined learning design cannot be automatically used to guide the analysis (either because of its format or because it is not available). However, the timestamped lesson structure is inferred by the observer. Consequently, the actors are not identifiable across observations and digital traces. Nevertheless, both structured observations and logs are systematically gathered and collected in the Learning Record Store using a common format (e.g., xAPI). These conditions enable the application of contextualized analysis at a more basic, baseline level, using multimodal analytics.
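As a sketch of what such systematic collection into a Learning Record Store could look like, the snippet below assembles a minimal xAPI-style statement for one observed event and posts it to a placeholder LRS endpoint (the anonymised actor account, activity URI, endpoint and credentials are illustrative, not tied to any specific deployment):

```python
import requests

# A minimal, illustrative xAPI statement encoding one human-labelled observation event.
statement = {
    "actor": {"objectType": "Agent",
              "account": {"homePage": "https://example.org/school", "name": "student_07"}},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/interacted",
             "display": {"en-US": "interacted"}},
    "object": {"objectType": "Activity",
               "id": "https://example.org/activities/group-discussion",
               "definition": {"name": {"en-US": "Group discussion (observed)"}}},
    "timestamp": "2020-03-02T09:04:30Z",
}

requests.post(
    "https://lrs.example.org/xapi/statements",  # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),          # placeholder credentials
)
```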
Limited - Non-systematic documentation or data collection: Data collection happens non-systematically. As in the previous case, no information about the learning design is available (i.e., actors are not known). In terms of the design of the data collection, the protocol with corresponding codes may not be predefined, and semi-structured (non-systematic) observations are used. Thus, even if logs are systematically gathered, the lack of systematization of the observations hinders the application of multimodal data analysis. Although this is not an advisable scenario, logs and observations can be analysed independently and still provide an overview of what happened in the physical and digital planes. Besides, even if observations are done systematically, if the vocabulary (actors, objects and actions) is not agreed upon across datasets, the potential of the multimodal analysis could be limited.
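The vocabulary issue can be illustrated with a small sketch (all labels below are invented for illustration): unless dataset-specific observation codes and platform log verbs are mapped to agreed terms, events referring to the same construct cannot be analysed jointly.

```python
# Illustrative only: without an agreed vocabulary, the same construct may appear
# under different labels in each dataset, so a mapping to shared terms is needed
# before any joint (multimodal) analysis.
SHARED_VOCABULARY = {
    "asks_peer_for_help":   "peer_interaction",  # observation code
    "posted_forum_reply":   "peer_interaction",  # platform log verb
    "teacher_explains":     "instruction",       # observation code
    "opened_lecture_video": "instruction",       # platform log verb
}

def normalise(event_label: str) -> str:
    """Map a dataset-specific label to the shared vocabulary (or flag it as unmapped)."""
    return SHARED_VOCABULARY.get(event_label, "unmapped")

print(normalise("posted_forum_reply"))  # -> "peer_interaction"
```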
3 DISCUSSION, CHALLENGES AND FUTURE RESEARCH
This paper overviewed current challenges in the MMLA community concerning data contextualization and sense-making needs, especially in authentic learning scenarios. Based on these challenges and problems, we suggested aligning modern and traditional data collection methods (human-labelled and automated) and LD. As researchers and practitioners need to take authentic learning settings into account in MMLA data collection, we proposed the Context-aware MMLA Taxonomy to classify different levels of data collection and documentation for different research designs. It is
worth noting that we also created specific conceptual and technological tools (Eradze & Laanpere, 2017; Eradze, Rodríguez-Triana, & Laanpere, 2017). Both the taxonomy and the tools have been evaluated in authentic settings (corresponding to the baseline scenario) through an iterative analysis of multimodal data (human-labelled and automated observations) involving different qualitative sources such as teacher reflections and qualitative observations. Preliminary results show that, in authentic settings, the baseline scenario was useful for two-level contextualization: the observed lesson structure and the human-labelled observations. At the same time, in this specific case, systematic human-labelled observations introduced additional semantics and pedagogical constructs, indicating the potential of bringing theoretical constructs into the automated observation datasets through (validated) coding schemas. This further contributes to the creation of a hypothesis space.
However, to enable the alignment of MMLA observations and LD in ideal scenarios (see Figure 1), and to facilitate the adoption of MMLA in the context of classroom observations by end users, further support for sense-making and analysis is needed to enable actionable insights based on MMLA data. To reach that goal, it would be necessary to create MMLA architectures and pipelines to integrate MMLA data and visualize it in a dashboard. In this regard, the on-going MMLA research efforts (Schneider, Di Mitri, Limbu, & Drachsler, 2018; Shankar et al., 2019) look very promising. At the same time, further research is needed on the pedagogically-grounded and theory-driven analysis of data and on understanding how the Context-aware MMLA Taxonomy and the related solutions can inform teaching practice.
REFERENCES
Alison Bryant, J., Liebeskind, K., & Gestin, R. (2017). Observational Methods. In The International Encyclopedia of Communication Research Methods (pp. 1–10). https://doi.org/10.1002/9781118901731.iecrm0171
Anguera, M. T., Portell, M., Chacón-Moscoso, S., & Sanduvete-Chaves, S. (2018). Indirect observation in everyday contexts: Concepts and methodological guidelines within a mixed methods framework. Frontiers in Psychology, 9, 13. https://doi.org/10.3389/fpsyg.2018.00013
Blikstein, P., & Worsley, M. (2016). Multimodal Learning Analytics and Education Data Mining: Using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238. https://doi.org/10.18608/jla.2016.32.11
Buckingham Shum, S., Ferguson, R., & Martinez-Maldonado, R. (2019). Human-Centred Learning Analytics. Journal of Learning Analytics, 6(2), 1–9. https://doi.org/10.18608/jla.2019.62.1
Cohen, L., Manion, L., & Morrison, K. (2018). Research methods in education (8th ed.). Routledge. https://doi.org/10.1080/19415257.2011.643130
Dagnino, F. M., Dimitriadis, Y. A., Pozzi, F., Asensio-Pérez, J. I., & Rubia-Avi, B. (2018). Exploring teachers' needs and the existing barriers to the adoption of Learning Design methods and tools: A literature survey. British Journal of Educational Technology, 49(6), 998–1013. https://doi.org/10.1111/bjet.12695
Daniel, B. K. (2019). Big Data and data science: A critical review of issues for educational research.
British Journal of Educational Technology. https://doi.org/10.1111/bjet.12595
Di Mitri, D., Schneider, J., Klemke, R., Specht, M., & Drachsler, H. (2019). Read Between the Lines: An Annotation Tool for Multimodal Data for Learning. Proceedings of the 9th International Conference on Learning Analytics & Knowledge, 51–60. ACM.
Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). From signals to knowledge: A conceptual model for multimodal learning analytics. Journal of Computer Assisted Learning, 34(4), 338–349. https://doi.org/10.1111/jcal.12288
Eradze, M., & Laanpere, M. (2017). Lesson observation data in learning analytics datasets: Observata. In É. Lavoué, H. Drachsler, K. Verbert, J. Broisin, & M. Pérez-Sanagustín (Eds.), Data Driven Approaches in Digital Education. EC-TEL 2017. Lecture Notes in Computer Science, Vol. 10474 (pp. 504–508). Springer, Cham. https://doi.org/10.1007/978-3-319-66610-5_50
Eradze, M., Rodríguez-Triana, M. J., & Laanpere, M. (2017). Semantically Annotated Lesson Observation Data in Learning Analytics Datasets: a Reference Model. Interaction Design and Architecture(s) Journal, 33, 75–91. Retrieved from http://www.mifav.uniroma2.it/inevent/events/idea2010/doc/33_4.pdf
Eradze, M., Rodríguez Triana, M. J., & Laanpere, M. (2017). How to aggregate lesson observation data into learning analytics dataset? Joint Proceedings of the 6th Multimodal Learning Analytics (MMLA) Workshop and the 2nd Cross-LAK Workshop co-located with the 7th International Learning Analytics and Knowledge Conference (LAK 2017), CEUR Workshop Proceedings, Vol. 1828, 74–81. CEUR.
Freedman, D. H. (2010). Why scientific studies are so often wrong: The streetlight effect. Discover
Magazine, 26.
Hernández-Leo, D., Rodriguez Triana, M. J., Inventado, P. S., & Mor, Y. (2017). Preface: Connecting Learning Design and Learning Analytics. Interaction Design and Architecture(s) Journal, 33, 3–8. Retrieved from https://infoscience.epfl.ch/record/231720
Joksimović, S., Kovanović, V., & Dawson, S. (2019). The Journey of Learning Analytics. HERDSA Review of Higher Education, 6, 27–63.
Kahng, S., & Iwata, B. A. (1998). Computerized systems for collecting real-time observational data. Journal of Applied Behavior Analysis, 31(2), 253–261.
Lockyer, L., & Dawson, S. (2011). Learning designs and learning analytics. Proceedings of the 1st International Conference on Learning Analytics and Knowledge - LAK '11, 153. https://doi.org/10.1145/2090116.2090140
Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing pedagogical action: Aligning learning analytics with learning design. American Behavioral Scientist, 57(10), 1439–1459.
Mangaroska, K., & Giannakos, M. N. (2018). Learning analytics for learning design: A systematic
literature review of analytics-driven design to enhance learning. IEEE Transactions on Learning
Technologies, 11. https://doi.org/10.1109/TLT.2018.2868673
Ochoa, X., & Worsley, M. (2016). Augmenting Learning Analytics with Multimodal Sensory Data. Journal of Learning Analytics, 3(2), 213–219.
Ocumpaugh, J., Baker, R. S., Rodrigo, M. M., Salvi, A., van Velsen, M., Aghababyan, A., & Martin, T. (2015). HART. Proceedings of the 33rd Annual International Conference on the Design of Communication - SIGDOC '15, 1–6. https://doi.org/10.1145/2775441.2775480
Rodríguez-Triana, M. J., Martínez-Monés, A., Asensio-Pérez, J. I., & Dimitriadis, Y. (2013). Towards a script-aware monitoring process of computer-supported collaborative learning scenarios. International Journal of Technology Enhanced Learning, 5(2), 151–167. https://doi.org/10.1504/IJTEL.2013.059082
Schneider, J., Di Mitri, D., Limbu, B., & Drachsler, H. (2018). Multimodal learning hub: A tool for capturing customizable multimodal learning experiences. European Conference on Technology Enhanced Learning, 45–58. Springer.
Shankar, S. K., Ruiz-Calleja, A., Serrano-Iglesias, S., Ortega-Arranz, A., Topali, P., & Martínez-Monés, A. (2019). A Data Value Chain to Model the Processing of Multimodal Evidence in Authentic Learning Scenarios.
Worsley, M., Abrahamson, D., Blikstein, P., Grover, S., Schneider, B., & Tissenbaum, M. (2016). Situating multimodal learning analytics. 12th International Conference of the Learning Sciences: Transforming Learning, Empowering Learners, ICLS 2016, 1346–1349. International Society of the Learning Sciences (ISLS).
Observational research utilizes a naturalistic setting, where the researcher gathers data by watching events or conversations unfold. There is a range of approaches to observation, including covert versus overt, participant and naturalistic versus controlled. Although the strength of this methodology lies in the participants being more likely to behave and speak in ways that are more candid, it has some ethical concerns when the researcher does not make their intentions to collect data known.