Conference Paper

How to Complement Learning Analytics with Smartwatches?: Fusing Physical Activities, Environmental Context, and Learning Activities

... It is not surprising, then, that this same principle should carry over into how feedback or multimodal data should be shared with researchers and participants. Development of multimodal feedback has been prevalent within the HCI community (Ciordas-Hertel, 2020; Freeman et al., 2017; Limbu, Jarodzka, Klemke, & Specht, 2019) but has scarcely been explored within the MMLA community (Worsley & Ochoa, 2020). Instead, there has been a tendency to forget about multimodality as soon as the data have been collected and analyzed, resorting to traditional charts and figures in a dynamic dashboard to display data. ...
... While seemingly far-fetched, current capabilities in HCI make this a possibility (Lopes & Baudisch, 2017). As a somewhat less extreme example, consider the opportunity to provide real-time feedback to students during a group collaboration session using vibrations on a phone or smartwatch (e.g., Ciordas-Hertel, 2020). Instead of highlighting student over-participation or distracting behaviour in a shared group display, the student might receive an individual notification in the form of a vibration or text-based notification. ...
Article
Full-text available
Multimodal learning analytics (MMLA) has increasingly been a topic of discussion within the learning analytics community. The Society of Learning Analytics Research is home to the CrossMMLA Special Interest Group and regularly hosts workshops on MMLA during the Learning Analytics Summer Institute (LASI). In this paper, we articulate a set of 12 commitments that we believe are critical for creating effective MMLA innovations. Moreover, as MMLA grows in use, it is important to articulate a set of core commitments that can help guide both MMLA researchers and the broader learning analytics community. The commitments that we describe are deeply rooted in the origins of MMLA and also reflect the ways that MMLA has evolved over the past 10 years. We organize the 12 commitments in terms of (i) data collection, (ii) analysis and inference, and (iii) feedback and data dissemination and argue why these commitments are important for conducting ethical, high-quality MMLA research. Furthermore, in using the language of commitments, we emphasize opportunities for MMLA research to align with established qualitative research methodologies and important concerns from critical studies.
Preprint
Full-text available
Learning personalization has proven its effectiveness in enhancing learner performance. Therefore, modern digital learning platforms have been increasingly depending on recommendation systems to offer learners personalized suggestions of learning materials. Learners can utilize those recommendations to acquire certain skills for the labor market or for their formal education. Personalization can be based on several factors, such as personal preference, social connections or learning context. In an educational environment, the learning context plays an important role in generating sound recommendations, which not only fulfill the preferences of the learner, but also correspond to the pedagogical goals of the learning process. This is because a learning context describes the actual situation of the learner at the moment of requesting a learning recommendation. It provides information about the learner's current state of knowledge, goal orientation, motivation, needs, available time, and other factors that reflect their status and may influence how learning recommendations are perceived and utilized. Context-aware recommender systems have the potential to reflect the logic that a learning expert may follow in recommending materials to students with respect to their status and needs. In this paper, we review the state-of-the-art approaches for defining a user's learning-context. We provide an overview of the definitions available, as well as the different factors that are considered when defining a context. Moreover, we further investigate the links between those factors and their pedagogical foundations in learning theories. We aim to provide a comprehensive understanding of contextualized learning from both pedagogical and technical points of view. By combining those two viewpoints, we aim to bridge a gap between both domains, in terms of contextualizing learning recommendations.
Article
Full-text available
Learning personalization has proven its effectiveness in enhancing learner performance. Therefore, modern digital learning platforms have been increasingly depending on recommendation systems to offer learners personalized suggestions of learning materials. Learners can utilize those recommendations to acquire certain skills for the labor market or for their formal education. Personalization can be based on several factors, such as personal preference, social connections or learning context. In an educational environment, the learning context plays an important role in generating sound recommendations, which not only fulfill the preferences of the learner, but also correspond to the pedagogical goals of the learning process. This is because a learning context describes the actual situation of the learner at the moment of requesting a learning recommendation. It provides information about the learner’s current state of knowledge, goal orientation, motivation, needs, available time, and other factors that reflect their status and may influence how learning recommendations are perceived and utilized. Context-aware recommender systems have the potential to reflect the logic that a learning expert may follow in recommending materials to students with respect to their status and needs. During the last decade, several approaches have emerged in the literature to define the learning context and the factors that may capture it. Those approaches led to different definitions of contextualized learner-profiles. In this paper, we review the state-of-the-art approaches for defining a user’s learning-context. We provide an overview of the definitions available, as well as the different factors that are considered when defining a context. Moreover, we further investigate the links between those factors and their pedagogical foundations in learning theories. We aim to provide a comprehensive understanding of contextualized learning from both pedagogical and technical points of view. By combining those two viewpoints, we aim to bridge a gap between both domains, in terms of contextualizing learning recommendations.
Chapter
Full-text available
Collaboration is an important 21st century skill; it can take place in a remote or co-located setting. Co-located collaboration (CC) is a very complex process that involves subtle human interactions that can be described with multimodal indicators (MI) like gaze, speech and social skills. In this paper, we first give an overview of related work that has identified indicators during CC. Then, we look into the state-of-the-art studies on feedback during CC which also make use of MI. Finally, we describe a Wizard of Oz (WOz) study where we design a privacy-preserving research prototype with the aim to facilitate real-time collaboration in-the-wild during three co-located group PhD meetings (of 3-7 members). Here, human observers stationed in another room act as a substitute for sensors to track different speech-based cues (like speaking time and turn taking); this drives a real-time visualization dashboard on a public shared display. With this research prototype, we want to pave the way for design-based research to track other multimodal indicators of CC by extending this prototype design using both humans and sensors.
Article
Full-text available
Multimodality in learning analytics and learning science is under the spotlight. The landscape of sensors and wearable trackers that can be used for learning support is evolving rapidly, as well as data collection and analysis methods. Multimodal data can now be collected and processed in real time at an unprecedented scale. With sensors, it is possible to capture observable events of the learning process such as the learner's behaviour and the learning context. The learning process, however, also consists of latent attributes, such as the learner's cognitions or emotions. These attributes are unobservable to sensors and need to be elicited by human-driven interpretations. We conducted a literature survey of experiments using multimodal data to frame the young research field of multimodal learning analytics. The survey explored the multimodal data used in related studies (the input space) and the learning theories selected (the hypothesis space). The survey led to the formulation of the Multimodal Learning Analytics Model, whose main objectives are (O1) mapping the use of multimodal data to enhance the feedback in a learning context; (O2) showing how to combine machine learning with multimodal data; and (O3) aligning the terminology used in the field of machine learning and learning science. © 2018 The Authors. Journal of Computer Assisted Learning Published by John Wiley & Sons, Ltd.
Conference Paper
Full-text available
Ecological Momentary Assessment (EMA) is a method of in situ data collection for assessment of behaviors, states, and contexts. Questions are prompted during everyday life using an individual's mobile device, thereby reducing recall bias and increasing validity over other self-report methods such as retrospective recall. We describe a microinteraction-based EMA method ("micro" EMA, or μEMA) using smartwatches, where all EMA questions can be answered with a quick glance and a tap, nearly as quickly as checking the time on a watch. A between-subjects, 4-week pilot study was conducted where μEMA on a smartwatch (n=19) was compared with EMA on a phone (n=14). Despite an approximately 8-fold increase in the number of interruptions, μEMA had a significantly higher compliance rate, completion rate, and first prompt response rate, and μEMA was perceived as less distracting. The temporal density of data collection possible with μEMA could prove useful in ubiquitous computing studies.
Article
Full-text available
Background The just-in-time adaptive intervention (JITAI) is an intervention design aiming to provide the right type/amount of support, at the right time, by adapting to an individual's changing internal and contextual state. The availability of increasingly powerful mobile and sensing technologies underpins the use of JITAIs to support health behavior, as in such a setting an individual's state can change rapidly, unexpectedly, and in his/her natural environment. Purpose Despite the increasing use and appeal of JITAIs, a major gap exists between the growing technological capabilities for delivering JITAIs and research on the development and evaluation of these interventions. Many JITAIs have been developed with minimal use of empirical evidence, theory, or accepted treatment guidelines. Here, we take an essential first step towards bridging this gap. Methods Building on health behavior theories and the extant literature on JITAIs, we clarify the scientific motivation for JITAIs, define their fundamental components, and highlight design principles related to these components. Examples of JITAIs from various domains of health behavior research are used for illustration. Conclusion As we enter a new era of technological capacity for delivering JITAIs, it is critical that researchers develop sophisticated and nuanced health behavior theories capable of guiding the construction of such interventions. Particular attention has to be given to better understanding the implications of providing timely and ecologically sound support for intervention adherence and retention.
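The JITAI components described in this abstract (a tailoring variable, a decision point, decision rules, and intervention options) can be illustrated with a minimal sketch. The variable names, thresholds, and the "breathing exercise" option below are hypothetical examples, not taken from the paper:

```python
from dataclasses import dataclass

# Hypothetical tailoring variables observed at a JITAI decision point.
@dataclass
class UserState:
    stress_level: float   # e.g., 0.0-1.0, derived from wearable signals
    is_available: bool    # receptivity, e.g., not driving or in a meeting
    prompts_today: int    # burden cap to avoid over-prompting

def decide_intervention(state: UserState, max_prompts: int = 3) -> str:
    """A minimal decision rule: offer support only when the tailoring
    variable crosses a threshold AND the person is receptive."""
    if state.prompts_today >= max_prompts:
        return "no_prompt"           # respect participant burden
    if not state.is_available:
        return "no_prompt"           # avoid untimely interruptions
    if state.stress_level > 0.7:
        return "breathing_exercise"  # just-in-time intervention option
    return "no_prompt"
```

In a real JITAI the decision rule would be informed by theory and empirical evidence, which is precisely the gap the paper discusses.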
Article
Full-text available
The goal of Learning Analytics is to understand and improve learning. However, learning does not always occur through or mediated by a technological system that can collect digital traces. To be able to study learning in non-technology centered environments, several signals, such as video and audio, should be captured, processed and analyzed to produce traces of the actions and interactions of the actors of the learning process. The use and integration of the different modalities present in those signals is known as Multimodal Learning Analytics. This editorial presents a brief introduction to this new variation of Learning Analytics and summarizes the four representative articles included in this special issue. The editorial closes with a small discussion about the current opportunities and challenges in multimodal learning analytics.
Article
One key factor for the successful outcome of a Learning Analytics (LA) infrastructure is the ability to decide which software architecture concept is necessary. Big Data can be used to face the challenges LA holds. Additional privacy challenges are introduced for Europeans by the General Data Protection Regulation (GDPR). Beyond that, the challenge of how to gain the trust of the users remains. We found diverse architectural concepts in the domain of LA. Selecting an appropriate solution is not straightforward. Therefore, we conducted a structured literature review to assess the state-of-the-art and provide an overview of Big Data architectures used in LA. Based on the examination of the results, we identify common architectural components and technologies and present them in the form of a mind map. Linking the findings, we propose an initial approach towards a Trusted and Interoperable Learning Analytics Infrastructure (TIILA).
Article
Interactive ambulatory assessment (IAA) provides a new approach to investigate and promote self-regulation directly in a student's daily learning routine. A total of 89 students were randomly assigned to the intervention (IG, n = 43) and control group (CG, n = 46). During preparation for an academic deadline, all participants answered questions related to their learning behaviors; the questions were presented daily via electronic diaries. The smartphones of the IG were additionally equipped with intervening features for overcoming procrastination. The IG participants were provided automated, individualized feedback daily regarding their learning behaviors and procrastination tendencies. Additionally, they received suggestions related to strategies to foster self-regulation based on their individual reasons for procrastination. Multilevel model analyses revealed decreased procrastination and increased completed workload for the IG compared to the CG. In a follow-up measurement during the preparation for a second deadline, the IG maintained these positive effects and increased the effectiveness of its study time.
Article
Effective time management is essential for us all, whether students or anyone else. There are many factors which affect how well students manage their time and in what ways. As with everything, some are excellent at managing their time and others are not. As faculty, we can assist our learners to better manage their time, whether this is in the online learning environment or any other. However, studies reveal that the effect of time management training on time management practices varies, and there is therefore a need to explore this further. This study investigates how the practice of time management, an important self-regulated learning enabler, affects learning in the online learning environment. An automated adaptive time management enabling system was used to guide students in managing their time more effectively. The system assisted students in their time management through visual reinforcement, adaptive release, learning monitors and learning motivators. The findings showed that the use of the time management enabling system facilitated and guided the students in studying the course in a consistent manner and aided students in practising more effective time management thus impacting performance. In summary, positive changes were made to their time management behaviours and these subsequently improved their self-regulation.
Article
Drug addiction is a chronic brain-based disorder that affects a person's behavior and leads to an inability to control drug usage. Ubiquitous physiological sensing technologies to detect illicit drug use have been well studied and understood for different types of drugs. However, we currently lack the ability to continuously and passively measure the user state in ways that might shed light on the complex relationships between cocaine-induced subjective states (e.g., craving and euphoria) and compulsive drug-seeking behavior. More specifically, the applicability of wearable sensors to detect drug-related states is underexplored. In the current work, we take an initial step in the modeling of cocaine craving, euphoria and drug-seeking behavior using electrocardiographic (ECG) and respiratory signals unobtrusively collected from a wearable chest band. Ten experienced cocaine users were studied using a human laboratory paradigm of self-regulated (i.e., "binge") cocaine administration, during which self-reported visual analog scale (VAS) ratings of cocaine-induced subjective effects (i.e., craving and euphoria) and behavioral measures of drug-seeking behavior (i.e., button clicks for drug infusions) are collected. Our results are encouraging and show that self-reported VAS Craving scores are predicted with a normalized root-mean-squared error (NRMSE) of 17.6% and a Pearson correlation coefficient of 0.49. Similarly, for VAS Euphoria prediction, an NRMSE of 16.7% and a Pearson correlation coefficient of 0.73 were achieved. We further analyze the relative importance of different morphology-related ECG and respiratory features for craving and euphoria prediction. Demographic factor analysis reveals how one single factor (i.e., average dollar ($) per cocaine use) can help to further boost the performance of our craving and euphoria models. Lastly, we model drug-seeking behavior using cardiac and respiratory signals. Specifically, we demonstrate that the latter signals can predict participant button clicks with an F1 score of 0.80 and estimate different levels of click density with a correlation coefficient of 0.85 and an NRMSE of 17.9%.
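The two evaluation metrics reported above, NRMSE and the Pearson correlation coefficient, can be computed as follows. Note that NRMSE has several normalization conventions; the sketch below normalizes by the observed range, which is one common choice, and the paper may use a different one:

```python
import math

def nrmse(y_true, y_pred):
    """Root-mean-squared error normalized by the range of the observed
    values, reported as a percentage (range normalization is one common
    convention; the exact normalization used in the paper is assumed)."""
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
    return 100.0 * rmse / (max(y_true) - min(y_true))

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A Pearson r of 0.49 (craving) versus 0.73 (euphoria) thus indicates a moderate and a strong linear association, respectively, between predicted and self-reported VAS scores.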
Conference Paper
Capturing fine-grained hand activity could make computational experiences more powerful and contextually aware. Indeed, philosopher Immanuel Kant argued, "the hand is the visible part of the brain." However, most prior work has focused on detecting whole-body activities, such as walking, running and bicycling. In this work, we explore the feasibility of sensing hand activities from commodity smartwatches, which are the most practical vehicle for achieving this vision. Our investigations started with a 50 participant, in-the-wild study, which captured hand activity labels over nearly 1000 worn hours. We then studied this data to scope our research goals and inform our technical approach. We conclude with a second, in-lab study that evaluates our classification stack, demonstrating 95.2% accuracy across 25 hand activities. Our work highlights an underutilized, yet highly complementary contextual channel that could unlock a wide range of promising applications.
Conference Paper
This paper introduces the Visual Inspection Tool (VIT) which supports researchers in the annotation of multimodal data as well as the processing and exploitation for learning purposes. While most of the existing Multimodal Learning Analytics (MMLA) solutions are tailor-made for specific learning tasks and sensors, the VIT addresses the data annotation for different types of learning tasks that can be captured with a customisable set of sensors in a flexible way. The VIT supports MMLA researchers in 1) triangulating multimodal data with video recordings; 2) segmenting the multimodal data into time-intervals and adding annotations to the time-intervals; 3) downloading the annotated dataset and using it for multimodal data analysis. The VIT is a crucial component that was so far missing in the available tools for MMLA research. By filling this gap we also identified an integrated workflow that characterises current MMLA research. We call this workflow the Multimodal Learning Analytics Pipeline, a toolkit for orchestration, the use and application of various MMLA tools.
Article
The papers in this special section focus on the topic of learning analytics.
Book
This book focuses on the uses of big data in the context of higher education. The book describes a wide range of administrative and operational data gathering processes aimed at assessing institutional performance and progress in order to predict future performance, and identifies potential issues related to academic programming, research, teaching and learning. Big data refers to data which is fundamentally too big and complex and moves too fast for the processing capacity of conventional database systems. The value of big data is the ability to identify useful data and turn it into useable information by identifying patterns and deviations from patterns.
Chapter
Our interactions with digital technologies across various spaces and times continue to generate a large amount of data. Big Data describes the significant growth in volume and variety of data that is no longer possible to manage using traditional databases. With the help of analytics, these seemingly disparate and heterogeneous quantities of data can be processed for patterns, which can in turn engender useful insights critical for decision-making. Business organisations are starting to systematically understand and explore how to process and analyse this vast array of data to improve decision-making.
Conference Paper
Various wearable sensors capturing body vibration, jaw movement, hand gesture, etc., have shown promise in detecting when one is currently eating. However, based on existing literature and user surveys conducted in this study, we argue that a Just-in-Time eating intervention, triggered upon detecting a current eating event, is sub-optimal. An eating intervention triggered at "About-to-Eat" moments could provide users with a further opportunity to adopt a better and healthier eating behavior. In this work, we present a wearable sensing framework that predicts "About-to-Eat" moments and the "Time until the Next Eating Event". The wearable sensing framework consists of an array of sensors that capture physical activity, location, heart rate, electrodermal activity, skin temperature and caloric expenditure. Using signal processing and machine learning on this raw multimodal sensor stream, we train an "About-to-Eat" moment classifier that reaches an average recall of 77%. The "Time until the Next Eating Event" regression model attains a correlation coefficient of 0.49. Personalization further increases the performance of both models to an average recall of 85% and a correlation coefficient of 0.65. The contributions of this paper include user surveys related to this problem, the design of a system to predict about-to-eat moments, and a regression model trained on multimodal sensor data in real time for potential eating interventions for the user.
Article
Context recognition is an indispensable functionality of context-aware applications that deals with automatic determination and inference of contextual information from a set of observations captured by sensors. It enables developing applications that can respond and adapt to the user's situation. Thus, much attention has been paid to developing innovative context recognition capabilities in context-aware systems. However, some existing studies rely on wearable sensors for context recognition, and this practice has limited the incorporation of contexts into practical applications. Additionally, contexts are usually provided as low-level data, which are not suitable for more advanced mobile applications. This article explores and evaluates the use of a smartphone's built-in sensors and classification algorithms for context recognition. To realize this goal, labeled sensor data were collected as training and test datasets from volunteers' smartphones while performing daily activities. Time series features were then extracted from the collected data, summarizing the user's context with 50%-overlapping sliding windows. Context recognition is achieved by inducing a set of classifiers with the extracted features. Using cross validation, experimental results show that instance-based learners and decision trees are best suited for smartphone-based context recognition, achieving over 90% recognition accuracy. Nevertheless, using leave-one-subject-out validation, the performance drops to 79%. The results also show that a smartphone's orientation and rotation data can be used to recognize user contexts. Furthermore, using data from multiple sensors, our results indicate an improvement in context recognition performance of between 1.5% and 5%. To demonstrate its applicability, the context recognition system has been incorporated into a mobile application to support context-aware personalized media recommendations.
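The windowing-and-features pipeline described in this abstract (50%-overlapping sliding windows, time-series summary features per window, features fed to a classifier) can be sketched as below. The window size and the specific feature set are illustrative assumptions; the paper's actual features may differ:

```python
import statistics

def sliding_windows(signal, window_size, overlap=0.5):
    """Split a 1-D sensor stream into fixed-size windows with the given
    fractional overlap (0.5 corresponds to the 50% overlap in the study)."""
    step = max(1, int(window_size * (1 - overlap)))
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]

def window_features(window):
    """Simple time-series summary features per window; these are common
    choices, assumed here rather than taken from the paper."""
    return {
        "mean": statistics.fmean(window),
        "std": statistics.pstdev(window),
        "min": min(window),
        "max": max(window),
    }

# Each window's feature vector would then be fed to a classifier
# (e.g., an instance-based learner or a decision tree, the two families
# the study found best suited to smartphone-based context recognition).
```

Leave-one-subject-out validation, which the abstract reports as dropping accuracy to 79%, would hold out all windows of one volunteer at a time rather than splitting windows at random, avoiding subject-specific leakage between train and test sets.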