Conference Paper

The validity and utility of activity logs as a measure of student engagement


Abstract

Learning management system (LMS) web logs provide granular, near-real-time records of student behavior as learners interact with online course materials in digital learning environments. However, it remains unclear whether LMS activity indeed reflects behavioral properties of student engagement, or how to deal with variability in LMS usage across a diversity of courses. In this study, we evaluate whether instructors' subjective ratings of their students' engagement are related to features of LMS activity for 9,021 students enrolled in 473 for-credit courses. We find that estimators derived from LMS web logs are closely related to instructor ratings of engagement; however, we also observe that there is not a single generic relationship between activity and engagement, and that what constitutes the behavioral components of "engagement" is contingent on course structure. Nevertheless, for many of these courses, modeled engagement scores are comparable to instructors' ratings in their sensitivity for predicting academic performance. As long as they are tuned to the differences between courses, activity indices from LMS web logs can provide a valid and useful proxy measure of student engagement.
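The analysis below is a minimal illustrative sketch (not the authors' published code) of the kind of per-course modeling the abstract describes: activity features are related to instructor ratings separately for each course, so the activity-engagement relationship is allowed to differ across courses. The file name, feature columns, and the binary instructor rating are hypothetical.

```python
# Illustrative sketch (not the authors' code): per-course logistic regressions
# relating LMS activity features to a binary instructor rating of engagement.
# The CSV, its columns, and the binary rating are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

logs = pd.read_csv("lms_activity_with_ratings.csv")  # one row per student-course

per_course_auc = {}
for course_id, course_df in logs.groupby("course_id"):
    X = course_df[["page_views", "assignment_submissions", "session_count"]]
    y = course_df["instructor_rated_engaged"]
    if y.nunique() < 2 or len(course_df) < 30:
        continue  # skip courses with too few rated students to fit a model
    model = LogisticRegression(max_iter=1000).fit(X, y)
    per_course_auc[course_id] = roc_auc_score(y, model.predict_proba(X)[:, 1])

# The spread of per-course fits illustrates that there is no single generic
# activity-engagement relationship across courses.
print(pd.Series(per_course_auc).describe())
```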


... LMS software, a type of CBES, covers course delivery capabilities such as administration, documentation, tracking, and reporting of programs, classroom and online events, course content, and e-learning. LMS logs "provide granular, near-real-time records of student behavior as learners interact with online course materials in digital learning environments" [10]. An LMS records student activities, such as reading, writing, taking tests, performing specific tasks, and commenting on events with peers [15]. ...
... Activity indicators in LMS logs can serve as proxies of student engagement under certain conditions [10]. A learning analytics information system (LAIS) architecture proposed by [11] involves a data pipeline covering the following stages: 1) logging services and event trackers from learning and teaching service sources, 2) batch extraction-transformation-loading (ETL) and events transformation through event adapters and services, and 3) learning analytics services. ...
... LMS logs capture user interactions within the LMS, such as session durations, page views, assignment submissions, and others [10]. They offer valuable insights into student behavior, engagement, and learning activity. ...
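As a rough illustration of how such interaction features might be derived from raw LMS event logs, the following sketch assumes a hypothetical event table (student_id, timestamp, event_type) and uses a 30-minute inactivity gap to delimit sessions; both the schema and the gap threshold are assumptions, not details from the cited studies.

```python
# Minimal sketch of feature engineering from raw LMS event logs, assuming a
# hypothetical table with columns: student_id, timestamp, event_type
# ("page_view", "submission", ...). Sessions are delimited here by a
# 30-minute inactivity gap, a common but arbitrary choice.
import pandas as pd

events = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])
events = events.sort_values(["student_id", "timestamp"])

# A new session starts whenever more than 30 minutes pass between events.
gap = events.groupby("student_id")["timestamp"].diff() > pd.Timedelta(minutes=30)
events["session_id"] = gap.groupby(events["student_id"]).cumsum()

features = events.groupby("student_id").agg(
    page_views=("event_type", lambda s: (s == "page_view").sum()),
    submissions=("event_type", lambda s: (s == "submission").sum()),
    n_sessions=("session_id", "nunique"),
)
print(features.head())
```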
Conference Paper
Full-text available
Educational data mining (EDM) can be used to design better and smarter learning technology by finding and predicting aspects of learners. Insights from EDM are based on data collected from educational environments. Among these educational environments are computer-based educational systems (CBES) such as learning management systems (LMS) and conversational intelligent tutoring systems (CITS). The use of Large Language Models (LLMs) to power a CITS holds promise due to their advanced natural language understanding capabilities. These systems offer opportunities for enriching management and entrepreneurship education. Collecting data from classes experimenting with these new technologies raises some ethical challenges. This paper presents an EDM framework for analyzing and evaluating the impact of these LLM-based CITS on learning experiences in management and entrepreneurship courses and also places strong emphasis on ethical considerations. The different learning experience aspects to be tracked are 1) learning outcomes and 2) emotions or affect and sentiments. Data sources comprise Learning Management System (LMS) logs, pre-post tests, and reflection papers gathered at multiple time points. This framework aims to deliver actionable insights for course and curriculum design and development through design science research (DSR), shedding light on the LLM-based system's influence on student learning, engagement, and overall course efficacy. Classes targeted to apply this framework have 30-40 students on average, grouped into teams of 2-6 members. They will involve sophomore to senior students aged 18 to 22 years. One entire semester takes about 14 weeks. Designed for broad application across diverse courses in management and entrepreneurship, the framework aims to ensure that the utilization of LLMs in education is not only effective but also ethically sound.
... While the complexity of digital environments demands broader and more dynamic digital skills from teachers, it also brings with it new possibilities for data analysis. The clickstream data, collected from every click on the services in the digital ecosystem, can be analyzed and used to deepen our understanding of how students learn (Gašević et al., 2015; Motz et al., 2019), which in turn can inspire changes in teaching methods. ...
... Nevertheless, this feature engineering process (Verdonck et al., 2021) is limited because indicators of activity frequency or interaction volume do not directly imply high-quality learning (Gašević et al., 2015). However, digital ecosystem logs and activity data have been found to be a valid and useful proxy for measuring behavioral aspects of student engagement, which are important predictors of academic success (Motz et al., 2019). Behavioral aspects of engagement can be studied in varied ways, for example, the temporal evolution and consistency of interaction sequences (Deeva et al., 2022) and the usage of educational materials (López Flores et al., 2023). ...
... Changes on LMS usage (Figure 4): Although interaction with digital ecosystems is not a direct measure of learning, clickstream data in these systems can be seen as a proxy for what elements the students focus on when studying, and how they distribute their time (Motz et al., 2019). To understand how students' activity changed during the pandemic, we looked for significant differences at the level of interaction with the LMS components. ...
... In addition, there is a growing interest in analyzing LMS log data to measure student engagement in online learning environments, especially from a behavioral perspective (Umer et al., 2018;Wong and Chong, 2018;Motz et al., 2019;Lu and Cutumisu, 2022). According to the theoretical framework of student engagement proposed by Fredricks et al. (2004), student engagement can be conceptualized in three dimensions: behavioral, cognitive, and emotional. ...
... Compared to emotional and cognitive engagement, behavioral engagement may be more easily and objectively measured through analyzing LMS activity logs (Wang, 2017). Motz et al. (2019) also showed that LMS log data can be a valid proxy measure of behavioral engagement. Since behavioral engagement is a crucial element for promoting academic achievement and preventing dropouts (Fredricks et al., 2004;Archambault et al., 2009;Wang and Holcombe, 2010;Wang and Eccles, 2012), LMS log data can provide important information in assisting students academically. ...
... Unlike self-reports, which can be affected by social desirability or impression management (Paulhus, 1984), LMS logs can be used to objectively measure students' actual engagement. Motz et al. (2019) suggested that LMS log data can be a useful and valid measure of student engagement by demonstrating that instructors' subjective ratings of engagement were closely associated with student engagement derived from LMS log data. Our data also revealed that the three types of indicators of student engagement (i.e., Log-Count, Log-Entropy, and Access-Rate) were positively correlated with academic performance and negatively with self-reported academic distress. ...
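The three indicator names above suggest simple aggregate statistics; the sketch below shows one plausible reading of them (the cited study's exact definitions may differ): Log-Count as the total number of events, Log-Entropy as the Shannon entropy of a student's activity spread across days, and Access-Rate as the share of days with any access. Input columns are hypothetical.

```python
# Hedged sketch of the kind of indicators named above; the cited study's exact
# definitions may differ. Log-Count = total events, Log-Entropy = Shannon
# entropy of activity across days (higher = more evenly distributed work),
# Access-Rate = share of days with any access. Column names are assumptions.
import numpy as np
import pandas as pd

events = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])  # hypothetical
events["day"] = events["timestamp"].dt.date
term_days = events["day"].nunique()

def indicators(student_events: pd.DataFrame) -> pd.Series:
    daily = student_events.groupby("day").size()
    p = daily / daily.sum()
    return pd.Series({
        "log_count": len(student_events),
        "log_entropy": float(-(p * np.log2(p)).sum()),
        "access_rate": daily.size / term_days,
    })

print(events.groupby("student_id").apply(indicators).head())
```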
Article
Full-text available
The COVID-19 pandemic has led to an abrupt transition from face-to-face learning to online learning, which has also affected the mental health of college students. In this study, we examined the relationship between students’ adjustment to online learning and their mental health by using the Dual-Continua Model. The model assumes that mental disorder and mental well-being are related yet distinct factors of mental health. For this purpose, 2,933 college students completed an online survey around the beginning of the Fall semester of 2020 (N = 1,724) and the Spring semester of 2021 (N = 1,209). We assessed participants’ mental well-being, mental disorders, and academic distress by means of the online survey. In addition, we incorporated grades and log data accumulated in the Learning Management System (LMS) as objective learning indicators of academic achievement and engagement in online learning. Results revealed that two dimensions of mental health (i.e., mental well-being and mental disorder) were independently associated with all objective and subjective online learning indicators. Specifically, languishing (i.e., low levels of mental well-being) was negatively associated with student engagement derived from LMS log data and academic achievement and was positively associated with self-reported academic distress even after we controlled for the effects of mental disorder. In addition, mental disorder was negatively related to student engagement and academic achievement and was positively related to academic distress even after we controlled for the effects of mental well-being. These results remained notable even when we controlled for the effects of sociodemographic variables. Our findings imply that applying the Dual-Continua Model contributes to a better understanding of the relationship between college students’ mental health and their adaptation to online learning. We suggest that it is imperative to implement university-wide interventions that promote mental well-being and alleviate psychological symptoms for students’ successful adjustment to online learning.
... The prevalent acceptance of online learning has prompted many researchers [3][4][5][6][7] to scrutinize the impact of online learning technology on students' engagement and achievement. The construct of student engagement has been studied for many decades and still attracts the interest of many researchers in both traditional and online education settings [8,9]. ... Question 1: Is it possible to classify a student as Actively Engaged (AE), Passively Engaged (PE) or Not Engaged (NE) based on students' activities in the LMS? ...
... Some studies focused on the number of clicks [17], which cannot reflect students' real engagement level. Others focused on data related to specific activities on an LMS (i.e., assignment submission activity [9]). Based on the prediction results, the proposed system sends feedback to the students and alerts the instructor once a student's engagement level decreases. ...
... They developed a dashboard for the instructor to provide additional interventions for students in advance of their final exam. Motz et al. [9] examined some student activities on the LMS, especially those related to assignments. They used the K-means clustering approach to classify the courses. ...
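In the spirit of the course-clustering step mentioned above, the following sketch groups courses by aggregate activity profiles with K-means before any within-cluster engagement modeling; the feature file, its columns, and the number of clusters are illustrative assumptions rather than the study's actual configuration.

```python
# Sketch of clustering courses by aggregate activity profiles before modelling
# engagement within each cluster; file name, features, and cluster count are
# illustrative assumptions, not the cited study's configuration.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

course_profiles = pd.read_csv("course_activity_profiles.csv", index_col="course_id")
X = StandardScaler().fit_transform(course_profiles)  # e.g. mean views, submissions per student

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
course_profiles["cluster"] = kmeans.labels_
print(course_profiles.groupby("cluster").mean())  # inspect what distinguishes course types
```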
Article
Full-text available
Educational research increasingly emphasizes the potential of student engagement and its impact on performance, retention and persistence. This construct has been an important paradigm in the higher education field for many decades. However, evaluating and predicting a student's engagement level in an online environment remains a challenge. The purpose of this study is to suggest an intelligent predictive system that predicts the student's engagement level and then provides the students with feedback to enhance their motivation and dedication. Three categories of students are defined depending on their engagement level (Not Engaged, Passively Engaged, and Actively Engaged). We applied three different machine-learning algorithms, namely Decision Tree, Support Vector Machine and Artificial Neural Network, to students' activities recorded in Learning Management System reports. The results demonstrate that machine learning algorithms can predict the student's engagement level. In addition, according to the performance metrics of the different algorithms, the Artificial Neural Network has a higher accuracy rate (85%) compared to the Support Vector Machine (80%) and Decision Tree (75%) classification techniques. Based on these results, the intelligent predictive system sends feedback to the students and alerts the instructor once a student's engagement level decreases. The instructor can identify the students' difficulties during the course and motivate them through e-mail reminders, course messages, or scheduling an online meeting.
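A hedged reconstruction of the comparison described in this abstract might look like the sketch below: three classifiers mapping LMS activity features to a three-level engagement label, evaluated by accuracy. The column names and the train/test split are assumptions; the study's exact features and tuning are not reproduced here.

```python
# Illustrative reconstruction (not the study's code) of comparing classifiers
# that map LMS activity features to an engagement label in
# {"not_engaged", "passively_engaged", "actively_engaged"}; columns are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("lms_activity_labels.csv")
X = data.drop(columns=["engagement_level"])
y = data["engagement_level"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC()),
                  ("neural network", MLPClassifier(max_iter=2000, random_state=0))]:
    clf.fit(X_train, y_train)
    print(name, round(accuracy_score(y_test, clf.predict(X_test)), 3))
```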
... in virtual learning environments. Motz et al. (2019) applied a logistic regression model combined with a clustering technique to predict student engagement from interactions in the Canvas learning management system. Cocea and Weibelzahl (2011) developed a disengagement prediction model using data from an e-learning system called HTML tutor. ...
... The prediction interval the authors applied was weekly, which is a very long time scale. Motz et al. (2019) investigated the relationship between features of student activity derived from log files of the Canvas LMS and instructors' ratings of student engagement. The authors applied a logistic regression model combined with a clustering technique to predict student engagement. ...
... According to Husain et al. (2018), content viewing, discussion forum use and quizzes are significantly correlated with engagement. Assignments are the most used indicator of engagement according to Motz et al. (2019). We also used the discussion forum as one of the tasks to detect collaborative engagement. ...
Article
Full-text available
Students in online learning who have other life responsibilities, such as work and family, face attrition. Constructing a model of engagement at the smallest granule of time has not been widely implemented, but doing so is important because it allows more subtle patterns to be uncovered. We built a student engagement prediction model using the 9 features, out of 13, that significantly affected the levels of student engagement and emerged in the final model. The student engagement prediction model was built using a non-linear regression technique from three factors (behavioral, collaboration, and emotional) across a micro-level time scale, such as 5 minutes, to identify at-risk students as quickly as possible before they disengage. The accuracy of the model was found to be 83.3%. The results of the study will give teachers the chance to provide early interventions and guidelines for designing online learning activities.
... Another consideration for the utility and validity of analytic behavioral measures is the overarching function of such measures in terms of what they are being compared with. Depending on the context of comparison, a feature such as "submission counts" could be indicative of more engagement if the determination is understood in terms of passing or failing a course (e.g., [32]). In contrast, when measured against perceived SRL processes, higher-than-average submission counts may reflect hasty corrections, extra attempts, "do-overs," or other behaviors representative of poor regulatory skills (see Figure 3). ...
... Examination of the relationship between these metrics therefore represents a viable path to further understand these measures in context. However, middle-ground approaches also exist in the alignment of instructor perceptions of student engagement and online behaviors (see [32]). In either case, these avenues represent an array of methodological choices with which to align self-regulated learning processes. ...
... This null finding merits some attention. Past research commonly observes that the frequency and duration of activity in an LMS reflect positive evidence of student engagement (e.g., [27,32]), so it may be the case that these positive associations exist but do not generalize across courses (see [9]), thus diluting the aggregate measures in the current study. ...
Conference Paper
A central concern in learning analytics specifically and educational research more generally is the alignment of robust, coherent measures to well-developed conceptual and theoretical frameworks. Capturing and representing processes of learning remains an ongoing challenge in all areas of educational inquiry and presents substantive considerations on the nature of learning, knowledge, and assessment & measurement that have been continuously refined in various areas of education and pedagogical practice. Learning analytics as a still developing method of inquiry has yet to substantively navigate the alignment of measurement, capture, and representation of learning to theoretical frameworks despite being used to identify various practical concerns such as at risk students. This study seeks to address these concerns by comparing behavioral measurements from learning management systems to established measurements of components of learning as understood through self-regulated learning frameworks. Using several prominent and robustly supported self-reported survey measures designed to identify dimensions of self-regulated learning, as well as typical behavioral features extracted from a learning management system, we conducted descriptive and exploratory analyses on the relational structures of these data. With the exception of learners' self-reported time management strategies and level of motivation, the current results indicate that behavioral measures were not well correlated with survey measurements. Possibilities and recommendations for learning analytics as measurements for self-regulated learning are discussed.
... Taken together, we found that regarding achievements, the halt-and-re-run persistence mechanism (portrayed by Some Incomplete and Single Complete, Multiple Incomplete and No Complete) is productive overall, while the complete-and-rerun persistence mechanism (portrayed by Multiple Complete and No Incomplete) is potentially counterproductive. This is an important addition to the existing literature, which recently suggested that micro-persistence showed a somewhat positive contribution to learning (Fang et al., 2017;Israel-Fishelson & Hershkovitz, 2021;Auvinen et al., 2015;Delen & Liew, 2016;Kovanović et al., 2016;Motz et al., 2019;Nguyen, 2020). ...
... Our study also has important methodological implications, adding to the cumulative evidence on the need to refine the ways in which engagement in online learning is measured (Kovanović et al., 2016; Motz et al., 2019; Nguyen, 2020). Overall, it makes us reconsider the very term "persistence" and highlights that "leaving" is not necessarily an antonym of it. ...
Article
Full-text available
We report on a large-scale, log-based study of the associations between persistence and success in an online game-based learning environment for elementary school mathematics. While working with applets, learners can rerun a task after completing it or can halt before completing it and run it again; both of these mechanisms may improve the score. We analyzed about 3.1 million applet runs by N=44,323 1st–6th-grade students to gain a nuanced understanding of persistence patterns, by identifying sequences of consecutive single applet runs (SoCSARs). Overall, we analyzed 2,249,647 SoCSARs and identified six patterns, based on halting and rerunning tasks, and their completion: 1) Single Complete, 2) Single Incomplete, 3) Some Incomplete and Single Complete, 4) Multiple Incomplete and No Complete, 5) Multiple Complete and No Incomplete, and 6) Multiple Complete and Some Incomplete. As expected, we found a positive correlation between SoCSAR length and success. Some patterns demonstrate low to medium positive associations with success, while others demonstrate low to medium negative associations. Furthermore, the associations between the type of persistence and success vary by grade level. We discuss these complex relationships and suggest metacognitive and motivational factors that may explain why some patterns are productive and others are not.
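The six patterns listed above can be read as a simple classification of each run sequence by how many runs were completed versus left incomplete. The sketch below encodes that reading; the paper's exact rules may differ, so treat this as an illustration of the idea rather than the authors' algorithm.

```python
# Hedged sketch: classify a sequence of consecutive runs of the same applet
# (a "SoCSAR") into the six named patterns based only on completed vs.
# incomplete runs. One plausible reading of the abstract, not the paper's code.
from typing import List

def classify_socsar(completed: List[bool]) -> str:
    n_complete = sum(completed)
    n_incomplete = len(completed) - n_complete
    if len(completed) == 1:
        return "Single Complete" if n_complete == 1 else "Single Incomplete"
    if n_complete == 0:
        return "Multiple Incomplete and No Complete"
    if n_incomplete == 0:
        return "Multiple Complete and No Incomplete"
    if n_complete == 1:
        return "Some Incomplete and Single Complete"
    return "Multiple Complete and Some Incomplete"

print(classify_socsar([False, False, True]))   # Some Incomplete and Single Complete
print(classify_socsar([True, True]))           # Multiple Complete and No Incomplete
```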
... They studied 25 unique LMS variables, and their random forest model, measured until mid-term, explained 67% of the variance in students' final grades in the top 10% of courses, 66% in the top 20%, and 62% in the remaining large courses. Motz et al. (2019) investigated the relationship between features from activity logs within a nine-campus university system's Canvas LMS and faculty-submitted course engagement feedback. Nineteen LMS features of student activity were individually examined. ...
... These variables were chosen based on their known predictive value in the literature (Calvert, 2014; Feild et al., 2018; Morris et al., 2015; Motz et al., 2019; Sandoval et al., 2018; You, 2015) and their ease of comprehension for end users. The data extract defining how these variables were pulled from the LMS is publicly available (Quick et al., 2020). ...
Preprint
Full-text available
This paper presents two studies examining the effectiveness of using learning analytics to inform targeted, proactive advising interventions aimed at improving student success. Study 1 validates a simple learning management system (LMS) learning analytic as predictive of end-of-term outcomes and persistence. Results suggest that this analytic measure, based on students’ activity in the LMS, has predictive utility for identifying students who might benefit from a proactive advising intervention. In Study 2, a randomized experiment with 458 undergraduate pre-major students, we test the hypothesis that an LMS-informed proactive advising intervention would improve end-of-term outcomes and persistence. Students in the treatment group exhibited, on average, an increase of nearly one-third of a grade point in their term GPAs, a reduction in DFWs earned, and an 80% higher likelihood of persisting compared to the control group. These findings provide strong evidence for the effectiveness of proactive advising interventions, where advisors’ efforts are targeted using learning analytics. They suggest that by transparently providing advisors with comprehensible insights, institutions might improve student outcomes and promote the use of data-informed interventions in academic advising.
... The massive amounts of educational data and digital traces from LMSs and interactive learning environments are a common source of data in learning analytics research and have been widely used to investigate several elements of learning and teaching processes (Nguyen et al., 2020; Tsai et al., 2020). Previous research based on these data has highlighted that learning platforms' data are a helpful resource for investigating students' engagement, self-regulation, and time management skills (Jovanović et al., 2021; Motz et al., 2019; Sher et al., 2020). ...
... Previous research has confirmed that activity indices from LMSs' web logs provide a reliable representation of learner behaviour (Quick et al., 2020) and student engagement (Motz et al., 2019) in varied learning environments. Joksimović et al. (2015) used trace data to examine the effect that the number and duration of four interaction types had on the students' final grades. ...
Conference Paper
Full-text available
The provision of educational material in higher education takes place through learning management systems (LMS) and other learning platforms. However, little is yet known about how and when students access the educational materials provided in order to perform better. In this paper, we aim to answer the research question: 'How do high achievers use the educational material provided to get better grades?'. To answer this question, the data from two educational platforms were merged: an LMS and a lecture capture platform. We based our analysis on a series of quizzes to understand the differences between high and non-high achievers regarding the use of lecture recordings and slides at different moments: (1) before and (2) while solving the quizzes, and (3) after their submission. Our analysis shows significant differences between both groups and highlights the value of considering all the educational platforms instead of limiting the analyses to a single data source.
... A growing body of literature reports a number of online activities that are indicators of engagement and can be derived from student activity. These metrics typically consist of the time spent on and the number of visits to learning resources, as well as interactions with videos and forums; we expand on these in Section 2. While it is acknowledged that activity logs provide valid engagement indicators that are associated with subjective engagement reports and student performance [34], there are no universal models of engagement, as different courses have different norms, students and structure. This calls for adjusting models to each particular platform. ...
... Since there are no universal models (students, learning content and course designs differ [34]), it is well known that the models defined for MOOCs are typically valid only within the scope of the MOOCs under scrutiny. There is similar agreement when it comes to user engagement, as user models of engagement are not generalisable: what some users find engaging, others do not. ...
... But despite this range of efforts, there are no published examples of scalable interventions aimed at direct support of students' adherence to assigned learning activities (examples in more limited contexts include [17], [18]). The relative scarcity of intervention tools for helping students stay on top of the college workload is particularly problematic, considering that contemporary e-learning environments provide more autonomy for students to complete (or to fail to complete) their coursework [19], these difficulties may be predictable [20], [21], and completion of this coursework is a primary factor in student engagement and success [22]. ...
... The goal of the current study is to assess the benefit of proactive educative nudges for reducing missed assignments. Adherence to class assignments is, unsurprisingly, a principal predictor of student success and engagement in college [19], [22], and considering that assignments represent one of the most heavily utilized features of learning management systems [31], this is a fertile area for improving student success at scale. ...
Article
Full-text available
As institutions of higher education increasingly utilize online learning management systems, college students are asked to submit more assignments online. Under this regime, when most assignments are posted and submitted online, it is possible to know if a student is missing a submission for an imminent deadline, and to intervene proactively to reduce missed assignments and improve student outcomes. Toward this goal, we designed and evaluated a scalable targeted intervention: a mobile app that would deploy push notifications when students were missing submissions for assignments with imminent deadlines. Results from two experimental pilots demonstrate that this intervention system significantly decreased missed assignments compared with control notifications about instructor announcements to the class (in Experiment 1), and improved assignment adherence and course grades compared with courses that were not using the app (in Experiment 2). We discuss the benefits and theoretical implications of this behavioral guide rail, a purely informative proactive intervention to mitigate risk in advance of a negative outcome.
... Such measures are commonly viewed through the broader framework of self-regulated learning, such that the quality and extent of activity can provide a composite measure of a student's motivation, engagement, time management, learning strategies, study behavior, and more (Roll & Winne, 2015;Winne, 2017). In general, studies find that these objective measures of a student's LMS activity are positively associated with engagement and achievement (Cerezo et al., 2016;Conijn et al., 2017;Joksimović et al., 2015;You, 2016;Yu & Jo, 2014), even when controlling for the amount of work assigned within a course (Motz et al., 2019). ...
... Normally, one might expect that an increased number of learning activities and increased effort on these learning activities would correspond with improved academic outcomes, as is typically the case in learning analytics examinations (Cerezo et al., 2016;Conijn et al., 2017;Joksimović et al., 2015;Motz et al., 2019;You, 2016;Yu & Jo, 2014). However, we observed precisely the opposite. ...
Article
Full-text available
Under normal circumstances, when students invest more effort in their schoolwork, they generally show evidence of improved academic achievement. But when universities abruptly transitioned to remote instruction in Spring 2020, instructors assigned rapidly-prepared online learning activities, disrupting the normal relationship between effort and outcomes. In this study, we examine this relationship using data observed from a large-scale survey of undergraduate students, from logs of student activity in the online learning management system, and from students' estimated cumulative performance in their courses (n = 4,636). We find that there was a general increase in the number of assignments that students were expected to complete following the transition to remote instruction, and that students who spent more time and reported more effort carrying out this coursework generally had lower course performance and reported feeling less successful. We infer that instructors, under pressure to rapidly put their course materials online, modified their courses to include online busywork that did not constitute meaningful learning activities, which had a detrimental effect on student outcomes at scale. These findings are discussed in contrast with other situations when increased engagement does not necessarily lead to improved learning outcomes, and in comparison with the broader relationship between effort and academic achievement.
... Though the latter two dimensions are certainly significant components of the student engagement framework, they are difficult to define and measure in practice [11]. In this paper, we mainly focus on measuring the behavioral aspect of student engagement due to its operational feasibility [9], [11], [12] and close connection with teaching practice, which is the improvement target of our research. ...
... In addition, wide-angled self-reports miss the dynamic and situational snapshot of student engagement [14]. Others have used activity logs from a learning management system (LMS) to gauge student engagement [12], but although this can provide a more accurate measure of student involvement with the material, it misses the opportunity to accurately gauge in-class engagement. Direct observation can instead provide a more objective measure for capturing dynamic changes of in-class engagement. ...
... But despite this range of efforts, there are no published examples of scalable interventions aimed at direct support of students' adherence to assigned learning activities. The relative scarcity of intervention tools for helping students stay on top of the college workload is particularly problematic, considering that contemporary e-learning environments provide more autonomy for students to complete (or to fail to complete) their coursework [17], and that completion of this coursework is a primary predictor of student engagement and success [18]. ...
... The goal of the current study is to assess the benefit of proactive educative nudges for reducing missed assignments. Adherence to class assignments is, unsurprisingly, a principal predictor of student success and engagement in college [17,18], and considering that assignments represent one of the most heavily utilized features of learning management systems [27], this is a fertile area for improving student success at scale. ...
Preprint
Full-text available
As institutions of higher education increasingly utilize online learning management systems, college students are asked to submit more assignments online. Under this regime, when most assignments are posted and submitted online, it is possible to know if a student is missing a submission for an imminent deadline, and to intervene proactively to reduce missed assignments and improve student outcomes. Toward this goal, we designed and evaluated a scalable targeted intervention: a mobile app that would deploy push notifications when students were missing submissions for assignments with imminent deadlines. Results from two experimental pilots demonstrate that this intervention system significantly decreased missed assignments compared with control notifications about instructor announcements to the class (in Experiment 1), and improved assignment adherence and course grades compared with courses that were not using the app (in Experiment 2). We discuss the benefits and theoretical implications of this behavioral guide rail, a purely informative proactive intervention to mitigate risk in advance of a negative outcome.
... Under just the right conditions in natural educational settings, it is possible that any variable could be associated with significant changes, in either direction, for students' learning outcomes. For example, research into the duration of inactivity in a course site (Conijn et al., 2017), the access of assignments after the deadline (Motz et al., 2019), the order of exemplars during study (Carvalho & Goldstone, 2017), and the immersiveness of instructional examples (Day, Motz, & Goldstone, 2015) has found opposing benefits in different contexts. Whether a researcher observes positive evidence of such an effect, fails to observe a significant effect, or observes the opposite effect may be principally determined by the scope of the researcher's analysis, and not by whether the effect "exists." ...
... But on the other hand, the broader activities of learning analytics, educational data mining, and other forms of education research utilizing big data could probably benefit from a reconsideration of how effects are analyzed and interpreted (see also Koedinger, Booth, & Klahr, 2013). Such reconsiderations may involve estimating effects separately for different kinds of courses (Motz et al., 2018c, 2019), developing new context-dependent theories of learning (Carvalho, 2018), and expanding the scope of experimental analyses to include a wide pool of independent samples (Motz et al., 2018b). ...
Preprint
Full-text available
This paper presents a brief discussion of "effects" and "relationships" in authentic educational contexts, and endeavors to scale-up our thinking about the meaning of these constructs. To discover the mere presence of a reliable main effect relating two variables in natural educational practice is often a feeble pursuit, for any effect might be observable in variable contexts with a sufficiently narrow analysis plan or with a sufficiently large sample size. In turn, this paper argues that researchers should place less emphasis on the mere discovery of relationships, and more emphasis on the analysis of the generalizability of these relationships, the ways that the relationships under investigation may interact with educationally-relevant covariates, and the identification of authentic edge cases where an expected relationship may disappear or reverse.
... That study is, of course, not unique in operationalizing engagement in such a way, as actions and timestamps are the bread and butter of log-based studies, and are easy to use for measuring various learning-related types of engagement (e.g., Deng & Benckendorff, 2017; Moubayed et al., 2020; Seidel, 2017). Importantly, predicting student engagement based on log traces from a learning management system may be course-dependent, as different courses require different types of online engagement (Motz et al., 2019), so justifying a given operationalization a priori also has to do with the specific setting in question. ...
Article
Log analysis has become a common methodology in research on computer-assisted learning. Using this method, variables measuring various aspects of learning are computed from the data stored in computer-assisted learning environments' log files; these files document fine-grained data on student interaction with the learning system, and are updated automatically, continuously, and unobtrusively. However, besides the challenges that any empirical investigation faces, log-based studies face some other, unique challenges. Despite their methodological importance, these distinctive challenges have not yet been discussed in a comprehensive manner. In this review paper, we critically examine issues of validity, reliability, generalizability and transferability, and applicability of log-based analysis. We do so by covering relevant theoretical aspects and demonstrating them via past research. We conclude with practical recommendations for researchers in the fields of Learning Analytics and Educational Data Mining.
... The use of LMS data and learning analytics tools can support learning in many ways. Such data provide detailed, nearly real-time documentation of students' activities as they engage with LMS (Motz et al., 2019). The utilization of learning analytics demonstrates significant potential for enhancing LMS user efficiency and the learning and teaching process (Tran & Meacheam, 2020). ...
Article
Full-text available
In online education, it is widely recognized that interaction and engagement have an impact on students’ academic performance. While previous research has extensively explored interactions between students, instructors, and content, there has been limited exploration of course design elements that promote the fourth type of interaction: interaction between students and the Learning Management System (LMS). Considering the connection between these interactions and students’ academic achievements, this study aims to bridge this gap in the existing literature by investigating the factors that can predict learner-LMS interactions. By analyzing LMS analytics and log data collected from 5,114 participants in an online computer science course, this quantitative study utilized a combination of Multiple Linear Regression (MLR) and Decision Tree (DT) to predict learner-LMS interactions. The chosen model, trained on 80% of the dataset and tested on the remaining 20%, demonstrated effectiveness. The findings highlight the power of the selected model in predicting learner-LMS interactions. Key predictors include students’ average submissions, average minutes, average content accesses, and average assessment accesses. Based on these key factors, the discussion provides insights for optimizing course design in online learning experiences.
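A minimal sketch of the prediction setup this abstract describes, under stated assumptions: a linear regression and a decision tree trained on 80% of the data to predict a learner-LMS interaction count from the four named predictors. The file and column names are guesses based on the abstract's wording, not the study's actual variables.

```python
# Hedged sketch of the setup described above: linear regression and a decision
# tree, trained on 80% of the data, predicting a learner-LMS interaction count
# from the four named predictors. Column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

df = pd.read_csv("learner_lms_interactions.csv")
predictors = ["avg_submissions", "avg_minutes", "avg_content_accesses", "avg_assessment_accesses"]
X_train, X_test, y_train, y_test = train_test_split(
    df[predictors], df["lms_interactions"], test_size=0.2, random_state=0)

for model in (LinearRegression(), DecisionTreeRegressor(max_depth=5, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, round(r2_score(y_test, model.predict(X_test)), 3))
```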
... With the recent spread of online education using Learning Management Systems (LMS), a significant number of learning activity logs have been accumulated and are commonly used for Learning Analytics (LA) (Motz et al., 2019). One of the primary purposes of such online education is to expand knowledge, which requires understanding individual learners' knowledge states. ...
Poster
Full-text available
This study proposes the Open Knowledge and Learner Model (OKLM), a universal learner model in which a knowledge map extracted from any domain's learning materials relates to everyday learning activities. OKLM offers various learning support, such as visualization in a dashboard, network analysis, and feedback/recommendation. To address the issue of the cost of manually extracting knowledge maps from learning materials, we present an automated method for generating them. Our experiment successfully demonstrated the generation of OKLM using this method, providing a teacher with insights into learner characteristics and structures of learning materials. Given the identified potential of OKLM, our plan includes further development as a foundational element for learning support systems.
... Most self-reported surveys have multiple items to measure multifaceted engagement (Fredricks & McColskey, 2012), while existing research also indicates that a single-item scale can be a promising alternative where quick and easy measurement of learning engagement is needed (Łukowicz et al., 2017). Recently, log data from learning applications have been considered for this purpose (Xing et al., 2023;Motz et al., 2019), but their validity as a proxy for student engagement is still under debate (Henrie et al., 2018). The relationship between such log data and traditional self-report measures needs further exploration (Ober et al., 2021). ...
Article
Over the past decades, Social Networking Tools (SNT) have been applied in educational settings to support students' engagement in learning communities. Previous studies suggested the positive effects of including students' voices in technological and instructional design. However, educators usually cannot revise the features of SNT as they like, which may limit the possibility of enhancing students' engagement (i.e., cognitive, emotional, and social-behavioral engagement). Therefore, this study explored whether changing SNT technological and instructional design based on students' voices can improve engagement. We developed a photo-sharing web-based SNT and examined whether refining the SNT and instruction design based on students' input would enhance their multifaceted engagement in a learning community. We collected the opinions and feedback from 114 undergraduate students in an environmental psychology course at a private university in the USA using surveys every three weeks. We refined the technological and instructional design accordingly. Students' engagement was measured four times during the semester, and after the semester, nine students were interviewed with regard to how the technological and instructional design changes influenced their engagement. With successive iterations, we found that students' cognitive and emotional engagement significantly improved, while their social-behavioral engagement did not change significantly during the study. Interview results further explained how the design changes influenced students' engagement. The findings suggested soliciting students’ input into SNT technological and instructional design can benefit their engagement in a learning community, while engagement was also influenced by many other factors.
... Additionally, the validity and reliability evidence of aggregate measurements is rarely reported in LA research. Researchers have started investigating the validity issue of measurements in LA and recommended methods to improve the validity, such as grounding the measurements in theories, data triangulation, and combining data-driven and theory-driven approaches (Fan, van der Graaf, et al., 2022;Motz et al., 2019;Winne, 2020). In contrast, the reliability issue is more underexplored, but the reliability of measurements is as important as its validity. ...
Article
Full-text available
Background Learning analytics (LA) research often aggregates learning process data to extract measurements indicating constructs of interest. However, the warranty that such aggregation will produce reliable measurements has not been explicitly examined. The reliability evidence of aggregate measurements has rarely been reported, leaving an implicit assumption that such measurements are free of errors. Objectives This study addresses these gaps by investigating the psychometric pros and cons of aggregate measurements. Methods This study proposes a framework for aggregating process data, which includes the conditions where aggregation is appropriate, and a guideline for selecting the proper reliability evidence and the computing procedure. We support and demonstrate the framework by analysing undergraduates' academic procrastination and programming proficiency in an introductory computer science course. Results and Conclusion Aggregation over a period is acceptable and may improve measurement reliability only if the construct of interest is stable during the period. Otherwise, aggregation may mask meaningful changes in behaviours and should be avoided. While selecting the type of reliability evidence, a critical question is whether process data can be regarded as repeated measurements. Another question is whether the lengths of processes are unequal and individual events are unreliable. If the answer to the second question is no, segmenting each process into a fixed number of bins assists in computing the reliability coefficient. Major Takeaways The proposed framework can be a general guideline for aggregating process data in LA research. Researchers should check and report the reliability evidence for aggregate measurements before the ensuing interpretation.
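The binning idea in the takeaways can be illustrated concretely: split each student's event stream into a fixed number of time bins, treat the per-bin counts as repeated measurements, and compute Cronbach's alpha over the bins as one possible reliability coefficient. The sketch below does exactly that under assumed column names; it is not the paper's exact procedure.

```python
# Sketch of segmenting each student's event stream into a fixed number of time
# bins and computing Cronbach's alpha over the per-bin counts as a reliability
# coefficient. Illustration of the general idea, not the paper's procedure.
import numpy as np
import pandas as pd

def cronbach_alpha(items: np.ndarray) -> float:
    """items: students x bins matrix of activity counts."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

events = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])  # hypothetical schema
n_bins = 10
events["bin"] = pd.cut(events["timestamp"], bins=n_bins, labels=range(n_bins))
counts = events.pivot_table(index="student_id", columns="bin", aggfunc="size", fill_value=0)
print("alpha over time bins:", round(cronbach_alpha(counts.to_numpy(dtype=float)), 3))
```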
... Students' learning activities can be analyzed further now that LMS data describing students' online learning progress and performance have become available. The LMS supports student learning by providing digital content and learning activities such as quizzes, assignments, and discussion forums (Brozina et al., 2019; Motz et al., 2019). ...
... Case studies, observations, checklists, and evaluation scales filled out by teachers are some other strategies [13]. It is crucial to specify this breadth of involvement in order to accurately assess student participation in a particular setting [14], [15]. Based on student activity and behavior in courses from LMS data, this article illustrates the dynamic nature of the behavioral, social, and cognitive components of student involvement. ...
Article
Full-text available
People around the world constantly advance into the future and improve both their personal lives and the conditions in which they live. One's education is the basis of one's knowledge, and education has a significant impact on behavior and IQ. Through the use of diverse pedagogical techniques, instructors always play a part in changing students' ways of thinking and developing their social and cognitive abilities. However, getting students to participate in an online class is still difficult. In this study, we created an intelligent predictive system that helps instructors anticipate students' levels of interest based on their activity in an online session and motivate them through regular feedback. Students' engagement is divided into three tiers based on their online session activities (not engaged, passively engaged, and actively engaged). Decision Trees (DT), Random Forest classifiers (RF), Logistic Regression (LR), and Long Short-Term Memory networks (LSTM) were among the machine learning approaches applied to the data. According to the performance measurements, LSTM is the most accurate machine learning algorithm. The instructors can get in touch with the students and inspire them by improving their teaching approaches based on the results the system produces.
... Most of these are cross-sectional studies in which engagement self-reporting methods were used rather than continuous monitoring (Henrie et al., 2017). Meanwhile, an e-learning context with the possibility of instantly recording indicators of students' behaviors and learning activities within a learning management system (LMS) provides a valid and approximate measure of student engagement in courses (Henrie et al., 2017; Motz et al., 2019). ...
Article
Full-text available
Log data of students’ activities recorded in a learning management system (LMS) can be used to measure their level of engagement in the online teaching–learning process. No previous studies have been found stating a consistent and systematically compiled list of LMS-based student engagement indicators, so this systematized review aimed to fill this gap. The authors performed an advanced search in the PubMed, Ovid, Google Scholar, Scopus, Web of Science, ProQuest, Emerald, and ERIC databases to retrieve relevant original peer-reviewed articles published until the end of June 2021. Reviewing the 32 included articles resulted in 27 indicators that were categorized into three themes and six categories as follows: (a) log-in and usage (referring to LMS, access to course material), (b) student performance (assignments, assessments), and (c) communication (messaging, forum participation). Among the categories, access to course material and messaging were the most and the least mentioned, respectively.
... Regarding the character of the trace data used, we operate on short courses which contain multimedia content (such as videos and discussion messages; details are provided in section 3). The dataset also addresses the variability across a range of subjects [24]. Moreover, it allows us to showcase the methods in a real-life scenario, as the data is collected from a commercial educational platform. ...
Conference Paper
Full-text available
Predicting academic performance using trace data from learning management systems is a primary research topic in educational data mining. An important application is the identification of students at risk of failing the course or dropping out. However, most approaches utilise past grades, which are not always available and capture little of the student's learning strategy. The end-to-end models we implement predict whether a student will pass a course using only navigational patterns in a multimedia system, with the advantage of not requiring past grades. We experiment on a dataset containing coarse-grained action logs of more than 100,000 students participating in hundreds of short courses. We propose two approaches to improve performance: a novel encoding scheme for trace data, which reflects the course structure while remaining flexible enough to accommodate previously unseen courses, and unsupervised embeddings obtained with an autoencoder. To provide insight into model behaviour, we incorporate an attention mechanism. Clustering the vector representations of student behaviour produced by the proposed methods shows that distinct learning strategies specific to low- and high-achievers are extracted.
... However, considerable evidence indicates a clear relationship between the virtual learning environment and academic performance across different areas of knowledge. Although erratic and uneven patterns of use have been described [27,28], and some authors consider that it can be a source of distraction [29], many studies have demonstrated a positive effect of the virtual campus on grades [30][31][32][33] and shown that the recorded activity can provide a valid indirect measure of student engagement and motivation [34]. ...
Article
Full-text available
One of the objectives of the European Higher Education Area (EHEA) is to center teaching on student learning and on students' continuous training and development. The EHEA emphasizes that students should be active and take responsibility for their own learning, while the teacher guides them through a dynamic, collaborative, and constructivist process among peers. Virtual learning environments have made it possible to create safe, dynamic spaces that facilitate learning. According to Gros Salvat, virtual learning environments should provide a social space in which students and teachers build knowledge together [1]. Introduction. Virtual learning environments make it possible to create dynamic spaces that facilitate learning. Investigating how students use them can identify behavioral patterns and enable early detection of students at risk of dropping out, and correlations between their use and academic performance have been described. Materials and methods. Seven virtual course spaces were studied, corresponding to four subjects from three Health Sciences degrees taught in the 2017/2018 and 2018/2019 academic years, with a total of 517 students. The logs of each space were first extracted, cleaned, and anonymized. The variables analyzed were: number of visits to the virtual campus, number of accesses to resources and URLs, and forum use. Multiple correspondence analysis was applied, followed by hierarchical cluster analysis. Results. Four clusters were obtained, ranging in size from 20.9% to 29.4% of the students, characterized by distinct behaviors in their use of the virtual campus, and relationships were established with the final grades and with the theoretical and practical marks of the subjects. Less interaction with the virtual campus was associated with lower academic performance, whereas more recorded activity was associated with better grades. Conclusions. Our study reveals groups of students with homogeneous behavior according to their use of the virtual campus and establishes relationships with academic performance. Keywords. Cluster analysis. Learning analytics. Health Sciences. Higher education. Moodle. Academic performance.
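As a rough illustration of the clustering step in the translated abstract, the sketch below applies hierarchical (Ward) clustering to students' virtual-campus usage variables and cuts the tree at four clusters; the preceding multiple correspondence analysis step is omitted, and the variable names are assumptions.

```python
# Minimal sketch of the clustering step: hierarchical (Ward) clustering of
# students on virtual-campus usage variables, cut at four clusters. The study's
# multiple correspondence analysis step is omitted; column names are assumed.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler

usage = pd.read_csv("campus_usage.csv", index_col="student_id")
# e.g. columns: visits, resource_accesses, url_accesses, forum_posts
Z = linkage(StandardScaler().fit_transform(usage), method="ward")
usage["cluster"] = fcluster(Z, t=4, criterion="maxclust")
print(usage.groupby("cluster").mean())  # compare usage profiles across clusters
```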
... Additionally, it is interesting to note that some learners affirm that they invest more time and effort in their online assignments; even when this happens, one might assume that students would achieve more academically (Cerezo et al., 2016; Conijn et al., 2017; Joksimović et al., 2015; Motz et al., 2019), but a study carried out by Motz et al. (2021) found the opposite. Students who invested more effort in their tasks earned lower grades and felt less successful than when doing their schoolwork under normal circumstances. ...
Article
Full-text available
The Coronavirus disease (COVID-19) pandemic changed education conditions worldwide, forcing all the parties involved to adapt to a new system. This study aimed to collect information related to the effects of teaching English online on English as a Foreign Language (EFL) students' achievement. Data were collected from EFL teachers and students enrolled in three different Ecuadorian universities (Technical University of Ambato, Higher Polytechnic School of Chimborazo, and University of Cuenca) from five different levels: A1, A2, B1, B1+, and B2. This preliminary paper reports the results of 480 students regarding four major sections: pedagogical practice and assessment, learning outcomes, affective factors, and students' perceptions of the advantages and disadvantages of online learning during the COVID-19 pandemic, considering Justin Shewell's hierarchy of online learning needs. An online survey questionnaire with 17 questions and a 5-point Likert scale was applied. Cronbach's alpha indicated reliability levels of 0.84 and 0.73. The Kolmogorov-Smirnov statistic, Kendall's Tau_b test, and Levene's test for homogeneity of variances were performed with the SPSS statistical program. The results made evident that online learning affected academic achievement in EFL students during the COVID-19 pandemic, which was confirmed after analyzing four main areas: pedagogical practices and assessment, learning outcomes, affective factors, and students' perceptions of the advantages and disadvantages of online learning. The importance of online learning was highlighted since it has been understood as a tool to face the emergency produced by the COVID-19 pandemic.
... SD=5.65). According to Motz et al. (2019), educators rate students' engagement in the LMS based on activities such as assignments, attendance, participation, and overall engagement. A student receives a negative engagement rating for behaviors such as not completing assignments, never attending class, and low participation. ...
... Fincham et al. [7] identified several indicators of learners' engagement such as the number of weeks the student logs in, the number of unique videos watched, and the number of unique problem submissions. Motz et al. [32] used a rather exhaustive number of features of learners' activities with a Learning Management System to analyze learners' engagement, including time on pages, average page views, and average page views per session. In line with these studies, we focus on observable actions through the interactions of learners with the learning environment. ...
Article
Full-text available
Over the last ten years, gamification has been widely integrated into digital learning environments as a way to increase learners’ motivation. However, little is known about the engaged behaviors adopted by learners when using gamified learning environments. In this paper, we analyze learners’ interactions with a gamified learning environment to study learners’ engagement in this particular context and to identify the factors that influence engaged behaviors. We also analyze the complex relationships that exist between learners’ engagement and motivation. We conducted a large-scale field study in ecological conditions, involving 257 students (13-14 years old) in 12 classes from 4 different middle schools. We identified a model of engagement that distinguishes two types of engaged behaviors: an achievement-oriented engagement for initially intrinsically motivated learners or high-achieving learners, and a perfection-oriented engagement for low-achieving learners. We show that each type of engaged behavior has a specific impact on the variation in learners’ motivation during the learning activity. This model contributes to a better understanding of how gamification can affect learners’ engaged behaviors and motivation during the learning activity according to their initial motivation and player profile. These findings open up new perspectives in terms of motivational affordances, as well as the design and dynamic adaptation of gamification based on learners’ interaction traces with the learning environment.
... While they are not widespread and their coverage is limited, tools such as WevQuery-PM [9] can lighten this burden. ... Hence, researchers emphasise methodological approaches to build student models [52]. ...
Article
Full-text available
We explore whether interactive navigational behaviours can be used as a reliable and effective source to measure the progress, achievement, and engagement of a learning process. To do this, we propose a data-driven methodology involving sequential pattern mining and thematic analysis of the low-level navigational interactions. We applied the method on an online learning platform which involved 193 students resulting in six interactive behaviours that are significantly associated with learner achievement including exploration of the first week’s materials and exploration of the forum. The value of including these behaviours in predictive models increased their explainability by 10% and accounted for an overall explainability of 82%. Performance evaluations of the models indicate 91-95% accuracy in identifying low-achieving students. Other relevant findings indicate a strong association between the reduction of the behaviours over time and student achievement. This suggests a relationship between student interface learnability and achievement: achievers become more efficient at using the functionalities of an online learning platform. These findings can provide context to learning progress and theoretical foundations of interventions against unhelpful learning behaviours.
... Motivation is particularly important in histology because it is a conceptually complex subject, in which it is important to integrate images and concepts correctly, and in which students do not always come to understand the importance of learning it for its applications throughout their training and in the long term (Campos-Sánchez et al., 2014). The activity recorded in the LMS can provide a valid proxy measure of student engagement and motivation (Motz et al., 2019). Thus, scarce or limited use of the virtual campus can be an indicator that alerts teachers and prompts them to provide individualized tutoring and to try to stimulate poorly motivated students, just as early detection of absenteeism has been a way to identify students at risk of dropping out or failing the subject (Espada, 2008; Sacristán-Díaz et al., 2012; Selvig et al., 2015; Álvarez, 2016). ...
Conference Paper
Full-text available
This paper presents the results obtained after extracting and processing the logs stored in the Moodle platform corresponding to the activity of human histology students in the virtual campus. Data were analysed with the software packages R and SPSS to reveal students’ behavior, and possible differences among them according to their academic performance were studied. Results showed that student activity was very high, much higher than the mean for Medicine virtual campuses, and that activity clearly depends on the academic schedule. Students’ behavior was not uniform; on the contrary, differences emerged when comparing clusters according to the theory grade. The decision tree revealed that students who passed with continuous assessment and those who had to sit a final exam made statistically different use of the virtual campus.
... complex concept that has different definitions depending on context and measurement [2]. We adopt a pragmatic definition of engagement to indicate that students spent a normal or extended amount of time (and likely cognitive resources) on consecutive modules, emphasizing the cognitive and behavioral aspects of engagement, bearing similarity to the definitions proposed by Miller [11] and described by Motz et al. [12]. ...
Conference Paper
Full-text available
Many existing studies analyzing log data from online learning platforms model events such as accessing a webpage or problem solving as simple binary states. In this study, we combine quality information inferred from the duration of each event with the conventional binary states, distinguishing abnormally brief events from normal or extra-long events. The new event records, obtained from students' interaction with 10 online learning modules, can be seen as a special form of language, with each "word" describing a student's state of interaction with one learning module, and each "sentence" capturing the interaction with the entire sequence. We used second-order Markov chains to learn the patterns of this new "language," with each chain using the interaction states on two given modules to indicate the interaction states on the following two modules. By visualizing the Markov chains that lead to interaction states associated with either disengagement or high levels of engagement, we observed that: 1) disengagement occurs more frequently towards the end of the 10-module sequence; 2) interaction states associated with the highest level of learning effort rarely lead to disengaged states; and 3) states containing brief learning events frequently lead to disengaged states. One advantage of our approach is that it can be applied to log data with relatively small numbers of events, which is common for many online learning systems in college-level STEM disciplines. Combining quality information with event logs is a simple attempt at incorporating students' internal condition into learning analytics.
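As a rough illustration of the second-order Markov chain idea described above (the states on two consecutive modules predicting the states on the next two), the sketch below counts and normalizes transitions over hypothetical state sequences; the state labels and sequences are invented for the example.

```python
# Illustrative sketch of a second-order Markov chain over interaction-state
# sequences: the pair of states on two modules predicts the next pair.
# State labels and sequences are hypothetical.
from collections import Counter, defaultdict

# One "sentence" per student: interaction state on each of 10 modules.
sequences = [
    ["long", "normal", "normal", "brief", "brief", "skip", "normal", "long", "brief", "skip"],
    ["normal", "normal", "long", "normal", "brief", "brief", "skip", "skip", "skip", "skip"],
]

counts = defaultdict(Counter)
for seq in sequences:
    for i in range(len(seq) - 3):
        context = (seq[i], seq[i + 1])        # states on modules i, i+1
        nxt = (seq[i + 2], seq[i + 3])        # states on modules i+2, i+3
        counts[context][nxt] += 1

# Normalize counts into transition probabilities.
transitions = {
    ctx: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
    for ctx, ctr in counts.items()
}
print(transitions[("normal", "brief")])
```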
Chapter
Full-text available
Educational data mining (EDM) can be used to design better and smarter learning technology by finding and predicting aspects of learners. Insights from EDM are based on data collected from educational environments. Among these educational environments are computer-based educational systems (CBES) such as learning management systems (LMS) and conversational intelligent tutoring systems (CITSs). The use of large language models (LLMs) to power a CITS holds promise due to their advanced natural language understanding capabilities. These systems offer opportunities for enriching management and entrepreneurship education. Collecting data from classes experimenting with these new technologies raises some ethical challenges. This paper presents an EDM framework for analyzing and evaluating the impact of these LLM-based CITS on learning experiences in management and entrepreneurship courses and also places strong emphasis on ethical considerations. The different learning experience aspects to be tracked are (1) learning outcomes and (2) emotions or affect and sentiments. Data sources comprise Learning Management System (LMS) logs, pre-post-tests, and reflection papers gathered at multiple time points. This framework aims to deliver actionable insights for course and curriculum design and development through design science research (DSR), shedding light on the LLM-based system’s influence on student learning, engagement, and overall course efficacy. Classes targeted to apply this framework have 30–40 students on average, grouped between 2 and 6 members. They will involve sophomore to senior students aged 18–22 years. One entire semester takes about 14 weeks. Designed for broad application across diverse courses in management and entrepreneurship, the framework aims to ensure that the utilization of LLMs in education is not only effective but also ethically sound.
Article
Objective: The goal was to determine whether or not there is an association between the belief that human development and family science (HDFS) is “just common sense” and academic performance in a rigorous research methods class in HDFS. Background: Naïve realism is a cognitive bias that creates a belief in common sense that is difficult to challenge. It is unknown whether student commitment to common sense impedes students' ability to learn. Method: Students (N = 112) in an HDFS research methods class were followed for a semester. Potential barriers to learning were measured through a self-report survey before the start of the semester. The outcome variable was objective performance in the course as measured by exam scores. Results: Exam scores were positively correlated with prior academic achievement and negatively correlated with student belief that the discipline is just common sense. Conclusion: Naïve realism, expressed as the belief that HDFS is just common sense, predicts poor performance in a research methods class. Implications: Higher education faculty in HDFS must directly confront the problem that our discipline is perceived as just common sense. Naïve realism must be challenged directly in coursework if students are to learn about the science of HDFS.
Conference Paper
Girls’ participation in the computer science (CS) field is influenced by several factors, including positive student engagement, confidence in their ability to learn and perform well on CS tasks, and belief in the usefulness of learning CS. While there has been substantial research on student engagement in CS in general, no studies have focused on understanding how the dimensions of student engagement (behavioural, cognitive, emotional, and social) relate to confidence and perceived usefulness in relation to gender, especially for female students in the CS field. To address this gap, a study was conducted using a multidimensional student engagement self-report instrument to collect data from female high school students in CS classes (n = 284). Spearman’s rank correlation results showed positive correlations of confidence and perceived usefulness with all four dimensions of student engagement. Therefore, increasing student confidence and perceived usefulness can successfully engage students in learning CS. The paper also highlights areas that require further investigation and discusses the challenges and implications of these findings for developing interventions to increase student engagement, confidence, and perceived usefulness in CS. The study significantly contributes to the field by measuring the multidimensional student engagement of Saudi Arabia’s high school pupils in CS classes. The findings are expected to shed light on factors that may motivate female students’ interest in CS and increase their learning.
Article
Full-text available
In asynchronous e-learning, students find themselves alone in their learning process. This feeling of loneliness results in frustration and loss of motivation, leading to a high drop-out rate. Tracking engagement while the student is learning allows intervention at an appropriate time. Instructors are overwhelmed by the data reports provided to them in online courses. To address these challenges, the researchers proposed a student engagement visualization dashboard that visualizes instantaneous engagement levels every minute, visualizes trends in student engagement levels, and filters and displays the least engaged learners. The researchers also evaluated the usability of this visualizer in a controlled experiment and found that teachers' perceived usefulness was high. The visualizer allows teachers to gain insight into the engagement levels of all the students at a glance. It also allows the teacher to take immediate action.
Conference Paper
Full-text available
In the 21st century, where technological transformation has become a "constant" in our lives, it has become necessary to determine the extent to which teachers have consistent and appropriate experiences with technology and how much they integrate technology into their own classrooms. This study aimed to determine the technology competencies of 21st-century teachers through self-assessment. Teachers' technology competency self-assessment scores for 21st-century learning were examined with respect to gender, subject area, seniority, institution, and age, and teachers' readiness for digital transformation in education was discussed within the scope of these competencies. A survey model, one of the quantitative research methods, was used. Criterion sampling, a purposive sampling method, was used to identify participants, and data were collected from 198 teachers working in institutions affiliated with the Ministry of National Education. The research data were collected using the "Technology Proficiency Self-Assessment Scale for 21st Century Learning" developed by Christensen and Knezek (2017) and adapted into Turkish by Fidan, Debbağ and Çukurbaşı (2020). The scale includes four subdimensions and consists of 24 items in total. The data, analyzed at a 95% confidence interval (p = 0.05), were first examined for normality. The scale data were found to be normally distributed, and descriptive analysis, independent-samples t-tests, and one-way ANOVA tests were used in the analysis. According to the findings, no statistically significant difference was found between teachers' technology competencies and their gender. A statistically significant difference was found between teachers' technology competencies and their subject areas: teachers in the science branches had higher technology competencies than teachers in the social sciences branches, and their competencies in integrated applications and teaching with technology were also higher. Although the mean scores of teachers working in private institutions were higher than those of teachers working in public institutions, no statistically significant difference was found between teachers' technology competencies and the type of institution in which they work. A statistically significant difference was found between teachers' technology competencies and years of seniority: teachers with up to 20 years of seniority had higher technology competencies than teachers with 21 or more years. A statistically significant difference was also found with respect to age, in favor of teachers aged 41 and under, indicating that digital-native Generation Y teachers who grew up using information technologies have better technological competencies.
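The group comparisons reported above (independent-samples t-tests and a one-way ANOVA on self-assessment scores) could be run along the following lines; the group sizes, means, and variable names here are purely illustrative and do not come from the study.

```python
# Minimal sketch, on synthetic data, of an independent-samples t-test
# (e.g. by institution type) and a one-way ANOVA (e.g. across age bands)
# on competency self-assessment scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
public = rng.normal(3.8, 0.5, 150)    # hypothetical mean scores, public institutions
private = rng.normal(4.0, 0.5, 48)    # hypothetical mean scores, private institutions
print(stats.ttest_ind(public, private))

group_20s = rng.normal(4.1, 0.5, 60)
group_30s = rng.normal(4.0, 0.5, 80)
group_40plus = rng.normal(3.7, 0.5, 58)
print(stats.f_oneway(group_20s, group_30s, group_40plus))
```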
Chapter
Many universities aim to improve students' 'learning to learn' (LTL) skills to prepare them for post-academic life. This requires evaluating LTL and integrating it into the university's curriculum and assessment regimes. Data is essential to provide evidence for the evaluation of LTL, meaning that available data sources must be connected to the types of evidence required for evaluation. This chapter describes a case study using an LTL ontology to connect the theoretical aspects of LTL with a university's existing data sources and to inform the design and application of learning analytics. The results produced by the analytics indicate that LTL can be treated as a dimension in its own right. The LTL dimension has a moderate relationship to academic performance. There is also evidence to suggest that LTL develops at an uneven pace across academic terms and that it exhibits different patterns in online as compared to face-to-face delivery methods.
Article
Many higher-education institutions have endeavored to understand students' characteristics in order to improve the quality of education. To this end, demographic information and questionnaire surveys have been used, and more recently, digital information from learning management systems and other sources has emerged for students' profiling. This study adopted a novel approach using semantic trajectory data created from smart card logs of campus buildings and class attendance records to investigate the relationship between students' trajectory patterns and academic performance. More than 4000 freshmen were observed per semester at the Songdo International Campus, Yonsei University, in South Korea during four semesters in 2016 and 2017. Dynamic time warping was newly adopted to calculate the similarities among student trajectories, and the similarities of students' trajectories were grouped by hierarchical clustering. Average grade point averages (GPAs) of the groups were evaluated and compared by major and gender. The results showed that the average GPAs were statistically different from each other in general, which confirmed the hypothesis that a student's trajectory differentiates a student's GPA. Furthermore, GPA was positively associated with students' degree of activeness in movement — the more accesses to campus facilities, the better the GPA. Besides, the differences in the average GPAs of the male groups were clearer than was the case for females, and the trajectory of the second semester better characterized an individual student. The study shows that a semantic trajectory pattern generated from location logs is a new and influential factor that can be utilized to understand students' characteristics in higher education and to predict their academic performances.
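A minimal sketch of the trajectory-similarity approach described above, pairing a classic dynamic time warping distance with hierarchical clustering; the access-count series, series length, and cluster count are hypothetical, and the DTW implementation is a simplified textbook version rather than the study's own.

```python
# Illustrative sketch: dynamic time warping distance between students'
# daily facility-access counts, then hierarchical clustering of all students
# from the pairwise DTW distances. Data are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# One access-count series per student (e.g. accesses per day over a week).
students = np.array([
    [2, 3, 4, 3, 2, 0, 0],
    [0, 1, 1, 0, 0, 0, 0],
    [3, 4, 4, 5, 3, 1, 0],
    [0, 0, 1, 1, 0, 0, 0],
], dtype=float)

n = len(students)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(students[i], students[j])

labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)
```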
Article
Full-text available
This article reports on how student engagement is measured in research on technology enhanced learning in higher education. For this purpose, a secondary analysis of a previously conducted systematic review on student engagement in higher education was carried out. 246 research instruments were extracted that relate to the cognitive, affective, and behavioral dimensions of student engagement. Although published in peer-reviewed journals, only 57.4% of the studies reveal their instrument or provide information on how they measured student engagement. Only 30.6% of the presented research instruments report reliability scores but most of those instruments rather relate to learning in general than to learning in educational technology contexts. Only four research instruments were used more than one time. These findings demonstrate the need for a convergence of instruments to operationalize student engagement. For further research, it is highly recommended to re-use instruments developed before and rely on scales with proven psychometric quality: A convergence of evaluated instruments is needed for researchers to rely on an established set of scales for the different dimensions of student engagement. To this end, we recommend relying on generic student engagement scales, as many of these reviewed instruments already exist and fulfill the requirements of psychometric criteria.
Article
Full-text available
Positive feedback has known benefits for improving task performance, but it is not clear why. On the one hand, positive feedback may direct attention to the task and one's motivations for having performed the task. On the other hand, positive feedback may direct attention to the task's value for achieving a future goal. This ambiguity presents a challenge for the design of automated feedback interventions. Specifically, it is unclear whether positive feedback will more effectively influence behavior when it praises the recipient for having performed an action, or when it highlights the action's value toward a goal. In the present study, we test these competing approaches in a large-scale field experiment (n = 1,766). Using a mobile app, we assigned college students to receive occasional notifications immediately upon submitting online assignments that either praised them for having submitted their coursework, or that highlighted the value of submitting coursework for academic success, or to a no-treatment control group. We find that only praise messages improved submission rates and course performance, suggesting that drawing attention to the feedback-eliciting task is necessary and sufficient for influencing behavior at scale.
Thesis
In Higher Education, instructors provide students with opportunities to develop essential knowledge, competencies and skills. To offer students the highest quality learning experiences, effective instructors analyze their practice, intentionally seek to identify and check their teaching assumptions, and make iterative instructional decisions based on evidence. However, teaching and learning situations are complex and ill-defined and there is a lack of a parsimonious theory of student characteristics and learning conditions that elicit optimal performance in students. Moreover, learning analytics support the processing, analysis and translation of data into actionable knowledge but there is no consensus yet on which interactions are relevant for effective learning. Thus, this study sought to gain a deeper understanding of how and why students thrived and were productively engaged, with insights from psychometric information and course trace data. Findings of this study contribute to the literature that seeks to 1) translate trace data into actionable knowledge, 2) understand those characteristics and conditions that elicit optimal student performance, or 3) demonstrate how to use academic achievement, trace data, and psychometric characteristics to analyze an instructor’s practice. This study reports on research into 4,150 unique student course interactions clustered within 46 undergraduate student trajectories during an elective blended-learning course. It sought to describe changes in students’ active and independent online interaction behaviours; explore differences in interaction trajectories between students; and examine the relationship between students’ interaction trajectories, psychometric characteristics and levels of achievement. Students’ course interaction trace data were captured by a Learning Management System (LMS). Student characteristics were collected through self-report psychometric instruments completed as supplemental, non-graded, in-class learning activities. Finally, student achievement was measured through total course, summative exam, and formative assignment grades. Restricted Maximum Likelihood (REML) linear regressions described interindividual differences in students’ growing proportion of course objects accessed across time (interaction trajectories). Maximum Likelihood (ML) multilevel longitudinal regression models, with changes in the proportion of course objects nested within individuals, significantly described students’ average and individual trajectories of interaction and differences between course assessment periods and conscientiousness levels. Pearson and Spearman correlations found significant relationships between interaction trajectories and personality traits, psychosocial maturity resolutions, self-efficacy, self-regulation, reasons for studying, and major life goals, and between interaction trajectories and student achievement (knowledge/exam grades). Significant negative relationships were found between academic achievement, psychosocial intimacy-isolation resolutions, and major life aspirations to have a family life, to make meaningful contributions, and to have fun. After reflecting on these results, this instructor concluded that the courses, although beneficial, could have better promoted students’ optimal performance by shifting to a more streamlined set of outcomes and a clearer learning path; and by realigning learning activities and intended learning outcomes to better match students’ long-range aspirations.
Findings from this study suggest that students should be treated not only as cognitive systems but that students may be productively engaged as human beings continually seeking to realize their own possibilities. Although these propositions may not be statistically generalizable, they may be analytically generalized if replicated in more education contexts.
Conference Paper
Full-text available
Analyses of student data in post-secondary education should be sensitive to the fact that there are many different topics of study. These different areas will interest different kinds of students, and entail different experiences and learning activities. However, it can be challenging to identify the distinct academic themes that students might pursue in higher education, where students commonly have the freedom to sample from thousands of courses in dozens of degree programs. In this paper, we describe the use of topic modeling to identify distinct themes of study and classify students according to their observed course enrollments, and present possible applications of this technique for the broader field of educational data mining.
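To illustrate the enrollment-as-document idea, the sketch below fits a small LDA model to a hypothetical student-by-course matrix and assigns each student to a dominant theme; the matrix, number of themes, and library choice (scikit-learn) are assumptions for the example, not the paper's configuration.

```python
# Illustrative sketch: treating each student's course enrollments as a
# "document" and fitting LDA to recover latent themes of study, then
# assigning each student to their dominant theme. Data are hypothetical.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Rows = students, columns = courses; 1 = enrolled. A real matrix would be
# built from registrar enrollment records.
enrollments = np.array([
    [1, 1, 1, 0, 0, 0],   # e.g. mostly science courses
    [1, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1, 1],   # e.g. mostly humanities courses
    [0, 0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0, 1],
])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(enrollments)         # student-by-theme proportions
dominant_theme = theta.argmax(axis=1)          # classify each student
print(dominant_theme)
print(lda.components_ / lda.components_.sum(axis=1, keepdims=True))  # theme-by-course weights
```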
Article
Full-text available
To identify the ways teachers and educational systems can improve learning, researchers need to make causal inferences. Analyses of existing datasets play an important role in detecting causal patterns, but conducting experiments also plays an indispensable role in this research. In this article, we advocate for experiments to be embedded in real educational contexts, allowing researchers to test whether interventions such as a learning activity, new technology, or advising strategy elicit reliable improvements in authentic student behaviours and educational outcomes. Embedded experiments, wherein theoretically relevant variables are systematically manipulated in real learning contexts, carry strong benefits for making causal inferences, particularly when allied with the data-rich resources of contemporary e-learning environments. Toward this goal, we offer a field guide to embedded experimentation, reviewing experimental design choices, addressing ethical concerns, discussing the importance of involving teachers, and reviewing how interventions can be deployed in a variety of contexts, at a range of scales. Causal inference is a critical component of a field that aims to improve student learning; including experimentation alongside analyses of existing data in learning analytics is the most compelling way to test causal claims.
Preprint
Full-text available
Teachers use injunctive norms when telling students what they should be doing. But researchers find that sometimes descriptive norms, information about what others are doing, more powerfully influence behavior. Currently, we examine which norm is more effective at increasing self-regulated studying and performance in an online college course. We found injunctive norms increased study behaviors aimed at fulfilling course requirements (completion of assigned activities), but did not improve learning outcomes. Descriptive norms increased behaviors aimed at improving knowledge (ungraded practice with activities after they were due), and improved performance. These results imply norms have a stronger influence over behavior when there is a match between the goal of the behavior (fulfilling course requirements vs. learning goals) and the pull of a stated norm (social approval vs. efficacy). Because the goal of education is learning, this suggests descriptive norms have a greater value for motivating self-regulated study in authentic learning environments.
Article
Full-text available
Purpose: The purpose of this paper is to propose a process mining approach to help in making early predictions to improve students’ learning experience in massive open online courses (MOOCs). It investigates the impact of various machine learning techniques in combination with process mining features to measure the effectiveness of these techniques. Design/methodology/approach: Students’ data (e.g. assessment grades, demographic information) and weekly interaction data based on event logs (e.g. video lecture interaction, solution submission time, time spent weekly) have guided this design. This study evaluates four machine learning classification techniques used in the literature (logistic regression (LR), Naïve Bayes (NB), random forest (RF) and K-nearest neighbor) to monitor weekly progression of students’ performance and to predict their overall performance outcome. Two data sets have been used: one with traditional features and a second with features obtained from process conformance testing. Findings: The results show that the techniques used in the study are able to make predictions on the performance of students. Overall accuracy (F1-score, area under curve) of machine learning techniques can be improved by integrating process mining features with standard features. Specifically, the use of LR and NB classifiers outperforms other techniques in a statistically significant way. Practical implications: Although MOOCs provide a platform for learning in a highly scalable and flexible manner, they are prone to early dropout and low completion rates. This study outlines a data-driven approach to improve students’ learning experience and decrease the dropout rate. Social implications: Early predictions based on individuals’ participation can help educators provide support to students who are struggling in the course. Originality/value: This study outlines the innovative use of process mining techniques in educational data mining to help educators gather data-driven insight on student performance in the enrolled courses.
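A hedged sketch of the classifier comparison named in the findings (logistic regression, naive Bayes, random forest, k-nearest neighbors) on synthetic weekly interaction features; the features, labels, and scoring metric are invented for the illustration and do not reproduce the study's process-mining features.

```python
# Minimal sketch, on synthetic data, of comparing four classifiers for
# predicting a pass/fail outcome from weekly interaction features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical columns: videos watched, submissions, hours spent, forum posts.
X = rng.poisson(lam=[5, 2, 3, 1], size=(300, 4)).astype(float)
y = (X @ np.array([0.3, 0.8, 0.4, 0.2]) + rng.normal(0, 1, 300) > 4).astype(int)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=7),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```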
Article
Full-text available
Predicting student performance is a major tool in learning analytics. This study aims to identify how different measures of massive open online course (MOOC) data can be used to identify points of improvement in MOOCs. In the context of MOOCs, student performance is often defined as course completion. However, students could have other learning objectives than MOOC completion. Therefore, we define student performance as obtaining personal learning objective(s). This study examines a subsample of students in a graduate-level blended MOOC who shared on-campus course completion as a learning objective. Aggregated activity frequencies, specific course item frequencies, and order of activities were analysed to predict student performance using correlations, multiple regressions, and process mining. All aggregated MOOC activity frequencies related positively to on-campus exam grade. However, this relation is less clear when controlling for past performance. In total, 65% of the specific course items showed significant correlations with final exam grade. Students who passed the course spread their learning over more days compared with students who failed. Little difference was found in the order of activities within the MOOC between students who passed and who failed. The results are combined with course evaluations to identify points of improvement within the MOOC. © 2018 The Authors. Journal of Computer Assisted Learning Published by John Wiley & Sons, Ltd.
Conference Paper
Full-text available
Learning Analytics (LA) sits at the confluence of many contributing disciplines, which brings the risk of hidden assumptions inherited from those fields. Here, we consider a hidden assumption derived from computer science, namely, that improving computational accuracy in classification is always a worthy goal. We demonstrate that this assumption is unlikely to hold in some important educational contexts, and argue that embracing computational "imperfection" can improve outcomes for those scenarios. Specifically, we show that learner-facing approaches aimed at "learning how to learn" require more holistic validation strategies. We consider what information must be provided in order to reasonably evaluate algorithmic tools in LA, to facilitate transparency and realistic performance comparisons.
Conference Paper
Full-text available
This paper aims to identify self-regulation strategies from students' interactions with the learning management system (LMS). We used learning analytics techniques to identify metacognitive and cognitive strategies in the data. We define three research questions that guide our studies, analyzing i) self-assessments of motivation and self-regulation strategies using standard methods to draw a baseline, ii) interactions with the LMS to find traces of self-regulation in observable indicators, and iii) self-regulation behaviours over the course duration. The results show that the observable indicators can better explain self-regulatory behaviour and its influence on performance than preliminary subjective assessments.
Conference Paper
Full-text available
In this paper, we propose a method for predicting students' final grades with a recurrent neural network (RNN) from the log data stored in educational systems. We applied this method to log data from 108 students and examined the prediction accuracy. The experimental results confirm that, compared with multiple regression analysis, an RNN is effective for early prediction of final grades.
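The following PyTorch sketch conveys the general idea of grade prediction from sequential log data with a recurrent network; the architecture (a single GRU layer with a linear head), the synthetic data, and the training loop are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: a recurrent network reads a student's weekly
# activity-count sequence and predicts the final grade.
import torch
import torch.nn as nn

class GradeRNN(nn.Module):
    def __init__(self, n_features=3, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, weeks, features)
        _, h = self.rnn(x)                # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

# 108 students, 15 weeks, 3 weekly features (e.g. logins, submissions, minutes/100).
torch.manual_seed(0)
X = torch.rand(108, 15, 3)
y = X.sum(dim=(1, 2)) / 45 * 100 + torch.randn(108) * 5   # synthetic final grades

model = GradeRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final training MSE:", loss.item())
```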
Article
Full-text available
This study examined the extent to which instructional conditions influence the prediction of academic success in nine undergraduate courses offered in a blended learning model (n = 4134). The study illustrates the differences in predictive power and significant predictors between course-specific models and generalized predictive models. The results suggest that it is imperative for learning analytics research to account for the diverse ways technology is adopted and applied in course-specific contexts. The differences in technology use, especially those related to whether and how learners use the learning management system, require consideration before the log data can be merged to create a generalized model for predicting academic success. A lack of attention to instructional conditions can lead to an over- or underestimation of the effects of LMS features on students' academic success. These findings have broader implications for institutions seeking generalized and portable models for identifying students at risk of academic failure.
Article
Full-text available
Contemporary literature on online and distance education almost unequivocally argues for the importance of interactions in online learning settings. Nevertheless, the relationship between different types of interactions and learning outcomes is rather complex. Analyzing 204 offerings of 29 courses over a period of six years, this study aimed at expanding the current understanding of the nature of this relationship. Specifically, with the use of trace data about interactions and utilizing multilevel linear mixed modeling techniques, the study examined whether the frequency and duration of student-student, student-instructor, student-system, and student-content interactions had an effect on learning outcomes, measured as final course grades. The findings show that the time spent on student-system interactions had a consistent and positive effect on the learning outcome, while the quantity of student-content interactions was negatively associated with the final course grades. The study also showed the importance of the educational level and the context of individual courses for the interaction types supported. Our findings further confirmed the potential of the use of trace data and learning analytics for studying learning and teaching in online settings. However, further research should account for various qualitative aspects of the interactions used while learning, different pedagogical/media features, as well as for the course design and delivery conditions in order to better explain the association between interaction types and learning achievement. Finally, the results might imply the need for the development of institutional and program-level strategies for learning and teaching that would promote effective pedagogical approaches to designing and guiding interactions in online and distance learning settings.
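A minimal sketch of a multilevel model in this spirit, fitting a random intercept per course for the relation between interaction measures and final grade using statsmodels; the variable names, simulated data, and model form are assumptions rather than the study's specification.

```python
# Minimal sketch, on synthetic data, of a mixed model relating interaction
# measures to final grade with a random intercept per course offering.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "course": rng.integers(0, 20, n),                   # 20 course offerings
    "system_time": rng.gamma(2.0, 2.0, n),              # hours of student-system interaction
    "content_clicks": rng.poisson(30, n).astype(float), # student-content interaction count
})
course_effect = rng.normal(0, 5, 20)[df["course"]]
df["grade"] = (60 + 2.0 * df["system_time"] - 0.1 * df["content_clicks"]
               + course_effect + rng.normal(0, 8, n))

model = smf.mixedlm("grade ~ system_time + content_clicks", df, groups=df["course"])
print(model.fit().summary())
```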
Article
Full-text available
Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a “grain-size” continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.
Article
Full-text available
The analysis of data collected from the interaction of users with educational and information technology has attracted much attention as a promising approach for advancing our understanding of the learning process. This promise motivated the emergence of the new research field, learning analytics, and its closely related discipline, educational data mining. This paper first introduces the field of learning analytics and outlines the lessons learned from well-known case studies in the research literature. The paper then identifies the critical topics that require immediate research attention for learning analytics to make a sustainable impact on the research and practice of learning and teaching. The paper concludes by discussing a growing set of issues that if unaddressed, could impede the future maturation of the field. The paper stresses that learning analytics are about learning. As such, the computational aspects of learning analytics must be well integrated within the existing educational research.
Article
Full-text available
Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
Article
Full-text available
Being aware of, monitoring and responding constructively to students’ signals of motivation and to students’ signals of engagement represent two important teaching skills. We hypothesised, however, that teachers would better estimate their students’ engagement than they would estimate their students’ motivation. To test this hypothesis, Korean high-school teachers rated three aspects of motivation and four aspects of engagement for each student in their class, while students completed questionnaires to provide referent self-reports of these same aspects of their motivation and engagement. Multi-level analyses showed that, after statistically controlling for the potentially confounding information of student achievement, teachers’ engagement estimates corresponded significantly to their students’ self-reports while their motivation estimate did not. These findings validate teachers’ skill in inferring their students’ classroom engagement and lead to the recommendation that teachers monitor classroom engagement to be in synch with their students during instruction.
Article
Full-text available
An early warning system can help to identify at-risk students, or predict student learning performance, by analyzing learning portfolios recorded in a learning management system (LMS). Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students' academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an "early warning" system to predict at-risk students while a course is in progress. We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. Our results showed that time-dependent variables extracted from the LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART) analysis, supplemented by AdaBoost, is the best classifier for the evaluation of learning performance investigated by this study.
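As a rough illustration of a CART-plus-AdaBoost early warning classifier, the sketch below boosts a shallow decision tree on synthetic mid-course features; the features, the at-risk labeling rule, and the hyperparameters are invented for the example.

```python
# Minimal sketch, on synthetic data, of an early-warning classifier:
# a decision tree (CART-style) boosted with AdaBoost, trained on
# features available partway through the course.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical columns: logins so far, minutes online, quizzes attempted, days since last login.
X = np.column_stack([
    rng.poisson(20, 500),
    rng.gamma(3.0, 60.0, 500),
    rng.integers(0, 6, 500),
    rng.integers(0, 21, 500),
]).astype(float)
at_risk = ((X[:, 0] < 18) & (X[:, 3] > 7)).astype(int)   # synthetic at-risk label

X_train, X_test, y_train, y_test = train_test_split(X, at_risk, random_state=0)
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```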
Conference Paper
Full-text available
The aim of this study is to suggest more meaningful components for learning analytics in order to help learners improve their learning achievement continuously through an educational technology approach. Multiple linear regression analysis was conducted to determine which factors influence students' academic achievement. 84 undergraduate students in a women's university in South Korea participated in this study. The six-predictor model was able to account for 33.5% of the variance in final grade, F(6, 77) = 6.457, p < .001, R² = .335. Total studying time in the LMS, interaction with peers, regularity of learning interval in the LMS, and number of downloads were determined to be significant factors for students' academic achievement in an online learning environment. These four controllable variables not only predict learning outcomes significantly but can also be changed if learners put more effort into improving their academic performance. The results provide a rationale for interventions that support students' time management efforts.
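The reported six-predictor regression can be mirrored in outline with an OLS fit that exposes R-squared and the overall F test; the sketch below uses four of the significant predictors named above with simulated data, so the coefficients and names are illustrative assumptions.

```python
# Minimal sketch, on synthetic data, of a multiple regression of final grade
# on LMS-derived predictors, reading off R^2 and the overall F test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 84
df = pd.DataFrame({
    "study_time": rng.gamma(2.0, 5.0, n),        # total studying time in the LMS (hours)
    "peer_interactions": rng.poisson(12, n),
    "regularity": rng.random(n),                 # regularity of learning interval (0-1)
    "downloads": rng.poisson(25, n),
})
df["final_grade"] = (55 + 1.2 * df["study_time"] + 0.6 * df["peer_interactions"]
                     + 8 * df["regularity"] + 0.2 * df["downloads"] + rng.normal(0, 8, n))

fit = smf.ols("final_grade ~ study_time + peer_interactions + regularity + downloads", df).fit()
print(fit.rsquared, fit.fvalue, fit.f_pvalue)
```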
Article
Full-text available
This article considers the developing field of learning analytics and argues that to move from small-scale practice to broad scale applicability, there is a need to establish a contextual framework that helps teachers interpret the information that analytics provides. The article presents learning design as a form of documentation of pedagogical intent that can provide the context for making sense of diverse sets of analytic data. We investigate one example of learning design to explore how broad categories of analytics—which we call checkpoint and process analytics—can inform the interpretation of outcomes from a learning design and facilitate pedagogical action.
Article
Full-text available
The situative perspective shifts the focus of analysis from individual behavior and cognition to larger systems that include behaving cognitive agents interacting with each other and with other subsystems in the environment. The first section presents a version of the situative perspective that draws on studies of social interaction, philosophical situation theory, and ecological psychology. Framing assumptions and concepts are proposed for a synthesis of the situative and cognitive theoretical perspectives, and a further situative synthesis is suggested that would draw on dynamic-systems theory. The second section discusses relations between the situative, cognitive, and behaviorist theoretical perspectives and principles of educational practice. The third section discusses an approach to research and social practice called interactive research and design, which fits with the situative perspective and provides a productive, albeit syncretic, combination of theory-oriented and instrumental functions of research. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
The purpose of this research was to examine student engagement in totally asynchronous online courses through an empirical analysis of student behavior online and its relationship to persistence and achievement. A total of 13 sections of three undergraduate, general education courses provided the setting for the study. Three hundred fifty-four students were used in the data analysis. Using student access computer logs, student behaviors defined as frequency of participation and duration of participation were documented for eight variables. The descriptive data revealed significant differences in online participation between withdrawers and completers and between successful completers and non-successful completers. A multiple regression analysis was used to evaluate how well student participation measures predicted achievement. Approximately 31% of the variability in achievement was accounted for by student participation measures, and three of the eight variables were statistically significant.
Article
Full-text available
The current research started from the assumption that one of the major motives driving individuals' Internet use is to relieve psychosocial problems (e.g., loneliness, depression). This study showed that individuals who were lonely or did not have good social skills could develop strong compulsive Internet use behaviors resulting in negative life outcomes (e.g., harming other significant activities such as work, school, or significant relationships) instead of relieving their original problems. Such augmented negative outcomes were expected to isolate individuals from healthy social activities and lead them into more loneliness. Even though previous research suggests that social use of the Internet (e.g., social networking sites, instant messaging) could be more problematic than entertainment use (e.g., downloading files), the current study showed that the former did not show stronger associations than the latter in the key paths leading to compulsive Internet use.
Article
The application of learning analytics techniques to log data from Learning Management Systems (LMS) has raised increasing interest in the past years. Advances in this field include the selection of adequate indicators and development of research frameworks. However, most research has focused on individual students, which has hampered the development of learning analytics for team assessment in collaborative learning contexts. From a four-dimensional view of teamwork, this study proposes a set of log data-based indicators to facilitate group assessment in project-based learning courses, and identify relevant predictors of final project results.
Conference Paper
Tertiary educational institutions around the world are increasingly incorporating Virtual Learning Environments (VLE) into their teaching and learning processes, enabling students to get more information and interact with instructors easily. However, the actual potential of the Learning Management System (LMS) is not yet fully utilized by instructors and students in their teaching and learning. Evaluating the utilization of online activities via LMS logs will help educational institutes explore patterns of usage and discover new knowledge. The purpose of the present study is to identify useful patterns in the interactions made by students of the Department of Accountancy, University of Kelaniya, Sri Lanka with the LMS, and hence discover new knowledge for decision making. Data from the LMS of sample undergraduate courses (two) and sample diploma courses (two) of a similar nature were selected. Data were collected through activity log reports from the LMS for the second semester of academic year 2015/2016. The data were then analyzed based on the patterns and behavior of students' interaction with the online content of the courses. According to the findings of the study, the majority of undergraduate logs represent resource views, while the next highest represent course views. This pattern is common to both undergraduate and diploma students. Daily average use of the LMS shows an important trend: undergraduates use the LMS at an average rate during the first month of the semester, usage improves by the middle of the semester, and closer to the examination, LMS usage drops considerably. Diploma students show a different behavior: in the middle of the semester, LMS usage drops drastically, and closer to the semester-end examination, usage improves. LMS usage by day of the week suggests that undergraduates log into the LMS when they have convenient internet access. Based on the findings, several recommendations are made to enhance the effectiveness of the existing services available in the LMS and to design and add social connection modules to the learning experience. Keywords: Learning Management System, Virtual Learning Environment, Accountancy, Logs
Conference Paper
The current study introduces a model for measuring student diligence using online behaviors during intelligent tutoring system use. This model is validated using a full academic year dataset to test its predictive validity against long-term academic outcomes including end-of-year grades and total work completed by the end of the year. The model is additionally validated for robustness to time-sample length as well as data sampling frequency. While the model is shown to be predictive and robust to time-sample length, the results are inconclusive for robustness in data sampling frequency. Implications for research on interventions, and understanding the influence of self-control, motivation, metacognition, and cognition are discussed.
Article
It is generally acknowledged that engagement plays a critical role in learning. Unfortunately, the study of engagement has been stymied by a lack of valid and efficient measures. We introduce the advanced, analytic, and automated (AAA) approach to measure engagement at fine-grained temporal resolutions. The AAA measurement approach is grounded in embodied theories of cognition and affect, which advocate a close coupling between thought and action. It uses machine-learned computational models to automatically infer mental states associated with engagement (e.g., interest, flow) from machine-readable behavioral and physiological signals (e.g., facial expressions, eye tracking, click-stream data) and from aspects of the environmental context. We present 15 case studies that illustrate the potential of the AAA approach for measuring engagement in digital learning environments. We discuss strengths and weaknesses of the AAA approach, concluding that it has significant promise to catalyze engagement research.
Article
With the adoption of Learning Management Systems (LMSs) in educational institutions, a lot of data has become available describing students’ online behavior. Many researchers have used these data to predict student performance. This has led to a rather diverse set of findings, possibly related to the diversity in courses and predictor variables extracted from the LMS, which makes it hard to draw general conclusions about the mechanisms underlying student performance. We first provide an overview of the theoretical arguments used in learning analytics research and the typical predictors that have been used in recent studies. We then analyze 17 blended courses with 4,989 students in a single institution using Moodle LMS, in which we predict student performance from LMS predictor variables as used in the literature and from in-between assessment grades, using both multi-level and standard regressions. Our analyses show that the results of predictive modeling, notwithstanding the fact that they are collected within a single institution, strongly vary across courses. Thus, the portability of the prediction models across courses is low. In addition, we show that for the purpose of early intervention or when in-between assessment grades are taken into account, LMS data are of little (additional) value. We outline the implications of our findings and emphasize the need to include more specific theoretical argumentation and additional data sources other than just the LMS data.
Article
This study sought to identify significant behavioral indicators of learning using learning management system (LMS) data regarding online course achievement. Because self-regulated learning is critical to success in online learning, measures reflecting self-regulated learning were included to examine the relationship between LMS data measures and course achievement. Data were collected from 530 college students who took an online course. The results demonstrated that students' regular study, late submissions of assignments, number of sessions (the frequency of course logins), and proof of reading the course information packets significantly predicted their course achievement. These findings verify the importance of self-regulated learning and reveal the advantages of using measures related to meaningful learning behaviors rather than simple frequency measures. Furthermore, the measures collected in the middle of the course significantly predicted course achievement, and the findings support the potential for early prediction using learning performance data. Several implications of these findings are discussed.
Article
This paper outlines how teachers can use the learning management system (LMS) to identify at risk students in the first week of a course. Data is from nine second year campus based business courses that use a blend of face-to-face and online learning strategies. Students that used the LMS in the first week of the course were more likely to pass. For the rest of the course the pattern of usage is then largely similar for students who pass and those that do not pass. This paper identifies how a LMS can identify at risk students in the first week of the course and provides some strategies to motivate these students. © 2012 John Milne, Lynn M Jeffrey, Gordon Suddaby & Andrew Higgins.
Article
Blended learning (BL) is recognized as one of the major trends in higher education today. To identify how BL has actually been adopted, this study employed a data-driven approach instead of model-driven methods. The Latent Class Analysis method, a clustering approach from educational data mining, was employed to extract common activity features of 612 courses in a large private university located in South Korea, using online behavior data tracked from the Learning Management System and the institution's course database. Four unique subtypes were identified. Approximately 50% of the courses manifested inactive utilization of the LMS or an immature stage of blended learning implementation, labeled as Type I. The other subtypes were Type C - Communication or Collaboration (24.3%), Type D - Delivery or Discussion (18.0%), and Type S - Sharing or Submission (7.2%). We discussed the implications of BL based on data-driven decisions to provide strategic institutional initiatives.
Article
Demographic factors have been used successfully as predictors of student success in traditional higher education systems, but their relationship to achievement in MOOC environments has been largely untested. In this work we explore the predictive power of user demographics compared to learner interaction trace data generated by students in two MOOCs. We show that demographic information offers minimal predictive power compared to activity models, even when compared to models created very early on in the course, before substantial interaction data has accrued.
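To illustrate the kind of comparison described above, the sketch below fits the same simple classifier on demographic features and on activity-trace features and compares cross-validated AUC; the data file, feature names, and outcome column are hypothetical assumptions.

```python
# Minimal sketch (hypothetical data): compare the predictive power of demographic
# features against activity-trace features using the same simple classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

students = pd.read_csv("mooc_students.csv")  # one row per learner (illustrative)
y = students["completed"]  # 1 if the learner completed the course

demographic_cols = ["age", "years_of_education", "gender_female"]
activity_cols = ["videos_watched", "forum_posts", "quiz_attempts", "active_days"]

for label, cols in [("demographics", demographic_cols), ("activity", activity_cols)]:
    scores = cross_val_score(
        LogisticRegression(max_iter=1000),
        students[cols], y, cv=5, scoring="roc_auc",
    )
    print(f"{label}: mean AUC = {scores.mean():.3f}")
```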
Article
The Open Academic Analytics Initiative (OAAI) is a collaborative, multi‐year grant program aimed at researching issues related to the scaling up of learning analytics technologies and solutions across all of higher education. The paper describes the goals and objectives of the OAAI, depicts the process and challenges of collecting, organizing and mining student data to predict academic risk, and reports results on the predictive performance of those models, their portability across pilot programs at partner institutions, and the results of interventions on at‐risk students.
Article
This paper develops Campbell and Oblinger's [4] five-step model of learning analytics (Capture, Report, Predict, Act, Refine) and other theorisations of the field, and draws on broader educational theory (including Kolb and Schön) to articulate an incrementally more developed, explicit and theoretically-grounded Learning Analytics Cycle. This cycle conceptualises successful learning analytics work as four linked steps: learners (1) generating data (2) that is used to produce metrics, analytics or visualisations (3). The key step is 'closing the loop' by feeding back this product to learners through one or more interventions (4). This paper seeks to begin to place learning analytics practice on a base of established learning theory, and draws several implications from this theory for the improvement of learning analytics projects. These include speeding up or shortening the cycle so feedback happens more quickly, and widening the audience for feedback (in particular, considering learners and teachers as audiences for analytics) so that it can have a larger impact.
Article
This chapter summarizes the history of the engagement concept, the development of the National Survey of Student Engagement (NSSE), and its impact on institutional researchers.
Article
STUDENT ENGAGEMENT IN HIGHER EDUCATION is an important volume that fills a longstanding void in the higher education and student affairs literature. The editors and authors make clear that diverse populations of students experience college differently and encounter group-specific barriers to success. Informed by relevant theories, each chapter focuses on a different population for whom research confirms that engagement and connectivity to the college experience are problematic, including: low-income students, racial/ethnic minorities, students with disabilities, LGBT students, and several others. The forward-thinking practical ideas offered throughout the book are based on the 41 contributors' more than 540 cumulative years of full-time work experience in various capacities at two-year and four-year institutions of higher education. Faculty and administrators will undoubtedly find this book complete with fresh strategies to reverse problematic engagement trends among various college student populations.
PRAISE FOR THIS BOOK:
"Maya Angelou once wrote, 'You did the best you could with what you knew. And when you knew better, you did better.' This important book will enable educators and administrators to know better, and hopefully compel them to do better in transforming college campuses into places where all students are supported." (Jim Larimore, Dean of Students, Swarthmore College)
"Harper and Quaye have assembled a useful book that seriously considers both the theories driving and practices relevant to student engagement. Their fresh insights paint a more nuanced understanding of engagement, which can potentially improve institutional capacity to engage diverse student populations in more deliberate and culturally responsive ways." (Mitchell J. Chang, Professor, Higher Education and Organizational Change, UCLA)
"This book engages readers from the Foreword to the Afterword. Any professional on a college or university campus could find something in this book that helps them better understand how to contribute to the success of diverse populations." (Gwendolyn Jordan Dungy, Executive Director, National Association of Student Personnel Administrators)
This book is available for purchase on Amazon.com and through the publisher's website: http://www.routledge.com/books/Student-Engagement-in-Higher-Education-isbn9780415988513
TABLE OF CONTENTS:
Foreword (Estela Mara Bensimon)
Chapter 1. Beyond Sameness, with Engagement and Outcomes for All: An Introduction (Shaun R. Harper and Stephen John Quaye)
Chapter 2. International Students at Four-Year Institutions: Developmental Needs, Issues, and Strategies (Gregory Anderson, Karen Carmichael, Todd J. Harper, and Tzufang Huang)
Chapter 3. Beyond Accommodation: Removing Barriers to Academic and Social Engagement for Students with Disabilities (Andrew H. Nichols and Stephen John Quaye)
Chapter 4. Fostering Safe, Engaging Campuses for Lesbian, Gay, Bisexual, Transgender, and Questioning Students (Leah Schueler, Jeffrey Hoffman, and Elizabeth Peterson)
Chapter 5. Creating Welcoming Campus Environments for Students from Minority Religious Groups (Caitlin J. Mahaffey and Scott A. Smith)
Chapter 6. Gender-Specific Approaches to Enhancing Identity Development among Undergraduate Women and Men (Frank Harris III and Jaime Lester)
Chapter 7. Environmental and Developmental Approaches to Supporting Women's Success in STEM Fields (Candace Rypisi, Lindsey Malcom, and Helen Kim)
Chapter 8. Institutional Seriousness Concerning Black Male Student Engagement: Necessary Conditions and Collaborative Partnerships (Shaun R. Harper)
Chapter 9. Engaging Racial/Ethnic Minority Students in Predominantly White Classroom Environments (Stephen John Quaye, Tracy Poon Tambascia, and Rameen Ahmadi Talesh)
Chapter 10. Engaging Racial/Ethnic Minority Students in Out-of-Class Activities on Predominantly White Campuses (Viannda M. Hawkins and Heather Larabee)
Chapter 11. Engaging White Students in a Multicultural Campus: Developmental Needs and Institutional Challenges (Margaret W. Sallee, Moreen E. Logan, Susan Sims, and W. Paul Harrington)
Chapter 12. Meeting the Needs of Commuter, Part-Time, Transfer, and Returning Students (Scott C. Silverman, Sarvenaz Aliabadi, and Michelle R. Stiles)
Chapter 13. Creating a Pipeline to Engage Low-Income, First-Generation College Students (Jarrett Gupton, Cristina Castelo-Rodríguez, David Angel Martínez, and Imelda Quintanar)
Chapter 14. Improving Transfer Trajectories for First-Year, First-Generation, Minority Community College Students (Ramona Barrio-Sotillo, Kaneesha Miller, Kuro Nagasaka, and Tony Arguelles)
Chapter 15. Redefining Championship in College Sports: Enhancing Outcomes and Increasing Student-Athlete Engagement (Brandon E. Martin)
Chapter 16. The Changing Academy: Developmental Approaches to Engaging Emerging Populations in Higher Education (Kenechukwu (K.C.) Mmeje, Christopher B. Newman, Dennis A. Kramer II, and Mark A. Pearson)
Afterword (George D. Kuh)
J. Pomerantz and D. C. Brooks. 2017. ECAR Study of Faculty and Information Technology. ECAR, Louisville, CO.
R. Conijn, C. Snijders, A. Kleingeld, and U. Matzat. 2017. Predicting student performance from LMS data: A comparison of 17 blended courses using Moodle LMS. IEEE Transactions on Learning Technologies, 10, 1, 17-29. DOI: https://doi.org/10.1109/TLT.2016.2616312
R. Conijn, A. Van den Beemt, and P. Cuijpers. 2018. Predicting student performance in a blended MOOC. Journal of Computer Assisted Learning, 34, 5, 615-628. DOI: https://doi.org/10.1111/jcal.12270
A. Reschly and S. Christenson. 2012. Jingle, jangle, and conceptual haziness: Evolution and future directions of the engagement construct. In Handbook of Research on Student Engagement, Boston, Springer, 3-19.
J. G. Greeno and the Middle School Mathematics through Applications Project Group. 1998. The situativity of knowing, learning, and research. American Psychologist, 53, 1, 5-26. DOI: https://doi.org/10.1037/0003-066X.53.1.5
A. Olney, E. F. Risko, S. K. D'Mello, and A. C. Graesser. 2015. Attention in educational contexts: The role of the learning task in guiding attention. In The Handbook of Attention, Cambridge, MA, MIT Press, 623-642.
D. Gašević, S. Dawson, and G. Siemens. 2015. Let's not forget: Learning analytics are about learning. TechTrends, 59, 1, 64-71. DOI: https://doi.org/10.1007/s11528-014-0822-x