Conference Paper

What college students say, and what they do: Aligning self-regulated learning theory with behavioral logs


Abstract

A central concern in learning analytics specifically, and in educational research more generally, is the alignment of robust, coherent measures with well-developed conceptual and theoretical frameworks. Capturing and representing processes of learning remains an ongoing challenge in all areas of educational inquiry and raises substantive questions about the nature of learning, knowledge, assessment, and measurement that have been continuously refined across areas of education and pedagogical practice. Learning analytics, as a still-developing method of inquiry, has yet to substantively navigate the alignment of the measurement, capture, and representation of learning with theoretical frameworks, despite being used to address practical concerns such as identifying at-risk students. This study addresses these concerns by comparing behavioral measurements from learning management systems with established measurements of components of learning as understood through self-regulated learning frameworks. Using several prominent and robustly supported self-report survey measures designed to identify dimensions of self-regulated learning, as well as typical behavioral features extracted from a learning management system, we conducted descriptive and exploratory analyses of the relational structures of these data. With the exception of learners' self-reported time management strategies and level of motivation, the current results indicate that behavioral measures were not well correlated with survey measurements. Possibilities and recommendations for learning analytics as measurements of self-regulated learning are discussed.
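The abstract's core analysis — relating behavioral features from a learning management system to self-report SRL scales — can be sketched as a rank-correlation pass over the two measure families. A minimal sketch in Python; all feature names, scale names, and data below are hypothetical illustrations, not the study's actual variables:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 50  # hypothetical sample of learners

# Hypothetical LMS behavioral features and self-report subscale scores;
# every name and value here is illustrative, not the study's data.
data = pd.DataFrame({
    "login_count":      rng.poisson(30, n),
    "total_time_hours": rng.gamma(2.0, 5.0, n),
    "time_management":  rng.uniform(1, 7, n),  # e.g. an MSLQ-style subscale
    "motivation":       rng.uniform(1, 7, n),
})

behavior = ["login_count", "total_time_hours"]
survey = ["time_management", "motivation"]

# Pairwise Spearman rank correlations between the two measure families
for b in behavior:
    for s in survey:
        rho, p = spearmanr(data[b], data[s])
        print(f"{b} vs {s}: rho={rho:+.2f} (p={p:.3f})")
```

Spearman's rank correlation is used here because LMS count features are typically skewed; the study's own analytic choices may differ.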


... The present study also contributes to a central issue in LA, that is, the measurement of key learning constructs, such as SRL [9,29]. At the heart of this issue are two questions. ...
... As noted in our review, researchers are critical of students' self-reports of SRL, particularly self-report surveys [37]. However, self-report surveys continue to be a mainstay in SRL research [29]. Our triangulation study offers a way to study the effects of feedback on SRL, by capturing self-reports of SRL through focus group discussions and analysing responses thematically using the framework of SRL. ...
... Finally, recent research (e.g., [2,29,31]) found that the analysis of trace data was aligned with SRL to some extent, notably in relation to time management. However, Quick et al. [29] also noted that behavioural data could not capture the breadth of SRL processes that were addressed in self-report surveys. ...
... Two channels of data are relevant to the work presented here: choice and order of clinical skills in time (learning paths) and text with student reflections. Learning paths can be analysed to support student reflection (Molenaar et al., 2020) or to link them to self-reported SRL measures (Quick et al., 2020). Learning paths are linked to process mining (see next section). ...
... Studies have linked process mining with self-regulated learning. For example, mapping out sequences of students' self-regulatory behaviours when interacting with a hypermedia program (Bannert et al., 2014), application of process mining to MOOC data and identification of six interaction sequence patterns matched to SRL strategies (Maldonado-Mahauad et al., 2018), and reports of correlations between self-reported SRL measures and behavioural traces in MOOCs (Quick et al., 2020). Process mining was also used to study temporal aspects of SRL using learners' learning management system data, comparing high- and low-performing students (Saint et al., 2020), and to detect sequences of students' modes of study to understand time management tactics and sequences of students' learning actions linked to learning tactics. ...
Article
Full-text available
The paper presents a multi-faceted data-driven computational approach to analyse workplace-based assessment (WBA) of clinical skills in medical education. Unlike the formal university-based part of the degree, the setting of WBA can be informal and only loosely regulated, as students are encouraged to take every opportunity to learn from the clinical setting. For clinical educators and placement coordinators it is vital to follow and analyse students' engagement with WBA while on placements, in order to understand how students are participating in the assessment, and what improvements can be made. We analyse digital data capturing the students' WBA attempts and comments on how the assessments went, using process mining and text analytics. We compare Year 1 cohorts across three years, focusing on differences between primary vs. secondary care placements. The main contribution of the work presented in this paper is the exploration of computational approaches for multi-faceted, data-driven assessment analytics for workplace learning, which includes: (i) a set of features for analysing clinical skills WBA data, (ii) analysis of the temporal aspects of that data using process mining, and (iii) utilising text analytics to compare student reflections on WBA. We show how assessment data captured during clinical placements can provide insights about student engagement and inform medical education practice. Our work is inspired by Jim Greer's vision that intelligent methods and techniques should be adopted to address key challenges faced by educational practitioners in order to foster improvement of learning and teaching. In the broader AI in Education context, the paper shows the application of AI methods to address educational challenges in a new informal learning domain - practical healthcare placements in higher education medical training.
... Previous research has confirmed that activity indices from LMSs' web logs provide a reliable representation of learner behaviour (Quick et al., 2020) and student engagement (Motz et al., 2019) in varied learning environments. Joksimović et al. (2015) used trace data to examine the effect that the number and duration of four interaction types had on the students' final grades. ...
Conference Paper
Full-text available
The provision of educational material in higher education takes place through learning management systems (LMS) and other learning platforms. However, little is known yet about how and when students access the educational materials provided in order to perform better. In this paper, we aim to answer the research question: 'How do high achievers use the educational material provided to get better grades?'. To answer this question, the data from two educational platforms were merged: an LMS and a lecture capture platform. We based our analysis on a series of quizzes to understand the differences between high and non-high achievers regarding the use of lecture recordings and slides at different moments: (1) before and (2) while solving the quizzes, and (3) after their submission. Our analysis shows significant differences between both groups and highlights the value of considering all the educational platforms instead of limiting the analyses to a single data source.
Article
Full-text available
Background Learning analytics (LA) collects, analyses, and reports data from the learning environment, to provide evidence of the effects of a particular learning design. Learning design (LD) outlines the conceptual framework for a meaningful interpretation of learning analytics data. Objectives The study aims to identify the most relevant concepts at the intersection of learning design and learning analytics, how these concepts are integrated into more general thematic areas, and the implications for the research and practice of the learning design and learning analytics synergy. Methods To this end, the study employs a critical interpretive synthesis (CIS) of the selected papers, and complements it with elements of systematic literature review as well as qualitative content analysis and text analytics that employ machine learning and language technology. Results and conclusions The most important themes identified are 'analytics', 'learning', 'data', 'tools', 'research', 'framework', 'informed', 'model-driven', 'participatory', 'technique', 'impact', 'insight', and 'findings'. The text analytics detected two topics rarely explicitly discussed in the literature: 'evidence-informed instructional design approaches' and 'design-based research'. Future research should attempt a holistic perspective towards the LD and LA synergy, considering evidence-informed instructional design approaches as part of a design-based research methodology that implements evidence-based teachers' practice and research-based findings.
Chapter
Currently, business organizations are struggling with an increasing demand for learning to address their knowledge gaps. They must have a structure that can reach all employees with training and extract the important data collected by Learning Management Systems during the instruction or learning process. This data can be of great importance for better business decisions. In this paper, a Systematic Literature Review is presented, with its phases explained and framed within the topic. It allowed us to understand the benefits, challenges, enablers, and inhibitors of the deployment and usage of a specified Teaching-Learning Analytics Framework. Finally, it is concluded that the development of a reference model could fill this gap in knowledge and help business organizations allocate resources better and improve decision-making as well as the instructional and learning process. To achieve the final goal of this research, future work will develop a survey research methodology to fill this knowledge gap.
Article
This study investigated how English learners complete multimodal formative quizzes. Participants included 17,950 students enrolled in a mandatory English for Academic Purposes course at a university in Hong Kong. We retrieved data from Blackboard, a learning management system, and conducted a two-step cluster analysis to examine student self-regulated learning (SRL) profiles with the quizzes. We first identified five clusters of learners with distinctively different self-regulated learning patterns. Then, we performed a multivariate analysis of variance (MANOVA) to further explore their differences in SRL, in terms of start day, days started before deadline, differences in scores between first and last attempt, and scores in language learning activities. Our findings echoed those of previous studies on the relationship between self-regulated learning and academic success. This research enables us to better understand the needs of EAP students in Hong Kong.
Article
Full-text available
Lay Description

What is already known about this topic? Learning design (LD) is the pedagogic process used in teaching/learning that leads to the creation and sequencing of learning activities and the environment in which they occur. Learning analytics (LA) is the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. There are multiple studies on the alignment of LA and LD, but research shows that there is still room for improvement.

What does this paper add? To achieve better alignment between LD and LA, we propose a framework that connects LA indicators with the activity outcomes from the LD, and demonstrate how learning events/objectives and learning activities are associated with LA indicators and how an indicator is formed from (several) LA metrics. This article also aims to assist the LA research community in the identification of commonly used concepts and terminologies: what to measure, and how to measure.

Implications for practice and/or policy: This article can help course designers, teachers, students, and educational researchers get a better understanding of the application of LA. This study can further help LA researchers connect their research with LD.
Conference Paper
Full-text available
The potential of learning analytics (LA) to improve learning and teaching is high. Yet, the adoption of LA across countries still remains low. One reason behind this is that LA services often do not adequately meet the expectations and needs of their key stakeholders, namely students and teachers. Presently, there is limited research examining students' expectations of LA across countries, especially in the highly digitalized Nordic context. To fill this gap, this study examines Swedish students' attitudes toward LA in a higher education institution. To do so, a validated survey instrument, the Student Expectations of Learning Analytics Questionnaire (SELAQ), was used. Through the application of SELAQ, the students' ideal and predicted expectations of the LA service and their expectations regarding privacy and ethics were examined. Data were collected in spring 2021, with 132 students participating in the study. The results show that the students have higher ideal expectations of LA compared to the predicted ones, especially with regard to privacy and ethics. Also, the findings illustrate that the respondents have low expectations in areas related to instructor feedback based on analytics results. Further, the results demonstrate that the students have high expectations of the university in matters concerning privacy and ethics. In sum, the results from the study can be used as a basis for implementing LA in the selected context.
Article
Full-text available
This paper aims to explore time management strategies followed by students in a flipped classroom through the analysis of trace data. Specifically, an exploratory study was conducted on the dataset collected in three consecutive offerings of an undergraduate computer engineering course ( N = 1,134). Trace data about activities were initially coded for the timeliness of activity completion. Such data were then analysed using agglomerative hierarchical clustering based on Ward's algorithm, first order Markov chains, and inferential statistics to (a) detect time management tactics and strategies from students' learning activities and (b) analyse the effects of personalized analytics‐based feedback on time management. The results indicate that meaningful and theoretically relevant time management patterns can be detected from trace data as manifestations of students' tactics and strategies. The study also showed that time management tactics had significant associations with academic performance and were associated with different interventions in personalized analytics‐based feedback.
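The clustering step described above — grouping timeliness-coded activity data with Ward's agglomerative algorithm to surface time management tactics — can be sketched in a few lines. A minimal illustration under assumed data; the per-student timeliness features below are hypothetical, not the study's coding scheme:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)

# Hypothetical per-student features: fraction of course activities completed
# ahead of schedule, on time, and late (rows sum to 1). Illustrative only.
X = rng.dirichlet([2.0, 3.0, 1.0], size=120)

# Agglomerative hierarchical clustering with Ward's linkage
Z = linkage(X, method="ward")

# Cut the dendrogram into a small number of candidate "time management tactics"
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```

In practice the number of clusters would be chosen from the dendrogram or a fit index rather than fixed at 3 as here.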
Article
Full-text available
Although self-regulated learning (SRL) is becoming increasingly important in modern educational contexts, disagreements exist regarding its measurement. One particularly important issue is whether self-reports represent valid ways to measure this process. Several researchers have advocated the use of behavioral indicators of SRL instead. An outstanding research debate concerns the extent to which it is possible to compare behavioral measures of SRL to traditional ways of measuring SRL using self-report questionnaire data, and which of these methods provides the most valid and reliable indicator of SRL. The current review investigates this question. It was found that granularity is an important concept in the comparison of SRL measurements, influencing the degree to which students can accurately report on their use of SRL strategies. The results show that self-report questionnaires may give a relatively accurate insight into students’ global level of self-regulation, giving them their own value in educational research and remediation. In contrast, when students are asked to report on specific SRL strategies, behavioral measures give a more accurate account. First and foremost, researchers and practitioners must have a clear idea about their research question or problem statement, before choosing or combining either form of measurement.
Chapter
Full-text available
The recent focus on learning analytics to analyse temporal dimensions of learning holds a strong promise to provide insights into latent constructs such as learning strategy, self-regulated learning, and metacognition. There is, however, a limited amount of research in temporally-focused process mining in educational settings. Building on a growing body of research around event-based data analysis, we explore the use of process mining techniques to identify strategic and tactical learner behaviours. We analyse trace data collected in online activities of a sample of nearly 300 computer engineering undergraduate students enrolled in a course that followed a flipped classroom pedagogy. Using a process mining approach based on first order Markov models in combination with unsupervised machine learning methods, we performed intra- and inter-strategy analysis. We found that certain temporal activity traits relate to performance in the summative assessments attached to the course, mediated by strategy type. Results show that more strategically minded activity, embodying learner self-regulation, generally proves to be more successful than less disciplined reactive behaviours.
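A first-order Markov model of the kind used in such process-mining analyses can be estimated directly from coded action sequences: count transitions between consecutive actions and row-normalise. A minimal sketch; the action labels and sequence are hypothetical:

```python
import numpy as np

# Hypothetical coded learning actions from one session; labels are illustrative
actions = ["video", "quiz", "video", "reading", "quiz", "quiz", "video", "reading"]
states = sorted(set(actions))
idx = {s: i for i, s in enumerate(states)}

# Count first-order transitions a_t -> a_{t+1}
counts = np.zeros((len(states), len(states)))
for a, b in zip(actions, actions[1:]):
    counts[idx[a], idx[b]] += 1

# Row-normalise into a first-order Markov transition matrix
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(states)
print(P.round(2))
```

The resulting transition matrices (one per student or per session) are what the unsupervised clustering step then groups into strategy types.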
Conference Paper
Full-text available
Learning Analytics (LA) sits at the confluence of many contributing disciplines, which brings the risk of hidden assumptions inherited from those fields. Here, we consider a hidden assumption derived from computer science, namely, that improving computational accuracy in classification is always a worthy goal. We demonstrate that this assumption is unlikely to hold in some important educational contexts, and argue that embracing computational "imperfection" can improve outcomes for those scenarios. Specifically, we show that learner-facing approaches aimed at "learning how to learn" require more holistic validation strategies. We consider what information must be provided in order to reasonably evaluate algorithmic tools in LA, to facilitate transparency and realistic performance comparisons.
Article
Full-text available
The use of analytic methods for extracting learning strategies from trace data has attracted considerable attention in the literature. However, there is a paucity of research examining any association between learning strategies extracted from trace data and responses to well-established self-report instruments and performance scores. This paper focuses on the link between the learning strategies identified in the trace data and students' reported approaches to learning. The paper reports on the findings of a study conducted in the scope of an undergraduate engineering course (N=144) that followed a flipped classroom design. The study found that learning strategies extracted from trace data can be interpreted in terms of deep and surface approaches to learning. The significant links detected with self-report measures have small effect sizes, for both the overall deep approach to learning scale and the deep strategy scale. However, there was no observed significant link between the surface approach to learning and surface strategy, nor were there significant associations with the motivation scales of approaches to learning. Significant effects on academic performance were found, consistent with the literature based on self-report instruments, showing that students who followed a deep approach to learning performed significantly better.
Article
Full-text available
Self-regulated learning is an ongoing process rather than a single snapshot in time. Naturally, the field of learning analytics, focusing on interactions and learning trajectories, offers exciting opportunities for analyzing and supporting self-regulated learning. This special section highlights the current state of research at the intersection of self-regulated learning and learning analytics, bridging communities, disciplines, and schools of thought. In this opening article, we introduce the papers and identify themes and challenges in understanding and supporting self-regulated learning in interactive learning environments.
Article
Full-text available
The runjags package provides a set of interface functions to facilitate running Markov chain Monte Carlo models in JAGS from within R. Automated calculation of appropriate convergence and sample length diagnostics, user-friendly access to commonly used graphical outputs and summary statistics, and parallelized methods of running JAGS are provided. Template model specifications can be generated using a standard lme4-style formula interface to assist users less familiar with the BUGS syntax. Automated simulation study functions are implemented to facilitate model performance assessment, as well as drop-k type cross-validation studies, using high performance computing clusters such as those provided by parallel. A module extension for JAGS is also included within runjags, providing the Pareto family of distributions and a series of minimally-informative priors including the DuMouchel and half-Cauchy priors. This paper outlines the primary functions of this package, and gives an illustration of a simulation study to assess the sensitivity of two equivalent model formulations to different prior distributions.
Article
Full-text available
It is an exhilarating and important time for conducting research on learning, with unprecedented quantities of data available. There is a danger, however, in thinking that with enough data, the numbers speak for themselves. In fact, with larger amounts of data, theory plays an ever-more critical role in analysis. In this introduction to the special section on learning analytics and learning theory, we describe some critical problems in the analysis of large-scale data that occur when theory is not involved. These questions revolve around what variables a researcher should attend to and how to interpret a multitude of micro-results and make them actionable. We conclude our comments with a discussion of how the collection of empirical papers included in the special section, and the commentaries that were invited on them, speak to these challenges, and in doing so represent important steps towards theory-informed and theory-contributing learning analytics work. Our ultimate goal is to provoke a critical dialogue in the field about the ways in which learning analytics research draws on and contributes to theory.
Article
Full-text available
This study examined the extent to which instructional conditions influence the prediction of academic success in nine undergraduate courses offered in a blended learning model (n = 4134). The study illustrates the differences in predictive power and significant predictors between course-specific models and generalized predictive models. The results suggest that it is imperative for learning analytics research to account for the diverse ways technology is adopted and applied in course-specific contexts. The differences in technology use, especially those related to whether and how learners use the learning management system, require consideration before the log-data can be merged to create a generalized model for predicting academic success. A lack of attention to instructional conditions can lead to an over- or underestimation of the effects of LMS features on students' academic success. These findings have broader implications for institutions seeking generalized and portable models for identifying students at risk of academic failure.
Article
Full-text available
The growing interest in the field of learning strategies has led to an increasing number of studies and, with that, the development of numerous instruments to measure the use of self-regulated learning (SRL) strategies. Due to the complexity of this research field, the types of assessment methods are diverse. For this reason, we conducted a systematic review of self-report instruments that measure SRL in higher education and highlight their main characteristics. In doing so, we applied the general principles of systematic reviewing—we conducted a systematic search of established psychological and educational databases with previously defined inclusion criteria and applied a multistage filtering process. In an additional step, we examined a subsample of nine established instruments in terms of their implementation characteristics, psychometric properties, and additional characteristics. The results illustrate the distribution of self-report instruments used in higher education and point to a growing use of course- or domain-specific questionnaires over the past decades as well as a lack of emotional and motivational regulation scales.
Article
Full-text available
Researchers have long recognized the potential benefits of open-ended computer-based learning environments (OELEs) to help students develop self-regulated learning behaviours. However, measuring self-regulation in these environments is a difficult task. In this paper, we present our work in developing and evaluating coherence analysis (CA), a novel approach to interpreting students' learning behaviours in OELEs. CA focuses on the learner's ability to seek out, interpret, and apply information encountered while working in the OELE. By characterizing behaviours in this manner, CA provides insight into students' open-ended problem-solving strategies as well as the extent to which they understand the nuances of their current learning task. To validate our approach, we applied CA to data from a recent classroom study with Betty's Brain. Results demonstrated relationships of CA-derived metrics to prior skill levels, task performance, and learning. Taken together, these results provide insight into students' SRL processes and suggest targets for adaptive scaffolds to support students' development of science understanding and open-ended problem solving skills.
Article
Full-text available
As enrolments in online courses continue to increase, there is a need to understand how students can best apply self-regulated learning strategies to achieve academic success within the online environment. A search of relevant databases was conducted in December 2014 for studies published from 2004 to December 2014 examining SRL strategies as correlates of academic achievement in online higher education settings. Across 12 studies, the strategies of time management, metacognition, effort regulation, and critical thinking were positively correlated with academic outcomes, whereas rehearsal, elaboration, and organisation had the least empirical support. Peer learning had a moderate positive effect, although its confidence intervals crossed zero. Although the contributors to achievement in traditional face-to-face settings appear to generalise to the online context, these effects appear weaker and suggest that (1) they may be less effective, and (2) other, currently unexplored factors may be more important in online contexts.
Conference Paper
Full-text available
All forms of learning take time. There is a large body of research suggesting that the amount of time spent on learning can improve the quality of learning, as represented by academic performance. The widespread adoption of learning technologies such as learning management systems (LMSs) has resulted in large amounts of data about student learning being readily accessible to educational researchers. One common use of this data is to measure the time that students have spent on different learning tasks (i.e., time-on-task). Given that LMSs typically only capture the times when students executed various actions, time-on-task measures are estimated based on the recorded trace data. LMS trace data has been extensively used in many studies in the field of learning analytics, yet the problem of time-on-task estimation is rarely described in detail and the consequences it entails are not fully examined. This paper presents the results of a study that examined the effects of different time-on-task estimation methods on the results of commonly adopted analytical models. The primary goal of this paper is to raise awareness of the issues of accuracy and appropriateness surrounding time estimation within the broader learning analytics community, and to initiate a debate about the challenges of this process. Furthermore, the paper provides an overview of time-on-task estimation methods in educational and related research fields.
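The estimation problem this paper raises can be made concrete with one common heuristic: summing inter-click gaps while discarding gaps above a cutoff. A sketch under assumed data; the timestamps and the 30-minute cutoff are illustrative, and the paper's point is precisely that such choices change downstream results:

```python
from datetime import datetime, timedelta

# Hypothetical LMS click timestamps for one student (illustrative values)
clicks = [
    datetime(2024, 3, 1, 9, 0),
    datetime(2024, 3, 1, 9, 12),
    datetime(2024, 3, 1, 9, 20),
    datetime(2024, 3, 1, 14, 0),   # long gap: likely off-task
    datetime(2024, 3, 1, 14, 25),
]

def time_on_task(timestamps, cutoff=timedelta(minutes=30)):
    """Sum inter-click gaps, discarding gaps above a cutoff.

    Gaps longer than `cutoff` are treated as off-task and dropped — one
    common heuristic among the estimation methods the paper surveys.
    """
    total = timedelta()
    for a, b in zip(timestamps, timestamps[1:]):
        gap = b - a
        if gap <= cutoff:
            total += gap
    return total

print(time_on_task(clicks))  # gaps of 12, 8, and 25 min are kept; 4h40m is dropped
```

Variants replace the dropped gap with a fixed imputed duration instead of zero; either choice materially changes the resulting time-on-task estimates.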
Article
Full-text available
Engagement is one of the most widely misused and overgeneralized constructs found in the educational, learning, instructional, and psychological sciences. The articles in this special issue represent a wide range of traditions and highlight several key conceptual, theoretical, methodological, and analytical issues related to defining and measuring engagement. All the approaches exemplified by the contributors show different ways of conceptualizing and measuring engagement and demonstrate the strengths and weaknesses of each method to significantly augment our current understanding of engagement. Despite the numerous issues raised by the authors of this special issue and in my commentary, I argue that focusing on process data will lead to advances in models, theory, methods, analytical techniques, and ultimately instructional recommendations for learning contexts that effectively engage students.
Article
Full-text available
The analysis of data collected from the interaction of users with educational and information technology has attracted much attention as a promising approach for advancing our understanding of the learning process. This promise motivated the emergence of the new research field, learning analytics, and its closely related discipline, educational data mining. This paper first introduces the field of learning analytics and outlines the lessons learned from well-known case studies in the research literature. The paper then identifies the critical topics that require immediate research attention for learning analytics to make a sustainable impact on the research and practice of learning and teaching. The paper concludes by discussing a growing set of issues that if unaddressed, could impede the future maturation of the field. The paper stresses that learning analytics are about learning. As such, the computational aspects of learning analytics must be well integrated within the existing educational research.
Article
Full-text available
We continue the discussion of cognitive and situative perspectives by identifying several important points on which we judge the perspectives to be in agreement: (1) Individual and social perspectives on activity are both fundamentally important in education; (2) Learning can be general, and abstractions can be efficacious, but they sometimes aren't; (3) Situative and cognitive approaches can cast light on different aspects of the educational process, and both should be pursued vigorously; (4) Educational innovations should be informed by the available scientific knowledge base and should be evaluated and analyzed with rigorous research methods.
Article
Full-text available
James Greeno has written a reply to our recently published challenge (Anderson, Reder, & Simon, 1996) to the soundness of many educational implications that have been drawn from the "situated learning" movement. Greeno's response (p. 5, this issue) has largely taken the discussion onto a more abstract plane rather than disputing our recommendations for educational practice. Along with his meta-level discussion, he has described several results and made a number of comments that help to clarify the educational issues. Greeno acknowledges the persuasiveness of our evidence for our findings and recommendations, and agrees that there is a consensus between the cognitive and situated perspectives on certain important educational issues. So we want to begin our response by emphasizing those issues on which we all seem to be in agreement. 1. Learning need not be bound to the situation of its application; instruction can transfer from the classroom to "real world" situations. Greeno cites a list of studies from the situated camp which are consistent with this conclusion. We no longer have to contemplate abandoning the classroom but can focus our attention on those factors that promote transfer from one situation to other situations. Our original paper contained pointers to the abundant research in cognitive psychology describing and examining these factors. 2. Knowledge can indeed transfer between different sorts of tasks. Again Greeno cites situated papers which, if they do not provide new evidence for this proposition, at least accept it. Thus, we can aspire to see mathematics education transfer to science, engineering, and jobs which require it. We need not teach every different competence anew. Again, our original paper provided references to the very powerful empirical and theoretical base that has developed in cognitive psychology for understanding such transfer. 3. Abstract instruction can be very effective and one need not teach everything in concrete, almost vocational settings. Greeno points out some looseness in our use of the terms "concrete" and "specific." If we caused any confusion we apologize, but apparently it is not in dispute that real value is to be found in the abstractions that students are taught in school. Again, the issue is how one makes abstract instruction effective, and again we cited cognitive …
Article
Full-text available
The performance of four rules for determining the number of components to retain (Kaiser's eigenvalue greater than unity, Cattell's SCREE, Bartlett's test, and Velicer's MAP) was investigated across four systematically varied factors (sample size, number of variables, number of components, and component saturation). Ten sample correlation matrices were generated from each of 48 known population correlation matrices representing the combinations of conditions. The performance of the SCREE and MAP rules was generally the best across all situations. Bartlett's test was generally adequate except when the number of variables was close to the sample size. Kaiser's rule tended to severely overestimate the number of components.
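Of the four retention rules compared in this study, Kaiser's criterion is the simplest to state: retain as many components as there are eigenvalues of the sample correlation matrix greater than unity. A minimal Python sketch (the toy correlation matrix and the function name are illustrative, not taken from the study):

```python
import numpy as np

def kaiser_count(corr):
    # Kaiser's rule: retain components whose eigenvalue exceeds 1
    return int(np.sum(np.linalg.eigvalsh(corr) > 1.0))

# toy correlation matrix: two uncorrelated pairs of highly correlated variables,
# so exactly two components should be retained
corr = np.array([
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.8],
    [0.0, 0.0, 0.8, 1.0],
])
print(kaiser_count(corr))  # -> 2 (eigenvalues are 1.8, 1.8, 0.2, 0.2)
```

On this clean toy matrix the rule behaves well; the study's point is that on realistic sample matrices it tends to severely overestimate the number of components relative to SCREE and MAP.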
Article
Full-text available
Abstract: Against the background of some critical considerations concerning the nature of retrospective self-reports, we emphasize the need to test the predictive validity of learning strategy inventories in studies that are close to actual behavior. The study presented here undertakes such a comparison between students' retrospective self-reports of their strategic learning and their actual learning behavior in a concrete task situation. In individual sessions with 270 students from grades 4, 6, and 8, their actual strategy use when working with texts was recorded and compared with their ratings of strategy items in a domain-specific questionnaire. The results show that, at least at this age, there are no linear relationships between reported and actual strategy use at either the item or the scale level. Overall, the students tended to overestimate their strategic competence and performance on the questionnaire relative to the behavior-based assessment. The findings suggest that there is good reason to doubt the predictive validity of retrospective questionnaire assessments of one's own strategic learning in childhood and early adolescence.
Article
Full-text available
A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.
Conference Paper
Learning management system (LMS) web logs provide granular, near-real-time records of student behavior as learners interact with online course materials in digital learning environments. However, it remains unclear whether LMS activity indeed reflects behavioral properties of student engagement, and how to deal with variability in LMS usage across a diversity of courses. In this study, we evaluate whether instructors' subjective ratings of their students' engagement are related to features of LMS activity for 9,021 students enrolled in 473 for-credit courses. We find that estimators derived from LMS web logs are closely related to instructor ratings of engagement; however, we also observe that there is not a single generic relationship between activity and engagement, and what constitutes the behavioral components of "engagement" will be contingent on course structure. Nevertheless, for many of these courses, modeled engagement scores are comparable to instructors' ratings in their sensitivity for predicting academic performance. As long as they are tuned to the differences between courses, activity indices from LMS web logs can provide a valid and useful proxy measure of student engagement.
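Studies like this one describe "features of LMS activity" only in general terms. A hedged sketch of the kind of per-student count features typically aggregated from raw web logs (the event names, record layout, and helper function here are hypothetical, not from the paper):

```python
from collections import Counter, defaultdict
from datetime import datetime

# hypothetical LMS web-log records: (student_id, event_type, ISO timestamp)
events = [
    ("s1", "page_view",   "2023-09-01T09:00:00"),
    ("s1", "page_view",   "2023-09-01T09:05:00"),
    ("s1", "quiz_submit", "2023-09-02T10:00:00"),
    ("s2", "page_view",   "2023-09-01T11:00:00"),
]

def activity_features(events):
    """Aggregate a raw event stream into per-student count features:
    total events, distinct active days, and counts by event type."""
    acc = defaultdict(lambda: {"total": 0, "days": set(), "by_type": Counter()})
    for student, etype, ts in events:
        a = acc[student]
        a["total"] += 1
        a["days"].add(datetime.fromisoformat(ts).date())
        a["by_type"][etype] += 1
    return {s: {"total": a["total"],
                "active_days": len(a["days"]),
                **a["by_type"]} for s, a in acc.items()}

print(activity_features(events)["s1"])
# -> {'total': 3, 'active_days': 2, 'page_view': 2, 'quiz_submit': 1}
```

Counts such as these are the "activity indices" that the study then relates, course by course, to instructor ratings of engagement.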
Article
Purpose The explosive growth in the number of digital tools utilized in everyday learning activities generates data at an unprecedented scale, providing exciting challenges that cross scholarly communities. This paper aims to provide an overview of learning analytics (LA) with the aim of helping members of the information and learning sciences communities understand how educational Big Data is relevant to their research agendas and how they can contribute to this growing new field. Design/methodology/approach Highlighting shared values and issues illustrates why LA is the perfect meeting ground for information and the learning sciences, and suggests how by working together effective LA tools can be designed to innovate education. Findings Analytics-driven performance dashboards are offered as a specific example of one research area where information and learning scientists can make a significant contribution to LA research. Recent reviews of existing dashboard studies point to a dearth of evaluation with regard to either theory or outcomes. Here, the relevant expertise from researchers in both the learning sciences and information science is offered as an important opportunity to improve the design and evaluation of student-facing dashboards. Originality/value This paper outlines important ties between three scholarly communities to illustrate how their combined research expertise is crucial to advancing how we understand learning and for developing LA-based interventions that meet the values that we all share.
Article
We surveyed all articles in the Journal of Personality and Social Psychology (JPSP), Psychological Science (PS), and the Journal of Experimental Psychology: General (JEP:G) that mentioned the term “Likert,” and found that 100% of the articles that analyzed ordinal data did so using a metric model. We present novel evidence that analyzing ordinal data as if they were metric can systematically lead to errors. We demonstrate false alarms (i.e., detecting an effect where none exists, Type I errors) and failures to detect effects (i.e., loss of power, Type II errors). We demonstrate systematic inversions of effects, for which treating ordinal data as metric indicates the opposite ordering of means than the true ordering of means. We show the same problems — false alarms, misses, and inversions — for interactions in factorial designs and for trend analyses in regression. We demonstrate that averaging across multiple ordinal measurements does not solve or even ameliorate these problems. A central contribution is a graphical explanation of how and when the misrepresentations occur. Moreover, we point out that there is no sure-fire way to detect these problems by treating the ordinal values as metric, and instead we advocate use of ordered-probit models (or similar) because they will better describe the data. Finally, although frequentist approaches to some ordered-probit models are available, we use Bayesian methods because of their flexibility in specifying models and their richness and accuracy in providing parameter estimates. An R script is provided for running an analysis that compares ordered-probit and metric models.
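The inversions the authors describe can be reproduced analytically: when a latent normal variable is discretized at fixed thresholds into Likert codes, a group with a *higher* latent mean but a larger spread can show a *lower* metric mean of the ordinal codes. A stdlib-only Python sketch (the thresholds and group parameters are chosen for illustration and are not taken from the article):

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def expected_likert_mean(mu, sigma, thresholds):
    """Expected value of the 1..K ordinal codes when a latent Normal(mu, sigma)
    is discretized at the given thresholds (K = len(thresholds) + 1).
    Follows from telescoping E[code] = K - sum_k P(latent <= t_k)."""
    k = len(thresholds) + 1
    return k - sum(norm_cdf(t, mu, sigma) for t in thresholds)

thresholds = [-2.5, -1.5, -0.5, 0.5]   # uneven cut points on the latent scale
mean_a = expected_likert_mean(0.0, 1.0, thresholds)  # latent mean 0.0, sd 1
mean_b = expected_likert_mean(0.5, 3.0, thresholds)  # latent mean 0.5, sd 3
print(round(mean_a, 3), round(mean_b, 3))  # -> 3.927 3.719
```

Group B has the higher latent mean yet the lower metric Likert mean, so a t-test on the raw codes would order the groups backwards; an ordered-probit model, which estimates the latent means and thresholds directly, avoids this distortion.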
Article
The current study investigates the sustainability of metacognitive prompting on self-regulatory behavior using a Process Mining approach. Previous studies confirmed beneficial short-term effects of metacognitive prompts on the learning process and on learning outcomes. However, the question of how stable these effects are for similar tasks in the future has so far remained unanswered. Also, the use of online trace methods and the emergence of new analytical approaches allow deeper insights into the sequential structure of learning behavior. Therefore, we examined long-term effects of instructional support on sub-processes of self-regulated learning using Process Mining. Think-aloud protocols from 69 university students were collected during two hypermedia learning sessions about Educational Psychology. Metacognitive prompts supported the experimental group (n = 35) only during the first session. Based on a process model generated from the data of the first learning task, we analysed the sustainability of effects during the second learning session. Results showed significant differences between the experimental and control groups regarding the frequency of metacognitive strategies, which remained stable over time. Additionally, the application of Process Mining indicated which sequences of learning events were transferred to the second session. Our findings demonstrate the benefits of evaluating instructional support using analysis techniques that take into account the sequential structure of learning processes. While the results provide initial evidence for sustainable long-term effects on self-regulatory behavior, they have to be replicated in future research.
Article
Big data in education offers unprecedented opportunities to support learners and advance research in the learning sciences. Analysis of observed behaviour using computational methods can uncover patterns that reflect theoretically established processes, such as those involved in self-regulated learning (SRL). This research addresses the question of how to integrate this bottom-up approach of mining behavioural patterns with the traditional top-down approach of using validated self-reporting instruments. Using process mining, we extracted interaction sequences from fine-grained behavioural traces for 3458 learners across three Massive Open Online Courses. We identified six distinct interaction sequence patterns. We matched each interaction sequence pattern with one or more theory-based SRL strategies and identified three clusters of learners. First, Comprehensive Learners, who follow the sequential structure of the course materials, which sets them up for gaining a deeper understanding of the content. Second, Targeting Learners, who strategically engage with specific course content that will help them pass the assessments. Third, Sampling Learners, who exhibit more erratic and less goal-oriented behaviour, report lower SRL, and underperform relative to both Comprehensive and Targeting Learners. Challenges that arise in the process of extracting theory-based patterns from observed behaviour are discussed, including analytic issues and limitations of available trace data from learning platforms. Link: https://authors.elsevier.com/a/1W59V2f~UW0yDj
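The "interaction sequence patterns" mined in studies like this one are typically built on directly-follows relations between logged events. A toy sketch of that basic counting step (the event labels and clickstream are hypothetical):

```python
from collections import Counter

def transition_counts(sequence):
    """Count directly-follows transitions (event a immediately followed by
    event b), the basic relation behind many process-mining algorithms."""
    return Counter(zip(sequence, sequence[1:]))

# hypothetical clickstream for one learner
trace = ["video", "reading", "quiz", "video", "quiz"]
counts = transition_counts(trace)
print(counts[("video", "quiz")])  # -> 1
print(counts.most_common(3))
```

Aggregating such transition counts over many learners yields the frequency-weighted process graphs from which clusters such as "Comprehensive" versus "Sampling" learners can then be derived.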
Article
With the adoption of Learning Management Systems (LMSs) in educational institutions, a lot of data has become available describing students’ online behavior. Many researchers have used these data to predict student performance. This has led to a rather diverse set of findings, possibly related to the diversity in courses and predictor variables extracted from the LMS, which makes it hard to draw general conclusions about the mechanisms underlying student performance. We first provide an overview of the theoretical arguments used in learning analytics research and the typical predictors that have been used in recent studies. We then analyze 17 blended courses with 4,989 students in a single institution using Moodle LMS, in which we predict student performance from LMS predictor variables as used in the literature and from in-between assessment grades, using both multi-level and standard regressions. Our analyses show that the results of predictive modeling, notwithstanding the fact that they are collected within a single institution, strongly vary across courses. Thus, the portability of the prediction models across courses is low. In addition, we show that for the purpose of early intervention or when in-between assessment grades are taken into account, LMS data are of little (additional) value. We outline the implications of our findings and emphasize the need to include more specific theoretical argumentation and additional data sources other than just the LMS data.
Article
Individuals with strong self-regulated learning (SRL) skills, characterized by the ability to plan, manage and control their learning process, can learn faster and achieve higher grades compared to those with weaker SRL skills. SRL is critical in learning environments that provide low levels of support and guidance, as is commonly the case in Massive Open Online Courses (MOOCs). Learners can be trained to engage in SRL and further supported by facilitating prompts, activities, and tools. However, effective implementation of learner support systems in MOOCs requires an understanding of which SRL strategies are most effective and how these strategies manifest in learner behavior. Moreover, identifying learner characteristics that are predictive of weaker SRL skills can advance efforts to provide targeted support without obtrusive survey instruments. We investigated SRL in a sample of 4831 learners across six MOOCs based on individual records of overall course achievement, interactions with course content, and survey responses. Results indicated that goal setting and strategic planning predicted attainment of personal course goals, while help seeking appeared to be counterproductive. Learners with stronger SRL skills were more likely to revisit previously studied course materials, especially course assessments. Several learner characteristics, including demographics and motivation, predicted learners’ SRL skills. We discuss implications and next steps towards online learning environments that provide targeted support and guidance.
Article
Learning Analytics is an emerging research field and design discipline that occupies the “middle space” between the learning sciences/educational research and the use of computational techniques to capture and analyze data (Suthers & Verbert, 2013). We propose that the literature examining the triadic relationships between epistemology (the nature of knowledge), pedagogy (the nature of learning and teaching), and assessment provide critical considerations for bounding this middle space. We provide examples to illustrate the ways in which the understandings of particular analytics are informed by this triad. As a detailed worked example of how one might design analytics to scaffold a specific form of higher order learning, we focus on the construct of epistemic beliefs: beliefs about the nature of knowledge. We argue that analytics grounded in a pragmatic, socio-cultural perspective are well placed to explore this construct using discourse-centric technologies. The examples provided throughout this paper, through emphasizing the consideration of intentional design issues in the middle space, underscore the “interpretative flexibility” (Hamilton & Feenberg, 2005) of new technologies, including analytics.
Article
Massive open online courses (MOOCs) require individual learners to be able to self-regulate their learning, determining when and how they engage. However, MOOCs attract a diverse range of learners, each with different motivations and prior experience. This study investigates the self-regulated learning (SRL) learners apply in a MOOC, in particular focusing on how learners' motivations for taking a MOOC influence their behaviour and employment of SRL strategies. Following a quantitative investigation of the learning behaviours of 788 MOOC participants, follow-up interviews were conducted with 32 learners. The study compares the narrative descriptions of behaviour between learners with self-reported high and low SRL scores. Substantial differences were detected between the self-described learning behaviours of these two groups in five of the sub-processes examined. Learners' motivations and goals were found to shape how they conceptualised the purpose of the MOOC, which in turn affected their perception of the learning process.
Article
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.
Article
Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learning-focused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges.
Article
Data integration is a crucial element in mixed methods analysis and conceptualization. It has three principal purposes: illustration, convergent validation (triangulation), and the development of analytic density or “richness.” This article discusses such applications in relation to new technologies for social research, looking at three innovative forms of data integration that rely on computational support: (a) the integration of geo-referencing technologies with qualitative software, (b) the integration of multistream visual data in mixed methods research, and (c) the integration of data from qualitative and quantitative methods.
Article
Recently, learning analytics (LA) has drawn the attention of academics, researchers, and administrators. This interest is motivated by the need to better understand teaching, learning, “intelligent content,” and personalization and adaptation. While still in the early stages of research and implementation, several organizations (Society for Learning Analytics Research and the International Educational Data Mining Society) have formed to foster a research community around the role of data analytics in education. This article considers the research fields that have contributed technologies and methodologies to the development of learning analytics, analytics models, the importance of increasing analytics capabilities in organizations, and models for deploying analytics in educational settings. The challenges facing LA as a field are also reviewed, particularly regarding the need to increase the scope of data capture so that the complexity of the learning process can be more accurately reflected in analysis. Privacy and data ownership will become increasingly important for all participants in analytics projects. The current legal system is immature in relation to privacy and ethics concerns in analytics. The article concludes by arguing that LA has sufficiently developed, through conferences, journals, summer institutes, and research labs, to be considered an emerging research field.
Article
As adults we believe that our knowledge of our own psychological states is substantially different from our knowledge of the psychological states of others: First-person knowledge comes directly from experience, but third-person knowledge involves inference. Developmental evidence suggests otherwise. Many 3-year-old children are consistently wrong in reporting some of their own immediately past psychological states and show similar difficulties reporting the psychological states of others. At about age 4 there is an important developmental shift to a representational model of the mind. This affects children's understanding of their own minds as well as the minds of others. Our sense that our perception of our own minds is direct may be analogous to many cases where expertise provides an illusion of direct perception. These empirical findings have important implications for debates about the foundations of cognitive science.
Article
JAGS analyzes Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation not wholly unlike BUGS. JAGS has three aims: to have a cross-platform engine for the BUGS language; to be extensible, allowing users to write their own functions, distributions and samplers; and to be a platform for experimentation with ideas in Bayesian modeling.
Article
Anderson, Reder, and Simon (1996) contested four propositions that they incorrectly called “claims of situated learning.” This response argues that the important differences between situative and cognitive perspectives are not addressed by discussion of these imputed claims. Instead, there are significant differences in the framing assumptions of the two perspectives. I clarify these differences by inferring questions to which Anderson et al.'s discussion provided answers, by identifying presuppositions of those questions made by Anderson et al., and by stating the different presuppositions and questions that I believe are consistent with the situative perspective. The evidence given by Anderson et al. is compatible with the framing assumptions of situativity; therefore, deciding between the perspectives will involve broader considerations than those presented in their article. These considerations include expectations about which framework offers the better prospect for developing a unified scientific account of activity considered from both social and individual points of view, and which framework supports research that will inform discussions of educational practice more productively. The cognitive perspective takes the theory of individual cognition as its basis and builds toward a broader theory by incrementally developing analyses of additional components that are considered as contexts. The situative perspective takes the theory of social and ecological interaction as its basis and builds toward a more comprehensive theory by developing increasingly detailed analyses of information structures in the contents of people's interactions. While I believe that the situative framework is more promising, the best strategy for the field is for both perspectives to be developed energetically.
Article
The psychometric properties and multigroup measurement invariance of scores on the Self-Efficacy for Self-Regulated Learning Scale taken from Bandura's Children's Self-Efficacy Scale were assessed in a sample of 3,760 students from Grades 4 to 11. Latent means differences were also examined by gender and school level. Results reveal a unidimensional construct with equivalent factor pattern coefficients for boys and girls and for students in elementary, middle, and high school. Elementary school students report higher self-efficacy for self-regulated learning than do students in middle and high school. The latent factor is related to self-efficacy, self-concept, task goal orientation, apprehension, and achievement.
Article
The psychometric properties of an instrument designed to assess study behaviors of college and university students were examined. A convenience sample of 1052 undergraduates at a group of midwestern colleges and universities and at a four-year college in a United States Caribbean territory responded to the Study Behavior Inventory (SBI), Form D. A series of factor analyses using the principal components model with iteration and varimax rotations yielded three factors composed of items which appear to deal with feelings of competence, preparation for daily routine academic tasks, and preparation for special academic tasks (e.g., term papers and examinations). Internal consistency reliability estimates for the entire instrument and the items in each of the three factors ranged from .70 to .88. The findings indicated that the SBI is a valid and reliable instrument for assessing study behaviors. It is suggested that providers of developmental education and other study skills programs should consider including a strong counseling component in their offerings and that it may be useful to view study behaviors as consisting of two sets of activities directed toward short-term, routine goals and toward long-range, specific goals, respectively.
Article
The purpose of this paper is to identify some of the current issues in learning strategies assessment and to describe the design and development of a specific instrument created to address some problems encountered in diagnosing student deficits. Several research and practical issues related to self-report inventories that assess learning strategies are briefly examined. The validity of this type of instrument in an applied setting (such as a study improvement course) is also discussed. Finally, the initial design and development of the Learning and Study Strategies Inventory (LASSI) is presented.
Article
The study was designed to measure the relationship between probability of endorsement of personality items and the scaled social desirability of the items. Scale values were determined by applying the method of successive intervals to 140 personality trait items which had been administered to 152 subjects with pertinent instructions. The items were then administered to a different group of 140 students as a personality inventory. The proportion of "yes" answers was taken as a measure of the probability of endorsement and correlated against the social desirability scale value for the items. The high degree of relationship (r = .871) is discussed.