Table 4 - uploaded by Dragan Gasevic
Inferential statistics for the model fit assessment


Source publication
Article
Full-text available
This paper presents the results of a natural experiment investigating the effects of instructional conditions and experience on the adoption and sustained use of a learning tool. The experiment was conducted with undergraduate students, enrolled into four performing art courses (N=77) at a research intensive university in Canada. The students used...

Contexts in source publication

Context 1
... modeling was used to examine the association between the two factors (instructional conditions and previous experience) and the dependent variables. In order to assess this association for each of the dependent variables above and beyond the random effects, we built three models for each of the dependent variables - null model, fixed model, and final model (Table 4 in Appendix A). ...
Context 2
... the other hand, a fixed model included condition and experience as fixed effects, while the final model included condition, experience, and interaction between condition and experience, as fixed effects. Intraclass correlation coefficient (ICC) (Raudenbush & Bryk, 2002), second-order Akaike information criterion (AICc), and likelihood ratio test (Hastie, Tibshirani, & Friedman, 2011) were used to decide on the best fitting model (Table 4 in Appendix A). We also estimated an effect size (R²) for each model as a goodness-of-fit measure, calculating the variance explained using the method suggested by Xu (2003). ...
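The model-selection statistics named here (ICC, AICc, the likelihood-ratio test, and Xu's R² as variance explained) reduce to a few standard formulas. The sketch below states them as small Python helpers; in practice their inputs (variance components, log-likelihoods, parameter counts) would come from the fitted null/fixed/final models, which are not reproduced here:

```python
import math  # not strictly needed here, but typical alongside these helpers


def icc(var_group: float, var_resid: float) -> float:
    """Intraclass correlation coefficient: share of total variance
    attributable to a grouping factor (e.g., course, or student within course)."""
    return var_group / (var_group + var_resid)


def aicc(log_lik: float, k: int, n: int) -> float:
    """Second-order (small-sample) Akaike information criterion:
    AIC plus a correction term that grows as n/k shrinks."""
    aic = 2 * k - 2 * log_lik
    return aic + (2 * k * (k + 1)) / (n - k - 1)


def lr_test_stat(log_lik_restricted: float, log_lik_full: float) -> float:
    """Likelihood-ratio test statistic for nested models fitted by ML;
    chi-square distributed with df = difference in parameter counts."""
    return 2 * (log_lik_full - log_lik_restricted)


def xu_r2(resid_var_model: float, resid_var_null: float) -> float:
    """Xu (2003) effect size (Omega^2): proportional reduction in residual
    variance relative to the null (intercept-only) model."""
    return 1.0 - resid_var_model / resid_var_null
```

For nested models the LR statistic is compared against a chi-square distribution, and AICc is preferred over plain AIC when the sample size is small relative to the number of parameters, as with N=77 here.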
Context 3
... that the same significant effects were also found for counts of annotations, the interpretation of the results for counts of annotations per video is the same as stated for counts of annotations in the previous paragraph. Moreover, as shown in Table 4 in Appendix A and consistent with the results of the fixed effects for the count of annotations per video model, the random effect student within a course explained about 69%, while the course itself explained only 9% of the variability in the model. ...
Context 4
... students who were in the graded conditions had no different scores of word concreteness for annotation compared to the students in the non-graded condition. Interestingly though, as shown in Table 4 in Appendix A, the random effect of course explained about 43.1% of the variability in the model. This can probably shed some light on why the estimated mean values of density (Figure 5) were different between the courses with graded vs. non-graded instructional conditions. ...

Citations

... Additionally, findings of higher student satisfaction are noteworthy since the use of tools (e.g., feedback tools) is influenced by how students experience these tools (Gašević, Mirriahi, et al., 2017). Although past studies demonstrated innovative methods to personalize support using meaningful information about students' learning processes obtained by trace data, the challenge remains to combine the analytics-based approach for the measurement of students' actual ongoing SRL processes (i.e., SRL processes in real-time) with key features of (adaptive) scaffolding which also offer personalized support content. ...
... Yet, we observed that despite the potential overload and lesser time, learning performance remained stable in scaffolded groups; furthermore, the personalized scaffolds affected students' learning activities. In order for students to be adept at using tools (i.e., scaffolds in this study) and recognize its value, they must first develop proficiency (Gašević, Mirriahi, et al., 2017). ...
Article
Full-text available
Self-Regulated Learning (SRL) is related to increased learning performance. Scaffolding learners in their SRL activities in a computer-based learning environment can help to improve learning outcomes, because students do not always regulate their learning spontaneously. Based on theoretical assumptions, scaffolds should be continuously adaptive and personalized to students' ongoing learning progress in order to promote SRL. The present study aimed to investigate the effects of analytics-based personalized scaffolds, facilitated by a rule-based artificial intelligence (AI) system, on students' learning process and outcomes by real-time measurement and support of SRL using trace data. Using a pre-post experimental design, students received personalized scaffolds (n = 36), generalized scaffolds (n = 32), or no scaffolds (n = 30) during learning. Findings indicated that personalized scaffolds induced more SRL activities, but no effects were found on learning outcomes. Process models indicated large similarities in the temporal structure of learning activities between groups, which may explain why no group differences in learning performance were observed. In conclusion, analytics-based personalized scaffolds, informed by students' real-time SRL as measured and supported with AI, are a first step towards adaptive, AI-based SRL support that should be further developed in future research.
... However, this system does not help instructors determine why these points were important. Moreover, research has focused more on students' adoption of the tool (Gašević et al., 2017) and less on instructors' use of the tool. The gaze-based metric "with-me-ness direction" measures co-attention between learners' gaze and instructors' dialogue in a video lecture, helping instructors understand students' attention state. ...
Article
Full-text available
The use of online video lectures in universities, primarily for content delivery and learning, is on the rise. Instructors’ ability to recognize and understand student learning experiences with online video lectures, identify particularly difficult or disengaging content and thereby assess overall lecture quality can inform their instructional practice related to such lectures. This paper introduces Tcherly, a teacher-facing dashboard that presents class-level aggregated time series data on students’ self-reported cognitive-affective states they experienced during a lecture. Instructors can use the dashboard to evaluate and improve their instructional practice related to video lectures. We report the detailed iterative prototyping design process of the Tcherly Dashboard involving two stakeholders (instructors and designers) that informed various design decisions of the dashboard, and also provide usability and usefulness data. We demonstrate, with real-life examples of Tcherly Dashboard use generated by the researchers based on data collected from six courses and 11 lectures, how the dashboard can assist instructors in understanding their students’ learning experiences and evaluating the associated instructional materials.
... Recent works show that students' SRL manifests differently depending on the learning context and course modality [1,2,6,9]. For example, Matcha et al. [9] compared students' strategies in a BL course, in a Flipped Classroom (FC), and in Massive Open Online Courses (MOOCs), showing that students used similar strategies in BL and FC modalities, but these differed from the tactics used in MOOCs. ...
... To find the clusters, we performed an agglomerative hierarchical clustering using the scikit-learn-0.22.2 Python package. For selecting the number of clusters, we analyzed the dendrogram of the clustering process and removed from the analysis those clusters containing fewer than 15 students, treating them as outliers. ...
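The clustering step quoted above can be sketched with scikit-learn. The feature matrix, cluster count, and data below are illustrative stand-ins (synthetic points, not the authors' student data); only the overall procedure — agglomerative clustering followed by dropping clusters below a size threshold — mirrors the text:

```python
import numpy as np
from collections import Counter
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Hypothetical per-student feature vectors (e.g., tactic frequencies):
# two dense groups of 20 "students" each, plus two extreme outliers.
X = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 2)),
    rng.normal(5.0, 0.1, size=(20, 2)),
    [[10.0, 10.0], [-10.0, -10.0]],
])

# Agglomerative hierarchical clustering; in the quoted study the number
# of clusters was read off a dendrogram, here it is fixed for the sketch.
labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X)

# Treat clusters with fewer than MIN_SIZE members as outliers and drop
# them from the analysis, mirroring the 15-student cutoff.
MIN_SIZE = 15
sizes = Counter(labels)
kept = {c for c, n in sizes.items() if n >= MIN_SIZE}
mask = np.array([lab in kept for lab in labels])
X_retained = X[mask]
```

With scipy, `scipy.cluster.hierarchy.dendrogram` on the corresponding linkage matrix would supply the visual inspection step the authors describe.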
Chapter
In the past years, Blended Learning (BL) has gained traction as a methodology in Higher Education Institutions. Despite the positive effects of BL, several studies have shown that students require high levels of self-regulation to succeed in these types of practices. Still, there is little understanding of how students organize their learning in authentic BL contexts. To fill this gap, this paper presents an exploratory study to analyze the learning tactics and strategies of 119 students in a BL course using the Moodle Learning Management System. Specifically, we examined the effects on students’ learning behavior before and after an intervention with a dashboard-based plug-in designed to support self-regulated learning (SRL). Using a data-driven approach based on Hidden Markov Models (HMM), we identified the tactics and strategies employed by the students throughout the course. The results show that students’ tactics and strategies changed significantly depending on the course design and the context in which learning occurs (in or beyond the class). Also, we found evidence indicating that the main factor that correlates to the students’ learning strategies is their previous knowledge and the students’ SRL ability profile. Keywords: Self-regulated learning, Learning analytics, Blended learning
... This diversity of frameworks is perhaps why researchers have lately noticed a lack of widespread practice in using LA-supported LD frameworks or instruments. Mangaroska and Giannakos (2018) present a checklist for future work based on reviewed papers with-among other items-the following suggestions: (a) provide details about the learning environment and the pedagogical approaches used, where improvements in LD experiences based on LA outcomes will be measured (Rodríguez-Triana et al., 2015); (b) indicate how LA metrics offer insight into learning processes and can be theoretically grounded for meaningful interpretation to inform theory and design (Gašević et al., 2017); and (c) evaluate and denote how educators are planning, designing, implementing, and evaluating LD decisions (McKenney & Mor, 2015). Furthermore, other researchers present a strong plea for addressing the co-creation of LA, mainly to solve issues like a mismatch between design and capacity, invalid inferences, and reconceptualization from product to service (Dollinger & Lodge, 2018). ...
Article
Learning activities are at the core of every educational design effort. Designing learning activities is a process that benefits from reflecting on previous runs of those activities. One way to measure the behaviour and effects of design choices is to use learning analytics (LA). The challenge, however, lies in the unavailability of an easy-to-use, LA-supported learning design (LD) method. We established such a method—the Fellowship of Learning Activities and Analytics (FoLA2)—reinforced by a gameboard and cards, to provide structure and inspiration. The method enables several participants with different roles to interact with a set of card decks to collaboratively create an LA-supported LD. Using this method helps to design learning activities in a collaborative, practical way; it also raises awareness about the benefits of multidisciplinary co-design and connections between LA and LD. FoLA2 can be used to develop, capture, and systematize design elements and to systematically incorporate LA.
... Winne (2020) takes a step further and highlights that in learning analytics, reliability is not simply a function of a good design of a learning environment used for data collection; it is equally dependent on a learner's agency 1 and level of metacognitive knowledge 2 about learning tools that are available to them in the learning environment. If learners do not know about tools that are available in a learning environment, they are not likely to use them (Gašević et al., 2017;Winne, 2006). Thus, they will not 'produce' data that are deemed necessary to make assessment inferences about their learning. ...
Article
Learning analytics uses large amounts of data about learner interactions in digital learning environments to understand and enhance learning. Although measurement is a central dimension of learning analytics, there has thus far been little research that examines links between learning analytics and assessment. This special issue of Computers in Human Behavior highlights 11 studies that explore how links between learning analytics and assessment can be strengthened. The contributions of these studies can be broadly grouped into three categories: analytics for assessment (learning analytic approaches as forms of assessment); analytics of assessment (applications of learning analytics to answer questions about assessment practices); and validity of measurement (conceptualization of and practical approaches to assuring validity in measurement in learning analytics). The findings of these studies highlight pressing scientific and practical challenges and opportunities in the connections between learning analytics and assessment that will require interdisciplinary teams to address: task design, analysis of learning progressions, trustworthiness, and fairness – to unlock the full potential of the links between learning analytics and assessment.
... The selfmonitoring tick boxes shown by the dashed boxes in Figure 1-a were added to help students leverage best practices (MCQs principles) for creating MCQs (e.g., [12]) and better monitor their performance, which may lead to the development of high-quality resources. The effect of externally-facilitated monitoring on SRL has been acknowledged in past studies [5,13,15]. ...
Conference Paper
Full-text available
The benefits of incorporating scaffolds that promote strategies of self-regulated learning (SRL) to help student learning are widely studied and recognised in the literature. However, the best methods for incorporating them in educational technologies and empirical evidence about which scaffolds are most beneficial to students are still emerging. In this paper, we report our findings from conducting an in-the-field controlled experiment with 797 post-secondary students to evaluate the impact of incorporating scaffolds for promoting SRL strategies in the context of assisting students in creating novel content, also known as learnersourcing. The experiment had five conditions, including a control group that had access to none of the scaffolding strategies for creating content, three groups each having access to one of the scaffolding strategies (planning, externally-facilitated monitoring and self-assessing) and a group with access to all of the aforementioned scaffolds. The results revealed that the addition of the scaffolds for SRL strategies increased the complexity and effort required for creating content, were not positively assessed by learners and led to slight improvements in the quality of the generated content. We discuss the implications of our findings for incorporating SRL strategies in educational technologies.
... Multiple measures and methods can also ameliorate the limitation of self-report surveys in capturing data on the frequency of assessment or SRL events, rather than their quality. Gašević et al. (2017) emphasize the importance of attention to quality of regulation: "learners do not increase their usage of a newly acquired learning strategy, but rather apply this strategy in a more effective manner. In other words, when a strategy is effectively applied, the quantity remains consistent while the quality of the learning product increases" (p. ...
Article
Full-text available
Current conceptions of assessment describe interactive, reciprocal processes of co-regulation of learning from multiple sources, including students, their teachers and peers, and technological tools. In this systematic review, we examine the research literature for support for the view of classroom assessment as a mechanism of the co-regulation of learning and motivation. Using an expanded framework of self-regulated learning to categorize 94 studies, we observe that there is support for most but not all elements of the framework but little research that represents the reciprocal nature of co-regulation. We highlight studies that enable students and teachers to use assessment to scaffold co-regulation. Concluding that the contemporary perspective on assessment as the co-regulation of learning is a useful development, we consider future directions for research that can address the limitations of the collection reviewed.
... Instructional management includes student control, instructional style, setting rules, and the regulation of student misbehaviours (Sass, Lopes, Oliveira, & Martin, 2016). Instructional design and resources have been designed to include scaffolding of learner participation in a discussion forum (Gasevic, Mirriahi, Dawson, & Joksimovic, 2016). Stated differently, what a teacher believes is the best behaviour and instructional management style may not be realized depending on the class environment (Martin & Sass, 2010). ...
Article
Full-text available
The present investigation intends to assess instructional management and institutional effectiveness concerning the age and experience of school principals. The sample comprised twenty schools of Jalandhar and Kapurthala. The researchers used Hallinger’s Instructional Management Rating Scale and a self-prepared Institutional Effectiveness Rating Scale for the investigation. The result of the study reveals that in schools with younger principals, teachers exhibit better behaviour on coordinating the curriculum, protecting instruction time and developing academic standards of instructional management than teachers in schools with older principals. In schools with more experienced principals, teachers exhibit better behaviour concerning instructional management on co-ordinating the curriculum, protecting instruction time, providing incentives for teachers, protecting professional development, developing academic standards, and providing instructions for learning than the teachers in schools with less experience. In schools with older and more experienced principals, teachers exhibit better behaviour on supervising and evaluating instruction dimension of instructional leadership than the teachers in schools with older and less experience, younger and more experienced and younger and less experienced principals. There is no significant difference in the institutional effectiveness of schools with young and old aged principals. There is no significant difference in institutional effectiveness of schools with more and less experienced principals.
... Finally, regarding their own processes, no teacher considered any Software Engineering principle, model, or development methodology. Instead, they employed a design and development process guided by feedback from the system's end users (students and teachers) (Bedregal-Alpaca et al., 2019; Gašević et al., 2017) or by the platform's effectiveness in improving the teaching-learning process (Chounta et al., 2017; Flores-Fuentes & Juárez-Ruiz, 2017). In some cases, on the other hand, the choice was to identify the platform components that should be improved, developed, or added. ...
... Only aspects related to learning are evaluated (Bakki et al., 2020; Kali et al., 2015; Torres et al., 2014). A common trait of the studies in which Software Engineering was not considered at all is that the authors themselves, upon concluding their research, mention the need for a process to guide software development, while also acknowledging the complexity involved in designing and building a platform that meets all the requirements needed to support the teaching-learning process (Bedregal-Alpaca et al., 2019; Gašević et al., 2017). ...
Article
Full-text available
Due to the growing presence of digital technology in formal educational settings, the digital applications that support teaching-learning processes are becoming increasingly sophisticated, and teachers are taking an active part in designing them. One of the aspects that has evolved the most is software development for the design of educational technology platforms. This topic has been the subject of various scientific studies, but no publications account for how Software Engineering principles have been considered by teachers when developing software for educational technology platforms. To address this, a systematic review of the specialized literature published in the last five years was conducted using the documentary research method of metasynthesis. Information was retrieved from the Springer Link, Science Direct, ERIC, and CONRICyT scientific databases with the following search string: "Software Engineering" AND ("Instructional Design" OR "Educational Technology"). A total of 69 articles written in English or Spanish were analyzed. After a hermeneutic interpretation of the results, the most relevant finding suggests that, although Software Engineering principles are indeed considered and applied by most teachers, there is a gap between theory and practice in educational technology, which stems from the complexity of reconciling pedagogy with technological development. Finally, the development of a model is suggested to facilitate applying Software Engineering principles in the platform design process, which in turn would ease the process of developing educational software.
... Regarding external conditions, we focus on their major component, namely instructional settings when defining variables (indicators) for the predictive models as well as when interpreting the results of such models. The role of instructional conditions in learners' self-regulation efforts and their use of learning tools has been examined and evidenced in numerous studies (e.g., Garrison & Cleveland-Innes, 2005;Gašević, Mirriahi, Dawson, & Joksimović, 2017;Lust et al., 2012). Accordingly, we can expect that the instructional settings would shape the students' interaction with the online component of a blended course, i.e., their use of the learning resources (i.e., materials, tools, activities) made available through the institutional LMS . ...
... Mixed-effect models combine fixed and random effects and thus allow for assessing the association between the fixed effects and the dependent variable after accounting for the variability due to the random effects (Hayes, 2006). These models have been used in several similar studies (e.g., Gašević et al., 2017; Joksimović et al., 2015; Jovanović, Gašević, Pardo, Dawson, & Whitelock-Wainwright, 2019) to estimate the strength of association between some constructs of interest (modelled as fixed effects) and the outcome, while controlling for the variability that originates in the individual differences among students and/or courses (random effects). Accordingly, we used mixed-effect linear models to examine the association between indicators of students' engagement with online learning activities (fixed effects) and their course performance (outcome), while at the same time accounting for the variability due to the student and course specific features (random effects). ...
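A minimal sketch of such a mixed-effect model, using Python's statsmodels on simulated data; the variable names (`grade`, `activity`, `student`) and all values are hypothetical stand-ins for the engagement indicators and outcomes described above, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_students, per_student = 60, 5
student = np.repeat(np.arange(n_students), per_student)

# Simulate a random intercept per student (individual differences)
# plus a fixed effect of an engagement indicator on the outcome.
student_effect = rng.normal(0.0, 2.0, n_students)[student]
activity = rng.uniform(0.0, 10.0, n_students * per_student)
grade = (50.0 + 3.0 * activity + student_effect
         + rng.normal(0.0, 1.0, n_students * per_student))

df = pd.DataFrame({"grade": grade, "activity": activity, "student": student})

# Mixed-effect linear model: fixed effect of activity, random intercept
# grouped by student; ML fitting (reml=False) allows likelihood-ratio
# comparisons against a null model if desired.
result = smf.mixedlm("grade ~ activity", df, groups=df["student"]).fit(reml=False)
slope = result.fe_params["activity"]
```

Because the random intercept absorbs between-student variability, the fixed-effect slope estimates the within-design association between the engagement indicator and the outcome, which is the role these models play in the quoted studies.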
Article
Full-text available
Predictive modelling of academic success and retention has been a key research theme in Learning Analytics. While the initial work on predictive modelling was focused on the development of general predictive models, portable across different learning settings, later studies demonstrated the drawbacks of not considering the specificities of course design and disciplinary context. This study builds on the methods and findings of related earlier studies to further explore factors predictive of learners' academic success in blended learning. In doing so, it differentiates itself by (i) relying on a larger and homogeneous course sample (15 courses, 50 course offerings in total), and (ii) considering both internal and external conditions as factors affecting the learning process. We apply mixed effect linear regression models, to examine: i) to what extent indicators of students' online learning behaviour can explain the variability in the final grades, and ii) to what extent that variability is attributable to the course and students' internal conditions, not captured by the logged data. Having examined different types of behaviour indicators (e.g., indicators of the overall activity level, those indicative of regularity of study, etc), we found little difference, if any, in their predictive power. Our results further indicate that a low proportion of variance is explained by the behaviour-based indicators, while a significant portion of variability stems from the learners' internal conditions. Hence, when variability in external conditions is largely controlled for (the same institution, discipline, and nominal pedagogical model), students’ internal state is the key predictor of their course performance.