Figure 7
Source publication
Learning analytics is an emerging discipline within data science. It is concerned with developing methods for exploring the unique and increasingly large-scale datasets collected in educational settings, including the collection, analysis, and visualization of such data. The goal of the analyses and visualizations is...
Contexts in source publication
Context 1
... grouping gives context to the data items and allows for analysis at the level of both data items and nodes. Figure 7.1 illustrates the structure of the sequence data model in which information about a student is grouped by semester. The sequence starts with an initial node that captures attributes outside of the node-based temporal sequence, such as demographics and prior academic achievement. ...
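As a rough illustration of this structure, the sketch below models a student as an initial node plus an ordered sequence of temporal nodes. All class and field names are hypothetical; the published model (Mahzoon, Maher, Eltayeby, Dou, & Grace, 2018) may organize its attributes differently.

```python
# A minimal sketch of the student sequence model described above.
# Field names are illustrative assumptions, not the paper's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InitialNode:
    """Attributes outside the temporal sequence, e.g. demographics
    and prior academic achievement."""
    demographics: dict
    prior_gpa: float

@dataclass
class TemporalNode:
    """All of one student's data for one period (a week or a semester)."""
    period: int
    tests: List[float]        # test scores recorded in this period
    activities: List[float]   # activity scores recorded in this period
    reflections: List[str]    # reflection texts submitted in this period

@dataclass
class StudentSequence:
    """An initial node followed by an ordered sequence of temporal nodes."""
    student_id: str
    initial: InitialNode
    nodes: List[TemporalNode] = field(default_factory=list)

    def append(self, node: TemporalNode) -> None:
        # Connecting nodes in order forms the temporal sequence.
        self.nodes.append(node)
```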
Context 2
... activities all depend on and build on one another. In addition, students reflected on their learning and on the outcomes of activities, which suggests a strong dependency, as shown in Figure 7.2. ...
Context 3
... do so, we built a temporal data model, called the "student sequence model" (Mahzoon, Maher, Eltayeby, Dou, & Grace, 2018). In this model, we put all the data for one week into one node as shown in Figure 7.2. Next, we connected the nodes to form a sequence. ...
Context 4
... we connected the nodes to form a sequence. The sequence was then passed to a signature generation submodule followed by the learning analytics submodule for final determination, as shown in Figure 7.3. ...
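A hedged sketch of that pipeline, continuing the StudentSequence sketch above: the node sequence feeds a signature-generation step, and the resulting signature feeds a learning-analytics step that makes the final determination. The per-period mean used as the "signature" and the fixed classification threshold are illustrative assumptions, not the paper's actual method.

```python
# Illustrative pipeline: sequence -> signature generation -> final determination.

def generate_signature(seq: "StudentSequence", feature: str) -> list:
    """Reduce one numeric salient feature (e.g. 'tests' or 'activities')
    to one value per temporal node."""
    signature = []
    for node in seq.nodes:
        values = getattr(node, feature)
        signature.append(sum(values) / len(values) if values else 0.0)
    return signature

def classify_risk(signature: list, threshold: float = 0.6) -> str:
    """Toy final determination: flag a student whose signature average
    falls below a cutoff as at-risk (DFW), otherwise successful (ABC)."""
    mean = sum(signature) / len(signature) if signature else 0.0
    return "DFW" if mean < threshold else "ABC"
```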
Context 5
... each salient feature, we ran sequence analytics to classify students at risk of earning a grade of D, F, or W in the course. Figure 7.4 shows two examples of individual student signatures: one generated for a student with a successful grade (A, B, or C) and one for an at-risk (DFW) student. ...
Context 6
... Figure 7.5 shows the averages of all the student signatures in the class grouped by final grade category. Figure 7.6 shows the averages grouped by successful (ABC) or at-risk (DFW). ...
Context 7
... In all three figures (7.4–7.6), the data include the tests, activities, and reflections used to generate the signatures. ...
Context 8
... evaluated the sequence model incrementally at multiple points in the semester to assess how the temporal model's accuracy changes over time. Figure 7.7 reports the model's accuracy for three salient features: tests, activities, and reflections. ...
Context 9
... this figure, the horizontal axis shows the number of weekly nodes included in the model, and the vertical axis shows the model's accuracy as a percentage. For example, from Figure 7.7 we can conclude that if we use only one week of data (e.g., tests, activities, or reflections), we can accurately classify the risk status of 70% of students. This accuracy increases as we include more nodes (i.e., more weeks of the semester) in the sequence model. ...
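The incremental evaluation behind Figure 7.7 can be sketched as re-fitting a classifier on progressively longer prefixes of the weekly data. The use of scikit-learn, a logistic-regression classifier, and the feature layout below are all assumptions for illustration; the paper's own classifier may differ.

```python
# Sketch: accuracy as a function of how many weekly nodes are included.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def accuracy_by_week(X_weekly: np.ndarray, y: np.ndarray) -> list:
    """X_weekly: (students, weeks) matrix of one salient feature per week.
    y: 1 for at-risk (DFW), 0 for successful (ABC)."""
    accuracies = []
    for k in range(1, X_weekly.shape[1] + 1):
        # Use only the first k weeks, mirroring the x-axis of Figure 7.7.
        model = LogisticRegression(max_iter=1000)
        scores = cross_val_score(model, X_weekly[:, :k], y, cv=5)
        accuracies.append(scores.mean())
    return accuracies
```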
Similar publications
Background
There is a strong association between interactions and cognitive engagement, which is crucial for constructing new cognition and knowledge. Although interactions and cognitive engagement have attracted extensive attention in online learning environments, few studies have revealed the evolution of cognitive engagement with interaction lev...
Citations
... To control for the effect of active learning, we include the variable AssignmentPass, which indicates whether a learner submits an assignment and receives an overall passing score on assignments. Completing assignments and receiving a passing grade is an indicator of active learning (Dorodchi et al., 2020). Learners' social network activities could also affect their learning performance and thus need to be controlled for. ...
Purpose – This research investigates the impact of learners’ non-substantive responses in online course forums, referred to as online listening responses, on e-learning performance. A common type of response in online course forums, online listening responses consist of brief, non-substantive replies/comments (e.g., “agree,” “I see,” “thank you,” “me too”) and non-textual inputs (e.g., post-voting, emoticons) in online discussions. Extant literature on online forum participation focuses on learners’ active participation with substantive inputs and overlooks online listening responses. This research, by contrast, stresses the value of online listening responses in e-learning and their heterogeneous effects across learner characteristics. It calls for recognition and encouragement from online instructors and online forum designers to support this activity.
Design/methodology/approach – The large-scale proprietary dataset comes from iCourse, a leading MOOC (massive open online course) platform in China. The dataset includes 68,126 records of learners in five MOOCs during 2014–2018. An ordinary least squares model is used to analyze the data and test the hypotheses.
Findings – Online listening responses in course forums, along with learners’ substantive inputs, positively influence learner performance in online courses. The effects are heterogeneous across learner characteristics, being more prominent for early course registrants, learners with full-time jobs, and learners with more e-learning experience, but weaker for female learners.
Originality/value – This research distinguishes learners’ brief, non-substantive responses (online listening responses) and substantive inputs (online speaking) as two types of active participation in online forums and provides empirical evidence for the importance of online listening responses in e-learning. It contributes to online forum research by advancing the active-passive dichotomy of online forum participation to a nuanced classification of learner behaviors. It also adds to e-learning research by generating insights into the positive and heterogeneous value of learners’ online listening responses to e-learning outcomes. Finally, it enriches online listening research by introducing and examining online listening responses, thereby providing a new avenue to probe online discussions and e-learning performance.
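For concreteness, a minimal sketch of this kind of OLS setup follows, including the AssignmentPass control mentioned in the citing passage above. The DataFrame and all column names (performance, listening, speaking, assignment_pass) are hypothetical stand-ins for the study's variables.

```python
# Sketch of an OLS model relating forum behavior to e-learning performance.
import statsmodels.formula.api as smf

def fit_performance_model(df):
    # performance ~ listening responses + substantive posts + controls
    model = smf.ols(
        "performance ~ listening + speaking + assignment_pass",
        data=df,
    )
    return model.fit()  # inspect coefficients via .summary()
```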
... Data analytics has been used in higher education to evaluate university learning material and curriculum (Campagni et al., 2014), identify features affecting students' success (Aulck et al., 2017; Dorodchi et al., 2018; Lykourentzou et al., 2009; Márquez-Vera et al., 2016), and aid institutional planning (Caputi & Garrido, 2015). Learning analytics (LA) has emerged as a subfield of data analytics in higher education (Baker & Inventado, 2014; Ray & Saeed, 2018), with the goal of understanding and improving students' learning and learning environments (Dorodchi et al., 2020; Hellings & Haelermans, 2020). Bernacki et al. (2020) highlight a common thread among the data analytics subfields: the desire to develop analytical methods capable of predicting the future based on known information (predictive analytics; Gandomi & Haider, 2015; Nandal et al., 2017). ...
... Predictive analytics can aid decisions in areas such as enrollment management and curriculum development (Rajni & Malaya, 2015). Other examples include predicting which students may receive a D, fail, or withdraw from a course (Dorodchi et al., 2020;Smith et al., 2012) and which students may be retained or leave a university (Murtaugh et al., 1999). Predictive analytics have often relied on statistical methods like regression techniques (Gandomi & Haider, 2015). ...
About one-third of college students drop out before finishing their degree. The majority of those remaining will take longer than 4 years to complete their degree at “4-year” institutions. This problem emphasizes the need to identify students who may benefit from support to encourage timely graduation. Here we empirically develop machine learning algorithms, specifically Random Forest, to accurately predict if and when first-time-in-college undergraduates will graduate based on admissions, academic, and financial aid records two to six semesters after matriculation. Credit hours earned, college and high school grade point averages, estimated family (financial) contribution, and enrollment and grades in required gateway courses within a student’s major were all important predictors of graduation outcome. We predicted students’ graduation outcomes with an overall accuracy of 79%. Applying the machine learning algorithms to currently enrolled students allowed identification of those who could benefit from added support. Identified students included many who may be missed by established university protocols, such as students with high financial need who are making adequate but not strong degree progress.
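A minimal sketch of this kind of graduation-prediction setup, assuming scikit-learn and tabular student records: the feature names and the train/test split below are illustrative assumptions, and the 79% accuracy reported above comes from the study's own data, not from this sketch.

```python
# Sketch: Random Forest predicting graduation from admissions, academic,
# and financial-aid features like those the study found most predictive.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FEATURES = ["credit_hours", "college_gpa", "hs_gpa",
            "family_contribution", "gateway_grade"]

def train_graduation_model(df: pd.DataFrame) -> RandomForestClassifier:
    X, y = df[FEATURES], df["graduated"]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
    return clf
```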