Predicting success: How learners’ prior knowledge,
skills and activities predict MOOC performance
Gregor Kennedy
Centre for the Study of
Higher Education
The University of Melbourne
gek@unimelb.edu.au
Carleton Coffrin
National ICT Australia
Victoria Research Laboratory
Melbourne, Victoria, Australia
carleton.coffrin@nicta.com.au
Paula de Barba
Centre for the Study of
Higher Education
The University of Melbourne
paula.de@unimelb.edu.au
ABSTRACT
While MOOCs have taken the world by storm, questions remain about
their pedagogical value and high rates of attrition. In this paper we argue
that MOOCs with open entry and open curriculum structures place
pressure on learners not only to have the requisite knowledge and skills
to complete the course, but also the skills to traverse the course in
adaptive ways that lead to success. The empirical study presented in the
paper investigated the degree to which students’ prior knowledge and
skills, and their engagement with the MOOC as measured through
learning analytics, predict end-of-MOOC performance. The findings
indicate that prior knowledge is the most significant predictor of MOOC
success, followed by students’ ability to revise and revisit their previous
work.
Categories and Subject Descriptors
K.3.1 [Computer Uses in Education]: Distance learning (MOOCs); J.1
[Administrative Data Processing]: Education
General Terms
Measurement, Performance, Experimentation, Human Factors.
Keywords
Prior knowledge, learning analytics, engagement, MOOCs
1. INTRODUCTION
The last five years have seen a rapid rise in the popularity of Massive
Open Online Courses, or MOOCs. This rise in popularity has been
reflected both in the number of learners enrolling in these courses and
the number of universities now offering courses in this format. While
there are many challenges in providing learners with high quality
educational experiences at such a large scale, MOOCs have created
significant opportunities for educational researchers to better understand
how learners develop their knowledge and understanding through online
learning. The sheer number of learners who participate in MOOCs,
often in the thousands, means that researchers have access to large
datasets of each learner’s online interactions which, through the use of
learning analytics, can be used to develop a greater understanding of
learners’ online experiences, processes, and outcomes [1].
While retention and pass rates have often been used as markers of
success in traditional courses, these are more problematic metrics in
MOOCs (see [2][3]). As MOOCs are free, students have less financial
incentive to persist with the course and this may be a reason for the high
attrition rates. Moreover, it is possible, given that MOOCs have open
enrollment and do not require any demonstration of pre-existing
experience, qualifications or credentials, that enrolling students who
have poor prior knowledge and skills may find it difficult to successfully
engage with and complete the course.
1.1 MOOC Preparedness: Students’ Prior
Knowledge and Skills
For many years, educational theories and frameworks have implicated
students’ prior knowledge as a key ingredient in an individual’s learning
success. Piaget’s [4] contention that understanding develops through a
process of assimilation and accommodation is underpinned by the idea
of pre-existing or prior knowledge. These foundational concepts in
educational psychology suggest that a student’s understanding is
developed by building on and modifying his or her existing knowledge
structures or schema. As [5] suggest, humans “come to formal education
with a range of prior knowledge, skills, beliefs, and concepts that
significantly influence what they notice about the environment and how
they organize and interpret it” (p. 10).
In many ways the emphasis of developmental and cognitive psychology
on the importance of prior knowledge underpins the constructivist
approaches to teaching and learning which currently dominate the
learning technology landscape [6]. These constructivist approaches
emphasise the need to understand what individual students bring to each
learning situation in terms of their background knowledge and
understanding. The most beneficial teaching and learning environments,
rather than having teachers simply broadcast information for students to
learn, take into account the potentially very different starting points of
individual students. The effectiveness of any particular instructional
situation or environment is dependent on accounting for different
student perspectives, backgrounds and prior knowledge.
As noted by Bransford, Brown and Cocking [5], students not only
possess prior knowledge in specific discipline-based content areas, they
also have pre-existing generic learning skills, which they bring to
learning situations. Education research is replete with taxonomies of the
types of cognitive or learning skills and strategies that students draw
upon in teaching and learning environments, including problem solving,
critical thinking, self-regulation and metacognition [7][8]. Students’
ability to further develop and employ these skills effectively in the
course of their learning has received a great deal of attention by
education researchers [9].
In short, previous educational research has established that prior
knowledge and skills, both in terms of content knowledge and generic
learning skills such as problem solving, can greatly influence students’
learning success.
1.2 MOOC Preparedness: Navigating Open
Curricula
A feature of the emerging MOOC landscape has been the opportunity
for educators to experiment with curriculum structures. While there is
some debate about the origins of MOOCs, many point to George
Siemens and Stephen Downes’ course on Connectivism and Connective
Knowledge as one of the first [3]. This course was founded on a
connectivist pedagogical framework [10], which at its heart advocates a
peer-based, networked approach to learning. While the “open” in
MOOCs is often taken to refer both to “open access” (anyone with
access to the internet can enroll) and “open educational resources”
(depending on the course, students make use of freely available
resources on the Internet), it can also refer to “open” curriculum
structures that are consistent with a connectivist philosophy of learning.
What this means in practice is that, unlike more traditional courses in
which students’ engagement with content is structured around weekly
topics and assignments which are followed in a progressive, linear way,
connectivist or “cMOOCs” with a more open curriculum structure are
less prescriptive and give students greater flexibility in the ways in
which they can engage with the course, other learners and the
curriculum material.
A clear implication of this is that students who choose to participate in a
MOOC, particularly those with an open curriculum structure, need to be
prepared for a certain amount of self-direction. With cMOOCs there is
arguably a greater onus on students to manage and plan their own
learning as they choose what aspects of the course they want and need
to engage with, based on their own interest and prior knowledge and
skills. Moreover, with open curriculum structures in which students can
complete tasks and assignments in any order, there is an opportunity for
learners to revisit and revise in ways that are often not advocated or even
available in more structured linear courses.
Set against this background, this paper presents an investigation of how
students’ preparedness for MOOCs impacts on their participation and
success in the course. More specifically, this paper considers how
students’ prior content knowledge in an advanced area of computer
science (Discrete Optimization) and their problem solving skills related
to this area of computer science impact on their patterns of engagement,
as measured through learning analytics, and their learning outcomes or
performance.
2. METHOD
2.1 Course Structure and Participants
Data were collected from the first session of Discrete Optimization, a
MOOC provided by the University of Melbourne on the Coursera
platform. Discrete Optimization was first offered in June 2013, and was
a graduate level course, which assumed incoming learners had a
background in computer science and strong computer programming
skills. It consisted of nine weeks of material presented in an open
curriculum structure. That is, all of the assignments and lectures were
made available in the first week, and while there was a logical sequence
and order in which material and assignments could be covered, broadly
based on difficulty, learners designed their own study plans, which they
completed at their own pace (see [2] [11] for further details).
The inaugural session of this course attracted the interest of 37,777
learners, with 22,731 starting the course, 6,635 active in the
assignments, and 774 receiving a certificate of completion for the
course. The sample for this investigation was drawn from a subset of
students who started the course: those who were active in the
assignments (n=6,635), and those who received a certificate of
completion (n=774). These groups were not mutually exclusive.
The assessments in Discrete Optimization consisted of seven
programming assignments designed around problem solving tasks.
Each of the core assignments presented the learner with an emulation of
what could be regarded as a real world Discrete Optimization
experience; that is, an employer telling them: “solve this problem, I
don't care how”. The lecture materials contained the necessary concepts
and ideas to solve the assignments, but the most appropriate technique
to apply was left for the learners to work out. This assignment design
helps to prepare learners for how optimization is conducted in the real
world and has been used effectively over a number of years in the
classroom version of Discrete Optimization [11].
The programming assignments increased in difficulty, with each
assignment requiring students to show a deeper understanding of the
course material. For example, the “first” (i.e. least difficult) assignment,
Screen Name, was very simple and only required a basic understanding
of computer programming to complete. The second assignment,
Knapsack, was more challenging and required significant prior content
knowledge in computer science. The next assignment, Graph Coloring,
required all of the skills of the previous assignments (basic computer
programming and requisite computer science content knowledge), but
in order to complete this assignment students needed to have more
sophisticated problem-solving skills. The remaining assignments
increased in difficulty and required students to show progressively more
advanced knowledge and skills in computer programming, computer
science, and problem solving. Furthermore, these remaining
assignments were designed to be very challenging. A student striving to
achieve full marks needed to master nearly all of the course material. By
this design, it was expected that high-achieving students would revisit
past assignments and revise their work as they developed their skills in
the subject area.
2.2 Learning Analytic Data Acquisition
All data analysed in this study were collected automatically by the
Coursera platform. The Coursera platform records a vast amount of
information on learners’ activities, and provides end users (instructors,
administrators, researchers) with three general views: course-wide
statistics, which provide an aggregated overview of activity for the entire
class; a grade book, which provides summary information about each
learner’s performance; and an event log, which tracks every interaction
the learners have with the platform. The grade book and the event log
data formed the basis for the learning analytics used in this investigation.
The grade book data were exported as an Excel spreadsheet, and the
event log data were provided as an SQL database, for which custom
SQL queries and Python scripts were used to extract, compute, and
aggregate the required student metrics. These two data sets were merged
into a single data file before being imported into SPSS version 21 for
analysis.
2.3 Measures
The measures used in this investigation are defined below.
2.3.1 Knapsack Points
Knapsack Points was derived from the knapsack programming
assignment, which assessed students’ prior content knowledge in
computer science. Students who have a strong background in computer
science would typically have been exposed to a method of algorithm
design called “dynamic programming”. The knapsack problem is
routinely used in computer science teaching to assess dynamic
programming, and completing this problem well is indicative of a strong
background in computer science. Students could score from 0 to 60
points on the knapsack problem, and Knapsack Points was a measure of
the total points earned on the knapsack assignment on the last day of
class.
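As background for readers outside computer science, below is a minimal sketch of the dynamic programming technique the assignment rewards; it is purely illustrative and is not the course’s reference solution.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming, O(len(values) * capacity) time."""
    # best[c] holds the best value achievable with capacity c so far
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # scan capacities downwards so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Three items as (value, weight): (8, 4), (10, 5), (15, 8); capacity 10
print(knapsack([8, 10, 15], [4, 5, 8], 10))  # -> 18 (items 1 and 2)
```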
2.3.2 Graph Coloring Points
Graph Coloring Points was derived from the graph coloring
programming assignment, which assessed students’ skills in computer
science problem solving. Students could score from 0 to 60 points on
this assignment, and Graph Coloring Points was a measure of the total
points earned on the graph coloring assignment on the last day of class.
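Again as background only, the sketch below shows one simple heuristic for this problem, greedy first-fit coloring. The assignment deliberately left the choice of technique open to the learner, so this is not the expected solution, merely an illustration of the problem.

```python
def greedy_coloring(adjacency):
    """First-fit coloring: give each vertex the smallest color not already
    taken by a neighbor. A heuristic; not guaranteed optimal.

    adjacency: dict mapping each vertex to a list of its neighbors.
    """
    colors = {}
    for vertex in adjacency:
        taken = {colors[n] for n in adjacency[vertex] if n in colors}
        color = 0
        while color in taken:
            color += 1
        colors[vertex] = color
    return colors

# A triangle (0-1-2) with a pendant vertex 3 attached to vertex 2
print(greedy_coloring({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))
# -> {0: 0, 1: 1, 2: 2, 3: 0}
```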
2.3.3 Assignment Submissions
Assignment Submissions was a measure of the total number of times a
student submitted any assignment during the course. This value could range
from 0 to infinity; however, if a student submitted each assignment just once
he or she would have an Assignment Submissions value of 37. The
Assignment Submissions value was calculated using a simple frequency
count from each student’s event log data.
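As a sketch, this measure reduces to a frequency count over submission events; the event tuple layout below is an assumption for illustration, not the actual log format.

```python
from collections import Counter

def assignment_submissions(events):
    """Count submission events per student.

    events: iterable of (student_id, assignment_id, timestamp) tuples
    (an assumed layout; the real event log schema differs).
    """
    return Counter(student for student, _assignment, _ts in events)

log = [("s1", "knapsack", 1), ("s1", "knapsack", 2), ("s2", "coloring", 3)]
print(assignment_submissions(log))  # -> Counter({'s1': 2, 's2': 1})
```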
2.3.4 Active Days
Active Days was a measure of the total number of days a student was
actively submitting assignments in the course. As the course was nine weeks
long, this value could range from 0 to 62. Active days was calculated using
the event log data by taking the timestamp of each learner’s first assignment
submission and subtracting it from the timestamp of the last assignment
submission, and rounding down to a whole day.
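A sketch of this calculation, assuming Unix-epoch timestamps in seconds (both the units and the input layout are assumptions):

```python
def active_days(timestamps):
    """Whole days between a learner's first and last submission timestamps."""
    if not timestamps:
        return 0
    span_seconds = max(timestamps) - min(timestamps)
    return int(span_seconds // 86400)  # round down to a whole day

# First submission to last submission spans exactly 18 days
print(active_days([1371000000, 1371100000, 1372555200]))  # -> 18
```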
2.3.5 Assignment Switches
Assignment switches was a measure of the number of times a learner
switched from submitting one assignment to a different assignment. This
measure reflects the degree to which students were following a more
traditional linear progression in the course. Students with a higher score on
Assignment Switches were more inclined to move between assignments and
revisit assignments that they had previously completed. The value for
Assignment switches could range from 0 to infinity, however, a learner who
worked on the assignments in a linear order and did not revisit any previous
assignments (i.e. moved from one assignment to another in a linear
sequence) would have a value of 6. The assignment switches value was
calculated by parsing the learner’s event log data in chronological order.
Each time the assignment submission type changed, the switching value was
increased by one.
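The parse amounts to a single chronological scan, as in the sketch below (the event layout is again an assumption):

```python
def assignment_switches(events):
    """Count changes of assignment across one learner's submissions.

    events: list of (timestamp, assignment_id) tuples for a single learner.
    """
    ordered = [assignment for _, assignment in sorted(events)]
    return sum(1 for prev, cur in zip(ordered, ordered[1:]) if prev != cur)

# knapsack -> knapsack -> coloring -> knapsack: two switches
print(assignment_switches([(1, "knapsack"), (2, "knapsack"),
                           (3, "coloring"), (4, "knapsack")]))  # -> 2
```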
2.3.6 Total Points
Total Points was a measure of the students’ overall performance in the
course and was calculated as the cumulative points earned by the learner
across all assignments on the final day of the course. The value range for this
measure was 0 to 396.
3. RESULTS
The analyses conducted as part of this investigation employed two samples.
The first sample included all participants (n=6,635) who were active in the
MOOC as determined by submitting at least one assignment. The second
sample, a subset of the first, only included participants who passed the
course (n=774). These two samples were investigated as it was expected
that, given the focus on prior knowledge and skills, distinct patterns may
emerge for those who passed the course and those who did not.
3.1 All Participants
Descriptive statistics for all variables and the correlations between them for
all 6,635 participants are presented in Table 1. Correlations between all
variables were significant. Strong positive correlations were seen among
knapsack points, graph coloring, active days, and assignment switching,
particularly between graph coloring and both active days and assignment
switching, and between active days and assignment switching. While
positive, assignment submissions were only weakly correlated with all other
variables.
A stepwise multiple regression was conducted to determine the degree to
which these variables were able to predict total points. At step 1 of the
analysis knapsack points was entered into the regression model and was
significantly related to total points, F (1,6633) = 7401.77, p<.001. This
model accounted for approximately 53% of the variance of total points (Adj.
R2 = .527). At step 2 of the analysis graph coloring was entered into the
model and was also significantly related to total points, F (2,6632) =
15982.75, p<.001. This model accounted for approximately 83% of the
variance of total points (Adj. R2 = .828). Total points was primarily
predicted by graph coloring, and to a lesser extent by knapsack points.
Finally, at step 3 of the analysis the three remaining variables were entered
into the regression model and active days and assignment switching were
statistically significant, F (5,6629) = 13044.39, p < .001. This model
accounted for approximately 91% of the variance of total points (Adj. R2 =
.908). Total points were primarily predicted by graph coloring, assignment
switches, and active days, and to a lesser extent by knapsack points.
Assignment submission’s contribution to the model was not significant.
Regression coefficients and other relevant statistics for each model are
presented in Table 2.
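The analysis itself was run in SPSS 21; for readers wishing to reproduce the blockwise structure of these models, the following is a minimal sketch of a comparable hierarchical (forced-entry) regression in Python. Note that SPSS’s stepwise procedure additionally selects predictors by significance at each step, and all variable names here are assumptions.

```python
import statsmodels.api as sm

def hierarchical_ols(df, outcome, blocks):
    """Fit OLS models, adding each block of predictors in turn, and
    report the adjusted R-squared at each step."""
    results, predictors = [], []
    for block in blocks:
        predictors.extend(block)
        X = sm.add_constant(df[predictors])
        fit = sm.OLS(df[outcome], X).fit()
        results.append((list(predictors), fit.rsquared_adj))
    return results

# df: one row per learner holding the five metrics and total_points
# for preds, adj_r2 in hierarchical_ols(df, "total_points",
#         [["knapsack"], ["graph_coloring"],
#          ["submissions", "active_days", "switches"]]):
#     print(preds, round(adj_r2, 3))
```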
3.2 “Passing” Participants
From the 6,635 learners who participated in the previous analysis, 774
passed the course. Descriptive statistics and correlations between variables
for these participants are presented in Table 3. There were moderate positive
correlations between graph coloring and knapsack points, and between
assignment submissions and assignment switching. There were weak positive
correlations between assignment submissions and both graph coloring and
active days, and between assignment switching and both graph coloring and
active days.
Again, a stepwise multiple regression was used to determine the degree to
which variables predicted total points. At step 1 of the analysis knapsack
points was entered into the regression equation and was significantly related
to total points, F (1,772) = 102.87, p<.001; and accounted for only about
12% of the variance in total points (Adj. R2 = .116). The second step of the
model, in which graph coloring was entered, was significant (F (2,771) =
262.56, p<.001) and accounted for approximately 40% of the variance of
total points (Adj. R2 = .404). Total points were primarily predicted by graph
coloring, and to a lesser extent by knapsack points. Finally, at step 3 of the
analysis assignment switching, assignment submission and active days were
entered into the regression model which was statistically significant, F
(5,768) = 121.67, p < .001. This model accounted for approximately 44% of
the variance of total points (Adj. R2 = .438). Total points were primarily
predicted by graph coloring, and to a lesser extent by assignment
switching and knapsack points. The contributions of assignment
submissions and active days to this model were not significant.
Regression coefficients and other relevant statistics for each model are
presented in Table 4.
Table 1. Descriptive statistics and correlation matrix for the primary study variables, All Cases (n = 6,635)

| Variable | M | SD | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1. Knapsack points | 27.94 | 25.74 | .658** | .033** | .644** | .527** | .726** |
| 2. Graph coloring | 11.94 | 20.02 | | .054** | .809** | .718** | .891** |
| 3. Assignment submissions | 69.94 | 2340.79 | | | .051** | .030** | .042** |
| 4. Active days | 11.51 | 17.74 | | | | .738** | .860** |
| 5. Assignment switching | 2.31 | 4.43 | | | | | .833** |
| 6. Total points | 66.41 | 94.79 | | | | | |

Notes. ** p < .001.
4. DISCUSSION
This investigation considered how students’ prior knowledge in
computer science and problem solving, and their engagement within an
open MOOC curriculum, impacted on their MOOC performance.
The stepwise regression analyses conducted with all participants clearly
showed that prior knowledge in computer science was a key indicator of
success, but that the impact of prior knowledge of the content area was
suppressed somewhat by prior problem solving skills when it was
introduced into the regression model. These findings suggest that generic
problem solving skills are more important to students’ success than prior
knowledge in the content area.
The stepwise regression model indicated that while both these forms of
prior knowledge maintained their importance when participation in the
MOOC was considered, students’ propensity to exploit the open
curriculum structure by switching between assignment tasks in a non-
linear fashion was also important. The full model also indicated that
regular or persistent activity in the class was significantly related to
successful performance.
A second stepwise regression analysis was conducted to determine
whether the pattern of associations found in the first regression model
with all active students could be replicated with just those students who
passed the course. The results from this second analysis and sample
showed a pattern that was similar to the first regression analysis but
differed in two primary respects. The analysis undertaken with the
second sample was similar in that prior content knowledge and
problem solving skills still significantly predicted students’
performance, with problem solving skills still showing a stronger effect
than prior content knowledge. In addition, assignment switching
remained a significant predictor of success. However, in the second set
of regression analyses, the number of active days students spent on the
course was not a significant predictor, and the overall amount of
variance in the outcome explained was markedly lower than that seen in
the first model.
Prior knowledge, both content knowledge and problem solving skills,
had a very strong association with students’ performance, particularly
with the sample of all active students; prior knowledge variables alone
accounted for 83% of the variance in students’ performance. When
compared to the sample of “passing” students this was approximately
double the amount of variance explained in performance. This is
reflected in the full regression models for both samples: the proportion
of variance explained in students’ performance in the “passing” students
sample was markedly lower than that of the sample of all active students
(44% and 91% respectively).
A clear possible explanation for this finding is that measures of prior
knowledge are strong predictors of success in the cohort of “active”
students because this sample contains a larger number of learners who
attempt an assignment or two, but recognize they do not have the prior
knowledge and skills to complete the course, subsequently disengage,
and ultimately drop out. This would result in a large number of learners
being included in the sample with a low total points score (thereby
reducing the overall variance of this measure). If a large cohort of
students among the 6,635 are in this category it would explain why prior
knowledge measures account for such a high proportion of variance in
the outcome.
However, it is important to note that this explanation does not diminish
the influential role of prior knowledge in predicting students’
performance in the MOOC. The regression analysis undertaken with the
second sample, in which the variance in total points is higher, shows that
prior knowledge and skills are still very strong predictors of variance in
students’ learning outcomes.
While prior knowledge is an important variable in predicting success in
the “passing students” sample, there is a significant proportion of
variance left unexplained in this model. That is, additional factors must
be contributing to students’ success in the MOOC. It is reasonable to
conclude that prior knowledge and skills are a necessary but not
sufficient condition in predicting students’ MOOC completion and
success.
The results presented indicate that after measures of prior knowledge,
students’ ability to exploit the open curriculum structure was a
significant factor in their ultimate success. Students who were inclined
to return to, revisit and revise their assignments, encouraged by the
design of the learning tasks and the open curriculum structure, were
more likely to perform well.
Interestingly, while active days had a significant association with
performance for the sample of all participants, it was not predictive of
success for those students who passed the course. While the argument
based on the difference between the two samples may go some way in
explaining this, it is interesting to reflect on why active days was not a
significant predictor for passing students. It seems likely that students
who passed the course were completing it at different rates: some
students finished the course quickly, in as few as two weeks, while
others used the full nine weeks. This seems to be a possible explanation
for why “degree of activity” did not predict passing students’ end-of-
course performance.
The descriptive statistics indicate learners submitted each assignment
many more times than would have been expected in a linear course
design (4.7 submissions of each assignment, on average).
Table 2. Stepwise Regression Results, All Cases (n = 6,635)

| Model | Predictor | b | SE-b | Beta | Pearson r | sr2 | Struct. Coef. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Constant | -8.328 | 1.181 | | | | |
| | Knapsack points** | 2.675 | .031 | .726 | .726 | .527 | 1.000 |
| 2 | Constant | -.193 | .716 | | | | |
| | Knapsack points** | .910 | .025 | .247 | .726 | .035 | .038 |
| | Graph coloring** | 3.448 | .032 | .728 | .891 | .300 | .330 |
| 3 | Constant | -1.564 | .525 | | | | |
| | Knapsack points** | .651 | .019 | .177 | .726 | .017 | .762 |
| | Graph coloring** | 1.819 | .033 | .384 | .891 | .042 | .935 |
| | Assign. subm. | .000 | .000 | -.005 | .042 | .002 | .044 |
| | Active days** | 1.090 | .038 | .204 | .860 | .012 | .902 |
| | Assign. switch.** | 6.728 | .124 | .314 | .833 | .041 | .874 |

Notes. The dependent variable was total points. sr2 is the squared semi-partial correlation. Assign. subm. = Assignment submissions. Assign. switch. = Assignment switching. Struct. Coef. = Structure Coefficient. ** p < .001.
However, the findings showed that this high rate of assignment
submission did not impact on students’ performance in the course for
either sample. It seems that it is not the number of submissions, but
general activity or engagement in the course and, more importantly, the
degree to which the learner is prepared to move between assignments
within the course, that is associated with course success.
A clear implication of the findings from this paper is that it may be
useful early on in a MOOC to provide students with diagnostic
measures of prior content knowledge and learning skills such as
problem solving, as this would provide learners with an indication of
their pre-existing competency to complete and succeed in the course.
Such measures would be useful not only for the student but also for
staff who are teaching the course, both in terms of setting expectations
and determining which students are encountering difficulty early on. As
we have suggested in our earlier work [2], such measures may provide
scope for teaching staff and MOOC developers to intervene early in a
student’s engagement with a MOOC and direct them to alternative
and/or supplementary learning resources.
5. ACKNOWLEDGMENTS
The authors acknowledge the support of the Learning Analytics
Research Group at the University of Melbourne, and NICTA, which is
funded by the Australian Government through the Department of
Communications and the Australian Research Council through the ICT
Centre of Excellence Program.
6. REFERENCES
[1] Siemens, G., and Long, P. 2011. Penetrating the fog: Analytics in
learning and education. Educause Review, 46(5), 30-32.
[2] Coffrin, C., Corrin, L., de Barba, P., and Kennedy, G. 2014.
Visualizing patterns of student engagement and performance in
MOOCs. In Proceedings of the Fourth International Conference on
Learning Analytics And Knowledge (Indianapolis, USA, March 24-
28, 2014). ACM, New York, NY, 83-92.
[3] Daniel, J. 2012. Making sense of MOOCs: Musings in a maze of
myth, paradox and possibility. Journal of Interactive Media in
Education, 3.
[4] Piaget, J. 1973. To understand is to invent: The future of education.
New York: Grossman.
[5] Bransford, J. D., Brown, A. L., and Cocking, R. R. 1999. How
people learn: Brain, mind, experience, and school. National
Academy Press.
[6] Hawkins, D. 1994. Constructivism: Some history. In P.J. Fensham,
R.F. Gunstone & R.T. White (Eds), The content of science: A
constructivist approach to its teaching and learning (pp. 9-13).
London: Falmer.
[7] Pintrich, P. R., Smith, D., García, T., and McKeachie, W. 1991. A
manual for the use of the Motivated Strategies for Learning
Questionnaire (MSLQ). Ann Arbor. Michigan, 48109, 1259.
[8] Weinstein, C. E. and Mayer, R. E. 1986. The teaching of learning
strategies. In M. C. Wittrock (Ed.) Handbook of research on
teaching (3rd Ed.) (pp. 315-327). New York: Macmillan.
[9] Zimmerman, B. J., and Schunk, D. H. (Eds.). 2011. Handbook of
self-regulation of learning and performance. Taylor & Francis.
[10] Siemens, G. 2005. Connectivism: A learning theory for the digital
age. International Journal of Instructional Technology and Distance
Learning, 2(1), 3-10.
[11] Van Hentenryck, P., and Coffrin, C. 2014. Teaching creative
problem solving in a MOOC. In Proceedings of the 45th ACM
Technical Symposium on Computer Science Education (Atlanta, GA,
USA, March 5-8, 2014). ACM, New York, NY, 677-682.
Table 3. Descriptive statistics and correlation matrix for the primary study variables, Only Passed Cases (n = 774)

| Variable | M | SD | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1. Knapsack points | 58.93 | 2.98 | .373** | -.054 | .014 | .005 | .343** |
| 2. Graph coloring | 50.57 | 7.78 | | .078* | .033 | .154** | .625** |
| 3. Assignment submissions | 206.45 | 154.40 | | | .085* | .373** | .108** |
| 4. Active days | 47.23 | 11.64 | | | | .120** | .062 |
| 5. Assignment switching | 11.76 | 6.87 | | | | | .278** |
| 6. Total points | 295.50 | 47.67 | | | | | |

Notes. ** p < .001, * p < .05.

Table 4. Stepwise Regression Results, Only Passed Cases (n = 774)

| Model | Predictor | b | SE-b | Beta | Pearson r | sr2 | Struct. Coef. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Constant | -27.917 | 31.928 | | | | |
| | Knapsack points** | 5.488 | .541 | .343 | .343 | .118 | 1.000 |
| 2 | Constant | -3.690 | 26.261 | | | | |
| | Knapsack points** | 2.039 | .479 | .127 | .343 | .014 | .538 |
| | Graph coloring** | 3.540 | .183 | .578 | .625 | .287 | .981 |
| 3 | Constant | -23.409 | 26.118 | | | | |
| | Knapsack points** | 2.226 | .467 | .139 | .343 | .016 | .192 |
| | Graph coloring** | 3.329 | .181 | .543 | .625 | .247 | .747 |
| | Assign. subm. | .000 | .009 | .000 | .108 | .000 | .000 |
| | Active days | .079 | .111 | .019 | .062 | .000 | .029 |
| | Assign. switch.** | 1.327 | .204 | .191 | .278 | .031 | .263 |

Notes. The dependent variable was total points. sr2 is the squared semi-partial correlation. Assign. subm. = Assignment submissions. Assign. switch. = Assignment switching. Struct. Coef. = Structure Coefficient. ** p < .001.