Massive Open Online Course Completion Rates Revisited: Assessment, Length And Attrition


Jordan, K. (2015) Massive open online course completion rates revisited: Assessment, length and attrition. The International Review of Research in Open and Distributed Learning 16(3), 341-358.
Katy Jordan
The Open University, UK
Abstract
This analysis is based upon enrolment and completion data collected for a total of 221 Massive
Open Online Courses (MOOCs). It extends previously reported work (Jordan, 2014) with an
expanded dataset, adding a multiple regression analysis of factors that affect completion rates
and an analysis of attrition rates during courses. Completion
rates (defined as the percentage of enrolled students who completed the course) vary from 0.7%
to 52.1%, with a median value of 12.6%. Since their inception, enrolments on MOOCs have fallen
while completion rates have increased. Completion rates vary significantly according to course
length (longer courses having lower completion rates), start date (more recent courses having
higher percentage completion) and assessment type (courses using auto grading only having
higher completion rates). For a sub-sample of courses where rates of active use and assessment
submission across the course are available, the first and second weeks appear to be critical in
achieving student engagement, after which the proportion of active students and those submitting
assessments levels out, with less than 3% difference between them.
Keywords: Distance education; open learning; online learning; massive open online courses
(MOOCs)
This work is licensed under a Creative Commons Attribution 4.0 International License.
Introduction
Since Massive Open Online Courses (MOOCs) became mainstream in 2012, completion rates
have been a controversial topic. While six-figure enrolment figures have garnered intense media
attention, critics have highlighted that few students complete courses relative to more formal
modes of learning. Counter to this, others have argued that the emancipatory effect of free online
access to education allows students to take what they need from MOOCs to meet their own
learning goals without formally completing courses, so examining completion rates is potentially
misleading (LeBar, 2014). This study takes a perspective which acknowledges that while
completing courses is not the only way of benefitting from participation in a MOOC, it is better to
try to understand the factors that affect completion rates and any implications for course design
than to ignore them.
In the literature on MOOCs there is a lack of peer-reviewed research publications which draw
upon more than a small number of courses restricted to single institutions, and the need for meta-
analysis independent of MOOC platform providers is a key issue for the field at present. For
example, Kizilcec, Piech and Schneider (2013) identified learner populations based on analysis of
three early Coursera MOOCs; replication of this analytical approach on data from the Futurelearn
platform identified different groups (Ferguson & Clow, 2015). Enrolment and completion figures
are the type of data that is most widely publicly available for analysis across the field, which is
necessary to ensure that conclusions are generalisable and not particular to a small number of
detailed cases.
Understanding the factors which affect completion rate can be approached from the perspective
of learners (their characteristics and reasons for participating) or from the perspective of
course design. Greater focus to date has been on the motivations and behaviours of students in relation
to success (for example, Breslow et al., 2013; Kizilcec et al., 2013; Koller, Ng, Do & Chen, 2013;
Rosé et al., 2014). However, studies have suggested that those most likely to succeed in MOOCs
are the students who are already most educationally privileged (Emanuel, 2013; Koller & Ng,
2013). Addressing MOOC completion rates from a course design perspective, in order to help as
many of the diverse learners who wish to complete to do so, is a pedagogical issue and a challenge for
course designers and instructors. In order for MOOCs to realise their potential in making
education open to all, it is not sufficient to simply make pre-existing course materials freely
available online. Completion rates are relatively low even among students who intend to complete
the course (an average of 22%; Reich, 2014), so for those students who intend to complete courses
or engage with the course as designed, not considering completion rates prevents exploration of
what can be done by educators to facilitate further student success.
This paper reports work undertaken to extend a previous study on initial trends in MOOC
completion rate (Jordan, 2014), which remains one of the largest MOOC studies in terms of
number of courses included. The dataset is expanded (to include more recent data, and not
restricted to the major MOOC providers) and multiple regression analysis is used to explore the
combined effects of a range of basic factors in correlation to completion rate. Additionally,
quantitative measures of engagement across the course of a MOOC are examined in instances
where this data is available.
Data Collection and Analysis
The approach to data collection built upon an initial dataset (reported in Jordan, 2014) which
combined enrolment and completion figures from news stories, MOOCs the author had taken as a
participant, and crowdsourced figures submitted by other students and MOOC instructors via a
blog. In order to expand the dataset, the author compiled a list of completed MOOCs not present
in the original dataset (based on information from MOOC-aggregating websites such as
https://www.class-central.com/) and performed a series of Internet searches to find sources
containing enrolment and completion information for the courses. Information was found and
included relating to a total of 221 MOOCs. Information about 35.3% of the courses was located in
news articles; 33.6% were sourced from academic reports and articles; 14.5% from instructors’
social media; 9.9% from students’ social media; and 6.8% from course sites. Further information
about courses, including course length and assessment type, was gathered via signing up to the
courses, asking participants or consulting MOOC-aggregating websites. Information about
university reputation was included based on the scores used by the Times Higher Education
World University Rankings (Times Higher Education, 2013). No further courses were added to
the dataset after November 2013. To access the full dataset, including links to each source and
further recent additions, see the online data visualization (Jordan, 2015). The dataset can be
summarised as follows:
- A total of 221 courses were included in the dataset. Within this sample, enrolment figures
were available for 220 courses; completion figures were available for 129 courses; and
figures relating to engagement over the course of a MOOC were available for 59 courses.
The two most common definitions of engagement across the duration of courses used by
the sources were the number of students accessing resources or the number completing assignments.
- Courses from a range of different MOOC providers were included. Coursera (120 courses)
and Open2Study (43 courses) were the best represented platforms, although courses from
12 other providers and 19 independent courses were also included. A total of 78
institutions were present in the dataset.
- A variety of different definitions of completion are in operation. Of the 129 courses for
which completion data was available, the most prevalent definition of completion was
earning a certificate (93 courses). Other definitions (as used by their data sources)
included ‘completed course’ (14 courses), ‘passed course’ (10 courses), ‘completed
assignments’ (6 courses), ‘memorably active participants’, achieving a ‘strong final score’,
active contributors at end of course, certificates purchased, ‘kept up’ with whole course,
or ‘took final exam’ (1 course each).
- In addition to enrolment and completion figures, other data collected about courses
included start date, length of course (in weeks), and assessment type used. Assessment
type was categorised using three basic categories: auto grading only (92 courses), peer
grading only (10 courses), or a combination of both auto and peer grading (23 courses).
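The headline counts in this summary are straightforward to reproduce once the dataset is in tabular form. The following is a minimal sketch, not the author's actual analysis code; it assumes a hypothetical CSV export with illustrative column names (enrolled, percent_completed, provider, completion_definition, assessment_type), none of which are taken from the published data visualisation.

```python
import pandas as pd

# Hypothetical export of the 221-course dataset; the file name and all
# column names are assumptions for illustration only.
df = pd.read_csv("mooc_completion_dataset.csv")

print(len(df))                                     # total courses (221)
print(df["enrolled"].notna().sum())                # courses with enrolment figures (220)
print(df["percent_completed"].notna().sum())       # courses with completion figures (129)
print(df["provider"].value_counts())               # e.g. Coursera 120, Open2Study 43, ...
print(df["completion_definition"].value_counts())  # e.g. earned a certificate 93, ...
print(df["assessment_type"].value_counts())        # auto 92, peer 10, auto+peer 23
```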
Three types of analysis were applied to the dataset. Since the expanded dataset encompassed a
wider time period, linear regression analysis was used to gain an overview of whether MOOC
enrolments and completion rates were changing over time.
Multiple regression analysis was then used with a sub-sample of courses to explore whether
MOOC completion rates are significantly correlated with a range of factors. Multiple regression is
“an extension of simple regression in which an outcome is predicted by two or more predictor
variables” (Field, 2009, p.790). As an analytical approach, multiple regression offers the
advantage of being able to examine the relationship between multiple variables upon an outcome
(in this case, completion rate). The factors examined included assessment type, course length,
date, and university ranking score. While course length, date and university ranking had
previously been examined individually (Jordan, 2014), the larger dataset offered the opportunity
to consider the factors together. Platform and MOOC type were not included due to wide
variation in sample sizes. Note that full information about all of the factors was not available for
every course in the sample, so not all of the courses in the dataset were used in the regression
analysis (see results and discussion). The statistical analyses were undertaken using SPSS (Field,
2009).
The third type of analysis focused upon a smaller sample of courses for which data was available
about number of students participating week-by-week across the course of live MOOCs. In some
cases, raw data was not available and required extraction from charts using software (Rohatgi,
2014). This extends the original finding that approximately fifty percent of potential MOOC
students who sign up go on to become active users (Jordan, 2014) and examines how this trend
progresses. Participation is defined in two ways by data sources: either the number of students
viewing course materials, or the number submitting assignments. To allow comparison, these
values were expressed as a percentage of the total enrolment for each course. The resulting curves
were compared visually and average curves constructed using mean values.
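As a rough sketch of this normalisation step (again with assumed column names: course, week, enrolled, active_students and submitted_assignments in a long-format table, one row per course per week), expressing weekly counts as percentages of enrolment and averaging across courses could look like this:

```python
import pandas as pd

# Long-format weekly engagement data; one row per course per week.
# File and column names are assumptions for illustration.
weekly = pd.read_csv("weekly_engagement.csv")

# Express weekly counts as a percentage of each course's total enrolment,
# so that courses of very different sizes can be compared directly.
weekly["pct_active"] = 100 * weekly["active_students"] / weekly["enrolled"]
weekly["pct_submitting"] = 100 * weekly["submitted_assignments"] / weekly["enrolled"]

# Average curves across courses (cf. Figures 5 and 6), and the gap between
# the two engagement measures per week (cf. Figure 7).
mean_active = weekly.groupby("week")["pct_active"].mean()
mean_submitting = weekly.groupby("week")["pct_submitting"].mean()
print(mean_active - mean_submitting)
```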
Results
The first analyses focused upon whether enrolments and completion rates appeared to be
changing over time. Regression analysis was performed to examine the extent to which course
start date predicts the number of students enrolled (figure 1), and the extent to which course start
date predicts the percentage of students that complete the course (figure 2). Prior to both
analyses, a Box-Cox transformation was applied as the residuals were not normally distributed.
Date significantly predicted total enrolment figures by the following formula:
Enrolled^0.180902 = 66.3311 - 0.00147092 × Date (n = 219, R² = 0.0252, p = 0.019). Note that the
correlation here is negative; as time has progressed, the size of the average MOOC has decreased.
Date also significantly predicted completion rate by the following formula:
PercentCompleted^0.5 = -152.428 + 0.00377601 × Date (n = 129, R² = 0.1457, p < 0.001). In
contrast, this represented a positive correlation, so while total enrolments have decreased,
completion rates have increased over time.
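These two models can be reproduced in outline with scipy: estimate a Box-Cox transformation for the response variable, then regress the transformed values on course start date. The sketch below uses randomly generated stand-in data, since the point is the procedure rather than the published coefficients; note also that scipy's boxcox uses the (y^λ - 1)/λ form of the transform, whereas the formulas above are written in the simple power form y^λ (the exponent reported above is 0.180902).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-ins for the real data (the published model used n = 219 courses):
# start dates as day numbers, and positive enrolment counts.
date = rng.uniform(40000, 41500, size=219)
enrolled = rng.lognormal(mean=10.0, sigma=1.0, size=219)

# Box-Cox transform the response so the residuals are closer to normal;
# lmbda is estimated by maximum likelihood when not supplied.
enrolled_bc, lmbda = stats.boxcox(enrolled)

# Simple linear regression of the transformed enrolments on start date.
res = stats.linregress(date, enrolled_bc)
print(lmbda, res.slope, res.intercept, res.rvalue**2, res.pvalue)

# Predictions on the original scale invert the transform:
# enrolled_hat = (lmbda * y_hat + 1) ** (1 / lmbda)
```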
Figure 1. Scatterplot of MOOC course enrolments plotted against course start date (n=219).
Figure 2. Scatterplot of percentage of students who completed courses plotted against course start
date (n=129).
Multiple Regression Analysis
A multiple regression analysis was carried out in order to examine the combined effects of the
factors upon completion rate, which had previously been examined individually (Jordan, 2014).
An initial run of the analysis included the following factors: University ranking score, start date,
course length (in weeks), total number of students enrolled, and assessment type. Assessment
type was a categorical variable comprising three categories (‘auto grading only’, ‘peer grading
only’, and ‘auto and peer grading’). The start date variable was defined as the date that each
course formally launched; for the purposes of the analysis, this information was converted to the
Lilian date format. The result of this initial analysis showed that while the model explained a
significant amount of the variance in completion rate (F(6, 53) = 13.98, p < .05, R² = .61,
adjusted R² = .57), not all of the factors significantly predicted completion rate. In light of this, university
ranking and total enrolment were excluded from the analysis, and assessment types recoded into
two categories based on whether or not peer grading was used at all.
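A sketch of this reduced model in statsmodels, under the same hypothetical column names as earlier, with the peer-grading recoding and the Lilian date conversion (days elapsed since 14 October 1582, so that day 1 is 15 October 1582) shown explicitly:

```python
from datetime import date

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mooc_completion_dataset.csv")  # hypothetical export, as above

# Lilian date: days since 14 October 1582 (assumes ISO-formatted start dates).
lilian_epoch = date(1582, 10, 14)
df["lilian_date"] = [(date.fromisoformat(d) - lilian_epoch).days
                     for d in df["start_date"]]

# Recode assessment type to a binary flag: any use of peer grading at all.
df["peer_grading"] = df["assessment_type"].isin(
    ["peer grading only", "auto and peer grading"]).astype(int)

# Enter method: all three predictors entered into the model simultaneously.
model = smf.ols(
    "percent_completed ~ lilian_date + length_weeks + peer_grading",
    data=df).fit()
print(model.summary())  # F, R-squared and coefficients (cf. Tables 1-3)
```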
The analysis proceeded to examine the extent to which start date, course length and use of peer
grading predicted completion rate. An analysis of standard residuals was carried out, which
showed that the data contained no outliers (Std. Residual Min = -2.56, Std. Residual Max = 2.07).
Tests to see if the data met the assumption of collinearity indicated that multicollinearity was not
a concern (course length, Tolerance = .96, VIF = 1.04; start date, Tolerance = .99, VIF = 1.01; use
of peer grading, Tolerance = .96, VIF = 1.05). The data met the assumption of independent errors
(Durbin-Watson value = 1.08). The histogram of standardised residuals indicated that the data
contained approximately normally distributed errors, as did the normal P-P plot of standardised
residuals. The scatterplot of standardised predicted values showed that the data met the
assumption of linearity but may have violated the assumption of homoscedasticity. Note that
violating this assumption does not invalidate the analysis, which is accurate based upon the
sample used, but reduces the likelihood that the model generalises to the population (this would
not be certain, but more likely, if all the assumptions were met; Field, 2009). The
heteroscedasticity observed is not severe and is most likely caused by the presence of significant
variables that are not included in the model. Given the opportunistic nature of the data collection
it would be surprising if all significant variables had been identified. However, this does not
detract from the significant factors that have been determined. The data also met the assumption
of non-zero variances (course length, Variance = 13.58; start date, Variance = 15480621.9; use of
peer grading, Variance = .195).
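The diagnostics reported in this paragraph are available off the shelf in statsmodels. The following sketch runs them on synthetic data whose coefficients loosely echo Table 3, purely to show where each statistic comes from; none of the generated values are the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)

# Synthetic predictors standing in for the n = 121 regression sample.
X = pd.DataFrame({
    "lilian_date": rng.uniform(156800, 157900, 121),  # roughly 2012-2015
    "length_weeks": rng.integers(1, 15, 121).astype(float),
    "peer_grading": rng.integers(0, 2, 121).astype(float),
})
y = (220.0 - 0.001 * X["lilian_date"] - 1.75 * X["length_weeks"]
     - 9.6 * X["peer_grading"] + rng.normal(0, 6.9, 121))

exog = sm.add_constant(X)
fit = sm.OLS(y, exog).fit()

# Multicollinearity: one VIF per predictor (constant column excluded).
for i, name in enumerate(exog.columns[1:], start=1):
    print(name, variance_inflation_factor(exog.values, i))

# Independence of errors, and an approximate standardised-residual screen.
print("Durbin-Watson:", durbin_watson(fit.resid))
std_resid = fit.resid / fit.resid.std()
print("Std. residual range:", std_resid.min(), std_resid.max())
```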
Using the enter method it was found that course length, start date and use of peer grading explain
a significant amount of the variance in the completion rate (F(3, 117) = 57.7, p < .05, R² = .60,
adjusted R² = .59). The model summary, ANOVA table and coefficients table from the analysis are
shown in tables 1, 2 and 3, respectively.
Table 1

Model Summary

Model | R Square | Adjusted R Square | Std. Error of the Estimate | Durbin-Watson
1     | .597     | .587              | 6.906                      | 1.084

a. Predictors: (Constant), PeerGrading, DateISO, Length
b. Dependent Variable: PercentCompleted
Table 2

ANOVA Table

Model 1    | Sum of Squares | df  | Mean Square | F      | Sig.
Regression | 8260.829       | 3   | 2753.610    | 57.741 | .000
Residual   | 5579.573       | 117 | 47.689      |        |
Total      | 13840.402      | 120 |             |        |

a. Dependent Variable: PercentCompleted
b. Predictors: (Constant), PeerGrading, DateISO, Length
Table 3

Coefficients Table

Model 1     | B      | Std. Error | Beta  | t      | Sig. | Tolerance | VIF
(Constant)  | 57.380 | 5.029      |       | 11.409 | .000 |           |
DateISO     | -.001  | .000       | -.343 | -5.816 | .000 | .991      | 1.009
Length      | -1.751 | .209       | -.503 | -8.391 | .000 | .958      | 1.043
PeerGrading | -9.606 | 1.472      | -.392 | -6.527 | .000 | .955      | 1.047

(B and Std. Error are unstandardised coefficients; Beta is the standardised coefficient; Tolerance and VIF are collinearity statistics.)
a. Dependent Variable: PercentCompleted
The correlation between start date and completion rate was positive, in that completion rates
increased over time. The other factors, course length and whether or not peer grading was used,
were both negatively correlated with completion rate (Figures 3 and 4), with longer courses and those which use peer
grading having lower completion rates than shorter or auto-graded courses.
Figure 3. Completion rate plotted against course length in weeks.
Figure 4. Boxplots showing distributions of completion rates according to different assessment
types.
This categorization is not elaborate and, with an R² of 60%, a reasonable proportion of the
variance remains unexplained. It does, however, demonstrate that it is possible to gain insights
into the impact of learning design decisions by considering completion rate. Availability of data is
an obstacle to further detailed analyses (such as considering the use of exams, different forms of
auto and peer grading, or formative versus summative assessment) at this stage.
Attrition Rates During Live Courses
The previous study considered the conversion rate between students who enrol and then go on to
become active in courses, by accessing course materials or logging in to the course site (Jordan,
2014). This part of the analysis sought to extend this by considering the levels of use week-by-
week during live courses, for MOOCs where this level of detail is available. The data collected by
courses only focused upon levels of use during the periods which the course was active.
Participation was defined in two ways; either by the number accessing course materials (‘active
students’), or the number who submitted assignments. Data about active students was available
for 59 courses (figure 5), and those submitting assignments in 54 courses (figure 6). The sample
included data from a range of courses and platforms, including 43 Open2Study courses
(Open2Study, 2013), 17 Coursera courses (Belanger, 2013; Duke University, 2012; Grainger,
2013; University of Edinburgh, 2013; Severance, 2012); one edX course (Breslow et al., 2013);
and two platform-independent courses (Cross, 2013; Weller, 2013). Note that the majority of
courses included in figures 5, 6 and 7 were four weeks long due to the open availability of data
from the Open2Study platform, upon which all courses are four weeks in duration.
Figure 5. Proportion of active students (accessing course materials) per week since start of course
as a percentage of total enrolment (n=59).
Figure 6. Proportion of students submitting assessments per week since start of courses as a
percentage of total enrolment (n=54).
The curves shown in Figures 5 and 6 are notable in two main ways. First, while the sample
contains a variety of different types of MOOCs (different platforms, institutions, and modes of
teaching and assessment are present), the curves follow a similar overall trend. Around half of a
MOOC's enrolled students will not show up, and the first two weeks of a course appear to be
critical in gaining student engagement. Second, after the first two weeks, there is little difference
between the two measures of engagement: those accessing course materials and those
submitting assignments. The difference between the two measures is shown in Figure 7. After
week 3, the difference is less than five percent for all but two courses. This calls into question the
extent to which ‘lurking’ (that is, selectively accessing course materials but not actively
participating in assessments) is being used as a participation strategy, underlining the need for
further research in this area.
Figure 7. Difference between percentage of active students and percentage of students submitting
assignments per week since the start of courses as a percentage of total enrolment (n=50).
Note that since this work was undertaken, a similar study has been published based on rich,
detailed data from 16 Coursera-based MOOCs provided by the University of Pennsylvania (Perna,
Ruby, Boruch, Wang, Scull, Ahmad & Evans, 2014). This study corroborates the findings here in
that the initial weeks of courses are key for students' engagement. Over the course of the MOOCs
in the Perna et al. (2014) sample, broadly similar attrition curves are demonstrated, and the gap
between students accessing materials and taking assignments narrows over time.
Conclusions
The multiple regression analysis highlights that it may be possible to gain insights into the
impacts of different aspects of MOOC course design by considering completion rate across a large
sample of courses. The results here may be useful for educators to consider when designing
MOOCs, although there are limitations to this study and further research would be valuable.
Factors that significantly predicted completion rate included start date, course length and
assessment type. Completion rates were positively correlated with start date; that is, more recent
courses demonstrate higher percentage completion. This is likely due to a decrease in average
total enrolments over time, but may also reflect feedback and iterative design of courses.
On the basis of the negative correlation with course length, coupled with the attrition observed in
the initial weeks of courses, a case could be made for shorter, more modular courses. Greater
signposting would be required between courses for those students looking to create a more
substantial programme of learning. Shorter courses with better guidance about how they could be
combined could also benefit those students who prefer to direct their own learning by making it
easier to find the parts of a course that they value; this would also allow for these students’ MOOC
achievements to be recognised. Modularisation for MOOCs has already been suggested by some
(for example, Bol cited in Harvard Magazine, 2013; Challen & Seltzer, 2014); the evidence here
provides an empirical rationale for such developments, and further research would be valuable to
examine the effects in practice. Note that, in contrast to this finding, Perna et al. (2014) reported
no relationship between course length and completion rate. Given the similarity between the
attrition curves reported by Perna et al. and those presented here, it is likely that the lack of a
negative correlation is due to small sample size (16 courses from a single institution and platform,
several of which are included in the dataset here).
The negative correlation between use of peer grading for assessments and completion rate
suggests that course designers should carefully consider whether to use this as an assessment
mechanism, or whether automated assessments would meet their educational goals. For example,
in the case of peer grading short essays using a rubric based on factual recall, similar results could
be achieved using multiple choice questions. In contrast, larger, project-based peer graded
assessments may yield more significant learning gains for the students who do complete them, as
they are arguably more demanding and there is an aspect of vicarious learning from assessing
others, though the quality of this also requires empirical verification. Further research into the
use of peer grading would be valuable to investigate the reasons behind this finding. For example,
it could be hypothesised that the lower completion rate in courses using peer grading may be due
to peer assessments being more rigorous, to disengagement while waiting for feedback, or to
reasons why students choose not to attempt them (such as proficiency in English, for
example). Another possible factor influencing increased attrition in peer graded MOOCs may be
disengagement from students from minority cultural backgrounds, as MOOC students assessing
their peers have been shown to give higher marks to students from their own country (Kulkarni,
Koh, Le, Papadopoulos, Cheng, Koller & Klemmer, 2013). A better understanding of these issues
is needed to clarify the circumstances in which peer grading could be recommended.
While this study provides some insights into the potential impact of learning design decisions
upon completion rates, it does have its limitations and further empirical work is required. The
principal limitation of this work is the availability of data from courses. This sample only reflects
courses for which data is publicly available; a more detailed picture would be possible if further
data was made available. The definition of completion rate as a percentage of enrolled students
may be over-simplistic and subject to wide variations in enrolments (Ho et al., 2014). More
nuanced definitions have been called for to reflect the numerous ways students may interact with
MOOCs (DeBoer et al., 2014). However, the fact remains that total enrolments and certificate-
earning completers are the statistics most frequently present in the public domain. As the body of
academic literature related to MOOCs grows, the potential for more detailed and robust
meta-analysis is likely to increase in the future.
Acknowledgments
This research was supported by a grant from the MOOC Research Initiative, funded by the Gates
Foundation.
References
Belanger, Y. (2013). IntroAstro: An intense experience. Duke University Libraries. Retrieved from
http://hdl.handle.net/10161/6679
Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A.D., & Seaton, D. T. (2013). Studying
learning in the worldwide classroom: Research into edX’s first MOOC. Research and
Practice in Assessment, 8, 13-25.
Challen, G. & Seltzer, M. (2014) Enabling MOOC collaborations through modularity. Proceedings
of the 2014 Learning with MOOCs Practitioner’s Workshop. Retrieved from
http://blue.cse.buffalo.edu/papers/lwmoocs2014-mmoocs/
Cross, S. (2013). Evaluation of the OLDS MOOC curriculum design course: participant
perspectives, expectations and experiences. OLDS MOOC Project, Milton Keynes.
Retrieved from http://oro.open.ac.uk/37836/
DeBoer, J., Ho, A., Stump, G., & Breslow, L. (2014). Changing "course": Reconceptualizing
educational variables for massive open online courses. Educational Researcher, 43(2),
74-84.
Duke University (2012) Introduction to Genetics and Evolution, preliminary report. Duke Today.
Retrieved from http://today.duke.edu/node/93914
Emanuel, E. J. (2013). Online education: MOOCs taken by educated few. Nature, 503, 342.
Retrieved from http://dx.doi.org/10.1038/503342a
Ferguson, R. & Clow, D. (2015) Examining engagement: analysing learner subpopulations in
massive open online courses (MOOCs). In 5th International Learning Analytics and
Knowledge Conference (LAK15), 16-20 March 2015, Poughkeepsie, NY, USA, ACM.
Field, A. (2009) Discovering statistics using SPSS, 3rd ed. London: SAGE.
Grainger, B. (2013) Overview statistics for the International Programmes’ Coursera MOOCs.
University of London International Academy. Retrieved from
http://www.londoninternational.ac.uk/sites/default/files/governance/ltas13/ltas13.3_mooc_statistics.pdf
Harvard Magazine (2013) What modularity means for MOOCs. Harvard Magazine, 5th December
2013. Retrieved from
http://harvardmagazine.com/2013/12/harvard-mit-online-education-views-changing
Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J., & Chuang, I. (2014).
HarvardX and MITx: The first year of open online courses (HarvardX and MITx Working
Paper No. 1). Retrieved from:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2381263
Jordan, K. (2014) Initial trends in enrolment and completion of massive open online courses. The
International Review of Research in Open and Distance Learning, 15(1), 133-160.
Jordan, K. (2015) MOOC completion rates: The data. Retrieved from:
http://www.katyjordan.com/MOOCproject.html
Kizilcec, R. F., Piech, C., & Schneider, E. (2013). Deconstructing disengagement: Analyzing
learner subpopulations in massive open online courses. Third International Conference
on Learning Analytics and Knowledge, LAK ’13 Leuven, Belgium.
Koller, D., & Ng, A. (2013). The online revolution: Education for everyone. Seminar presentation
at the Said Business School, Oxford University, 28th January 2013. Retrieved from
http://www.youtube.com/watch?v=mQ-K-sOW4fU&feature=youtu.be
Koller, D., Ng, A., Do, C., & Chen, Z. (2013). Retention and intention in massive open online
courses: In depth. Educause Review. Retrieved from
http://www.educause.edu/ero/article/retention-and-intention-massive-open-online-courses-depth-0
Kulkarni, C., Koh, P. W., Le, H., Chia, D., Papadopoulos, K., Cheng, J., Koller, D., & Klemmer,
S.R. (2013). Peer and self-assessment in massive online classes. ACM Transactions on
Computer-Human Interaction, 9(4), Article 39.
LeBar, M. (2014). MOOCs completion is not important. Forbes. Retrieved from
http://www.forbes.com/sites/ccap/2014/09/16/moocs-finishing-is-not-the-important-part/
Open2Study (2013) Open2study Research Report, September 2013. Retrieved from
https://www.open2study.com/research/download/417
Perna, L.W., Ruby, A., Boruch, R.F., Wang, N., Scull, J., Ahmad, S. & Evans, C. (2014) Moving
through MOOCs: Understanding the progression of users in Massive Open Online
Courses. Educational Researcher 43(9), 421-432.
Reich, J. (2014) MOOC completion and retention in the context of student intent. Educause
Review. Retrieved from
http://www.educause.edu/ero/article/mooc-completion-and-retention-context-student-intent
Rohatgi, A. (2014) WebPlotDigitizer software tool. Retrieved from
http://arohatgi.info/WebPlotDigitizer/
Rosé, C. P., Carlson, R., Yang, D., Wen, M., Resnick, L., Goldman, P., & Sherer, J. (2014). Social
factors that contribute to attrition in MOOCs. In Proceedings of the First ACM Conference
on Learning @ Scale (pp. 197-198). ACM.
Severance, C. (2012) Internet history, technology and security (IHTS). Retrieved from
http://www.slideshare.net/fullscreen/csev/internet-history-technology-and-security-grand-finale-lecture-20121001/7
Times Higher Education. (2013). World University Rankings 2013-2014. Retrieved from
http://www.timeshighereducation.co.uk/world-university-rankings/
University of Edinburgh. (2013). MOOCs @ Edinburgh 2013 - Report #1. Retrieved from
http://hdl.handle.net/1842/6683
Weller, M. (2013). H817Open reflections. The Ed Techie blog. Retrieved from
http://nogoodreason.typepad.co.uk/no_good_reason/h817open/