International Review of Research in Open and Distributed Learning
Volume 16, Number 3
June 2015
Massive Open Online Course Completion Rates
Revisited: Assessment, Length and Attrition
Katy Jordan
The Open University, UK
Abstract
This analysis is based upon enrolment and completion data collected for a total of 221 Massive
Open Online Courses (MOOCs). It extends previously reported work (Jordan, 2014) with an
expanded dataset; the original work is extended to include a multiple regression analysis of
factors that affect completion rates and analysis of attrition rates during courses. Completion
rates (defined as the percentage of enrolled students who completed the course) vary from 0.7%
to 52.1%, with a median value of 12.6%. Since their inception, enrolments on MOOCs have fallen
while completion rates have increased. Completion rates vary significantly according to course
length (longer courses having lower completion rates), start date (more recent courses having
higher percentage completion) and assessment type (courses using auto grading only having
higher completion rates). For a sub-sample of courses where rates of active use and assessment
submission across the course are available, the first and second weeks appear to be critical in
achieving student engagement, after which the proportion of active students and those submitting
assessments levels out, with less than 3% difference between them.
Keywords: Distance education; open learning; online learning; massive open online courses
(MOOCs)
This work is licensed under a Creative Commons Attribution 4.0 International License.
Introduction
Since Massive Open Online Courses (MOOCs) became mainstream in 2012, completion rates
have been a controversial topic. While six-figure enrolment figures have garnered intense media
attention, critics have highlighted that few students complete courses relative to more formal
modes of learning. Counter to this, others have argued that the emancipatory effect of free online
access to education allows students to take what they need from MOOCs to meet their own
learning goals without formally completing courses; to examine completion rates is potentially
misleading (LeBar, 2014). This study takes a perspective which acknowledges that while
completing courses is not the only way of benefitting from participation in a MOOC, it is better to
try to understand the factors that affect completion rates and any implications for course design
than to ignore them.
In the literature on MOOCs there is a lack of peer-reviewed research drawing upon more than a
small number of courses from single institutions, and the need for meta-analysis independent of
MOOC platform providers is a key issue for the field at present. For
example, Kizilcec, Piech and Schneider (2013) identified learner populations based on analysis of
three early Coursera MOOCs; replication of this analytical approach on data from the Futurelearn
platform identified different groups (Ferguson & Clow, 2015). Enrolment and completion figures
are the type of data that is most widely publicly available for analysis across the field, which is
necessary to ensure that conclusions are generalisable and not particular to a small number of
detailed cases.
Understanding the factors which affect completion rate can be approached from two perspectives:
the characteristics of learners and their reasons for participating, or the design of courses.
Greater focus to date has been on the motivations and behaviours of students in relation
to success (for example, Breslow et al., 2013; Kizilcec et al., 2013; Koller, Ng, Do & Chen, 2013;
Rosé et al., 2014). However, studies have suggested that those most likely to succeed in MOOCs
are the students who are already most educationally privileged (Emanuel, 2013; Koller and Ng,
2013). Addressing MOOC completion rates from a course design perspective, in order to help as
many of the diverse learners who wish to complete to do so, is a pedagogical issue and a challenge
for course designers and instructors. In order for MOOCs to realise their potential in making
education open to all, it is not sufficient to simply make pre-existing course materials freely
available online. Completion rates are relatively low even among students who intend to complete
the course (an average of 22%; Reich, 2014) so for those students who intend to complete courses
or engage with the course as designed, not considering completion rates prevents exploration of
what can be done by educators to facilitate further student success.
This paper reports work undertaken to extend a previous study on initial trends in MOOC
completion rate (Jordan, 2014), which remains one of the largest MOOC studies in terms of
number of courses included. The dataset is expanded (to include more recent data, and not
restricted to the major MOOC providers) and multiple regression analysis is used to explore the
combined effects of a range of basic factors in correlation to completion rate. Additionally,
quantitative measures of engagement across the course of a MOOC are examined in instances
where this data is available.
Data Collection and Analysis
The approach to data collection built upon an initial dataset (reported in Jordan, 2014) which
combined enrolment and completion figures from news stories, MOOCs the author had taken as a
participant, and crowdsourced figures submitted by other students and MOOC instructors via a
blog. In order to expand the dataset, the author compiled a list of completed MOOCs not present
in the original dataset (based on information from MOOC-aggregating websites such as
https://www.class-central.com/) and performed a series of Internet searches to find sources
containing enrolment and completion information for the courses. Information was found and
included relating to a total of 221 MOOCs. Information about 35.3% of the courses was located in
news articles; 33.6% were sourced from academic reports and articles; 14.5% from instructors’
social media; 9.9% from students’ social media; and 6.8% from course sites. Further information
about courses, including course length and assessment type, was gathered via signing up to the
courses, asking participants or consulting MOOC-aggregating websites. Information about
university reputation was included based on the scores used by the Times Higher Education
World University Rankings (Times Higher Education, 2013). No further courses were added to
the dataset after November 2013. To access the full dataset, including links to each source and
further recent additions, see the online data visualization (Jordan, 2015). The dataset can be
summarised as follows:
- A total of 221 courses were included in the dataset. Within this sample, enrolment figures
  were available for 220 courses; completion figures were available for 129 courses; and
  figures relating to engagement over the course of a MOOC were available for 59 courses.
  The two most common definitions of engagement across the duration of courses used by
  the sources were the number of students accessing resources, or completing assignments.

- Courses from a range of different MOOC providers were included. Coursera (120 courses)
  and Open2Study (43 courses) were the best represented platforms, although courses from
  12 other providers and 19 independent courses were also included. A total of 78
  institutions were present in the dataset.

- A variety of different definitions of completion are in operation. Of the 129 courses for
  which completion data was available, the most prevalent definition of completion was
  earning a certificate (93 courses). Other definitions (as used by their data sources)
  included ‘completed course’ (14 courses), ‘passed course’ (10 courses), ‘completed
  assignments’ (6 courses), ‘memorably active participants’, achieving a ‘strong final score’,
  active contributors at end of course, certificates purchased, ‘kept up’ with whole course,
  or ‘took final exam’ (1 course each).

- In addition to enrolment and completion figures, other data collected about courses
  included start date, length of course (in weeks), and assessment type used. Assessment
  type was categorised using three basic categories: auto grading only (92 courses), peer
  grading only (10 courses), or a combination of both auto and peer grading (23 courses).
Three types of analysis were applied to the dataset. Since the expanded dataset encompassed a
wider time period, linear regression analysis was used to gain an overview of whether MOOC
enrolments and completion rates were changing over time.
Multiple regression analysis was then used with a sub-sample of courses to explore whether
MOOC completion rates are significantly correlated with a range of factors. Multiple regression is
“an extension of simple regression in which an outcome is predicted by two or more predictor
variables” (Field, 2009, p. 790). As an analytical approach, multiple regression offers the
advantage of being able to examine the relationship between multiple variables upon an outcome
(in this case, completion rate). The factors examined included assessment type, course length,
date, and university ranking score. While course length, date and university ranking had
previously been examined individually (Jordan, 2014), the larger dataset offered the opportunity
to consider the factors together. Platform and MOOC type were not included due to wide variation
in sample sizes. Note that full information about all of the factors was not available for every
course in the sample, so not all of the courses in the dataset were used in the regression analysis
(see results and discussion). The statistical analyses were undertaken using SPSS (Field, 2009).
The third type of analysis focused upon a smaller sample of courses for which data was available
about number of students participating week-by-week across the course of live MOOCs. In some
cases, raw data was not available and required extraction from charts using software (Rohatgi,
2014). This extends the original finding that approximately fifty percent of potential MOOC
students who sign up go on to become active users (Jordan, 2014) and examines how this trend
progresses. Participation is defined in two ways by data sources: either the number of students
viewing course materials, or the number submitting assignments. To allow comparison, these
values were expressed as a percentage of the total enrolment for each course. The resulting curves
were compared visually and average curves constructed using mean values.
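The normalisation step described above can be sketched as follows; the course figures here are invented for illustration, not drawn from the paper's dataset.

```python
# Sketch of the normalisation described above: week-by-week counts of
# active students are expressed as a percentage of each course's total
# enrolment, and a mean curve is constructed across courses.
# The enrolment and weekly figures below are hypothetical.

def attrition_curve(weekly_active, enrolled):
    """Express weekly active-student counts as a percentage of enrolment."""
    return [100.0 * n / enrolled for n in weekly_active]

def mean_curve(curves):
    """Average several percentage curves week by week (equal length assumed)."""
    n_weeks = len(curves[0])
    return [sum(c[w] for c in curves) / len(curves) for w in range(n_weeks)]

course_a = attrition_curve([5000, 2600, 2100, 1900], enrolled=10000)
course_b = attrition_curve([2400, 1300, 1000, 900], enrolled=5000)

average = mean_curve([course_a, course_b])
print(average)  # [49.0, 26.0, 20.5, 18.5]
```

Expressing each curve as a percentage of enrolment allows courses of very different sizes to be compared on the same axes, as in Figures 5 to 7.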
Results
The first analyses focused upon whether enrolments and completion rates appeared to be
changing over time. Regression analysis was performed to examine the extent to which course
start date predicts the number of students enrolled (figure 1), and the extent to which course start
date predicts the percentage of students that complete the course (figure 2). Prior to both
analyses, a Box-Cox transformation was applied as the residuals were not normally distributed.
Date significantly predicted total enrolment figures by the following formula:
Enrolled^0.180902 = 66.3311 - 0.00147092 x Date (n = 219, R^2 = 0.0252, p = 0.019). Note that
the correlation here is negative; as time has progressed, the size of the average MOOC has
decreased. Date also significantly predicted completion rate by the following formula:
PercentCompleted^0.5 = -152.428 + 0.00377601 x Date (n = 129, R^2 = 0.1457, p < 0.001). In
contrast, this represented a positive correlation, so while total enrolments have decreased,
completion rates have increased over time.
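A minimal sketch of this procedure, using synthetic data: the outcome is Box-Cox transformed (because the residuals were skewed) and then regressed on start date. The exponents and coefficients reported above came from the paper's own dataset; nothing below reproduces them.

```python
# Illustrative sketch of the analysis described above: a Box-Cox
# transformation is applied to the outcome, followed by simple linear
# regression on course start date. The data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
start_date = np.linspace(41000, 41600, 120)   # serial day numbers
completion = np.exp(0.004 * (start_date - 41000)) + rng.uniform(0.5, 1.5, 120)

# boxcox() requires strictly positive values and returns the transformed
# series together with the maximum-likelihood lambda.
transformed, lam = stats.boxcox(completion)

slope, intercept, r, p, se = stats.linregress(start_date, transformed)
print(f"lambda = {lam:.3f}, R^2 = {r**2:.3f}, p = {p:.2g}")
```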
Figure 1. Scatterplot of MOOC course enrolments plotted against course start date (n=219).
Figure 2. Scatterplot of percentage of students who completed courses plotted against course start
date (n=129).
Multiple Regression Analysis
A multiple regression analysis was carried out in order to examine the combined effects of the
factors upon completion rate, which had previously been examined individually (Jordan, 2014).
An initial run of the analysis included the following factors: University ranking score, start date,
course length (in weeks), total number of students enrolled, and assessment type. Assessment
type was a categorical variable comprising three categories (‘Auto grading only’, ‘peer grading
only’, and ‘auto and peer grading’). The start date variable was defined as the date that each
course formally launched; for the purposes of the analysis, this information was converted to the
Lilian date format. The result of this initial analysis showed that while the model explained a
significant amount of the variance in completion rate (F(6, 53) = 13.98, p < .05, R^2 = .61,
adjusted R^2 = .57), not all of the factors significantly predicted completion rate. In light of this, university
ranking and total enrolment were excluded from the analysis, and assessment types recoded into
two categories based on whether or not peer grading was used at all.
The analysis proceeded to examine the extent to which start date, course length and use of peer
grading predicted completion rate. An analysis of standard residuals was carried out, which
showed that the data contained no outliers (Std. Residual Min = -2.56, Std. Residual Max = 2.07).
Tests to see if the data met the assumption of collinearity indicated that multicollinearity was not
a concern (course length, Tolerance = .96, VIF = 1.40; start date, Tolerance = .99, VIF = 1.01; use
of peer grading, Tolerance = .96, VIF = 1.05). The data met the assumption of independent errors
(Durbin-Watson value = 1.08). The histogram of standardised residuals indicated that the data
contained approximately normally distributed errors, as did the normal P-P plot of standardised
residuals. The scatterplot of standardised predicted values showed that the data met the
assumption of linearity but may have violated the assumption of homoscedasticity. Note that
violating this assumption does not invalidate the analysis, which is accurate based upon the
sample used, but reduces the likelihood that the model generalises to the population (this would
not be certain, but more likely, if all the assumptions were met; Field, 2009). The
heteroscedasticity observed is not severe and is most likely caused by the presence of significant
variables that are not included in the model. Given the opportunistic nature of the data collection
it would be surprising if all significant variables had been identified. However, this does not
detract from the significant factors that have been determined. The data also met the assumption
of non-zero variances (course length, Variance = 13.58; start date, Variance = 15480621.9; use of
peer grading, Variance = .195).
Using the enter method it was found that course length, start date and use of peer grading explain
a significant amount of the variance in the completion rate (F(3, 117) = 57.7, p < .05, R^2 = .60,
adjusted R^2 = .59). The model summary, ANOVA table and coefficients table from the analysis are
shown in tables 1, 2 and 3, respectively.
Table 1
Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
1       .773a   .597       .587                6.905697                     1.084

a. Predictors: (Constant), PeerGrading, DateISO, Length
b. Dependent Variable: PercentCompleted
Table 2
ANOVA Table

Model          Sum of Squares   df    Mean Square   F        Sig.
1  Regression  8260.829         3     2753.610      57.741   .000b
   Residual    5579.573         117   47.689
   Total       13840.402        120

a. Dependent Variable: PercentCompleted
b. Predictors: (Constant), PeerGrading, DateISO, Length
Table 3
Coefficients Table

Model            B        Std. Error   Beta    t         Sig.   Tolerance   VIF
1  (Constant)    57.380   5.029                11.409    .000
   DateISO       -.001    .000         -.343   -5.816    .000   .991        1.009
   Length        -1.751   .209         -.503   -8.391    .000   .958        1.043
   PeerGrading   -9.606   1.472        -.392   -6.527    .000   .955        1.047

(B and Std. Error are unstandardised coefficients; Beta is the standardised coefficient.)

a. Dependent Variable: PercentCompleted
The correlation between start date and completion rate was positive, in that completion rates
increased over time. The other two factors, course length and whether or not peer grading was
used, were both negatively correlated with completion rate (Figures 3 and 4), with longer courses
and those which use peer grading having lower completion rates than shorter or auto-graded courses.
Figure 3. Completion rate plotted against course length in weeks.
Figure 4. Boxplots showing distributions of completion rates according to different assessment
types.
This categorisation is not elaborate and, with an R^2 of 60%, a reasonable proportion of the
variance remains unexplained. It does, however, demonstrate that it is possible to gain insights
into the impact of learning design decisions by considering completion rate. Availability of data is
an obstacle to more detailed analyses (such as considering the use of exams, different forms of
auto and peer grading, or formative versus summative assessment) at this stage.
Attrition Rates During Live Courses
The previous study considered the conversion rate between students who enrol and those who go
on to become active in courses, by accessing course materials or logging in to the course site
(Jordan, 2014). This part of the analysis sought to extend this by considering levels of use
week-by-week during live courses, for MOOCs where this level of detail is available. The data
collected by courses focused only upon levels of use during the period in which the course was active.
Participation was defined in two ways; either by the number accessing course materials (‘active
students’), or the number who submitted assignments. Data about active students was available
for 59 courses (figure 5), and those submitting assignments in 54 courses (figure 6). The sample
included data from a range of courses and platforms, including 43 Open2Study courses
(Open2Study, 2013), 17 Coursera courses (Belanger, 2013; Duke University, 2012; Grainger,
2013; University of Edinburgh, 2013; Severance, 2013); one edX course (Breslow et al., 2013);
and two platform-independent courses (Cross, 2013; Weller, 2013). Note that the majority of
courses included in figures 5, 6 and 7 were four weeks long due to the open availability of data
from the Open2Study platform, upon which all courses are four weeks in duration.
Figure 5. Proportion of active students (accessing course materials) per week since start of course
as a percentage of total enrolment (n=59).
Figure 6. Proportion of students submitting assessments per week since start of courses as a
percentage of total enrolment (n=54).
The curves shown in Figures 5 and 6 are notable in two main ways. First, while the sample
contains a variety of different types of MOOCs (different platforms, institutions, and modes of
teaching and assessment are present), the curves follow a similar overall trend. Around half of a
MOOC's enrolled students will not show up, and the first two weeks of a course appear to be
critical in gaining student engagement. Second, after the first two weeks, there is little difference
between the two measures of engagement: those accessing course materials and those
submitting assignments. The difference between the two measures is shown in figure 7. After
week 3, the difference is less than five percent for all but two courses. This calls into question the
extent to which ‘lurking’ (that is, selectively accessing course materials but not actively
participating in assessments) is being used as a participation strategy, underlining the need for
further research in this area.
Figure 7. Difference between percentage of active students and percentage of students submitting
assignments per week since the start of courses as a percentage of total enrolment (n=50).
Note that since this work was undertaken, a similar study has been published based on rich,
detailed data from 16 Coursera-based MOOCs provided by the University of Pennsylvania (Perna,
Ruby, Boruch, Wang, Scull, Ahmad & Evans, 2014). This study corroborates the findings here in
that the initial weeks of courses are key for students' engagement. Over the course of the MOOCs
in the Perna et al. (2014) sample, broadly similar attrition curves are demonstrated, and the gap
between students accessing materials and taking assignments narrows over time.
Conclusions
The multiple regression analysis highlights that it may be possible to gain insights into the
impacts of different aspects of MOOC course design by considering completion rate across a large
sample of courses. The results here may be useful for educators to consider when designing
MOOCs, although there are limitations to this study and further research would be valuable.
Factors that significantly predicted completion rate included start date, course length and
assessment type. Completion rates were positively correlated with start date; that is, more recent
courses demonstrate higher percentage completion. This is likely due to a decrease in average
total enrolments over time, but may also reflect feedback and iterative design of courses.
On the basis of the negative correlation with course length, coupled with the attrition observed in
the initial weeks of courses, a case could be made for shorter, more modular courses. Greater
signposting would be required between courses for those students looking to create a more
substantial programme of learning. Shorter courses with better guidance about how they could be
combined could also benefit those students who prefer to direct their own learning by making it
easier to find the parts of a course that they value; this would also allow for these students’ MOOC
achievements to be recognised. Modularisation for MOOCs has already been suggested by some
(for example, Bol cited in Harvard Magazine, 2013; Challen & Seltzer, 2014); the evidence here
provides an empirical rationale for such developments, and further research would be valuable to
examine the effects in practice. Note that, in contrast to this finding, Perna et al. (2014) reported
no relationship between course length and completion rate. Given the similarity between the
attrition curves reported by Perna et al. and those presented here, it is likely that the lack of a
negative correlation is due to small sample size (16 courses from a single institution and platform,
several of which are included in the dataset here).
The negative correlation between use of peer grading for assessments and completion rate
suggests that course designers should carefully consider whether to use this as an assessment
mechanism, or whether automated assessments would meet their educational goals. For example,
in the case of peer grading short essays using a rubric based on factual recall, similar results could
be achieved using multiple choice questions. In contrast, larger, project-based peer graded
assessments may yield more significant learning gains for the students who do complete them as
they are arguably more demanding and there is an aspect of vicarious learning from assessing
others, though the quality of this also requires empirical verification. Further research into the
use of peer grading would be valuable to investigate the reasons behind this finding. For example,
it could be hypothesised that the lower completion rate in courses using peer grading may be due
to peer assessments being more rigorous, disengagement due to having to wait for feedback, or
reasons why students may choose not to attempt them (such as proficiency in English, for
example). Another possible factor influencing increased attrition in peer graded MOOCs may be
disengagement from students from minority cultural backgrounds as MOOC students assessing
their peers have been shown to give higher marks to students from their own country (Kulkarni,
Koh, Le, Papadopoulos, Cheng, Koller & Klemmer, 2013). A better understanding of these issues
is needed to clarify the circumstances in which peer grading could be recommended.
While this study provides some insights into the potential impact of learning design decisions
upon completion rates, it does have its limitations and further empirical work is required. The
principal limitation of this work is the availability of data from courses. This sample only reflects
courses for which data is publicly available; a more detailed picture would be possible if further
data was made available. The definition of completion rate as a percentage of enrolled students
may be over-simplistic and subject to wide variations in enrolments (Ho et al., 2014). More
nuanced definitions have been called for to reflect the numerous ways students may interact with
MOOCs (DeBoer et al., 2014). However, the fact remains that total enrolments and certificate-
earning completers are the statistics most frequently present in the public domain. As the body of
academic literature related to MOOCs grows, the potential for more detailed and robust meta-
analysis is likely to increase in the future.
Acknowledgments
This research was supported by a grant from the MOOC Research Initiative, funded by the Gates
Foundation.
References
Belanger, Y. (2013). IntroAstro: An intense experience. Duke University Libraries. Retrieved
from http://hdl.handle.net/10161/6679
Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A.D., & Seaton, D. T. (2013). Studying
learning in the worldwide classroom: Research into edX’s first MOOC. Research and
Practice in Assessment, 8, 13-25.
Challen, G. & Seltzer, M. (2014) Enabling MOOC collaborations through modularity. Proceedings
of the 2014 Learning with MOOCs Practitioner’s Workshop. Retrieved from
http://blue.cse.buffalo.edu/papers/lwmoocs2014-mmoocs/
Cross, S. (2013). Evaluation of the OLDS MOOC curriculum design course: participant
perspectives, expectations and experiences. OLDS MOOC Project, Milton Keynes.
Retrieved from http://oro.open.ac.uk/37836/
DeBoer, J., Ho, A., Stump, G., & Breslow, L. (2014). Changing “course:” reconceptualizing
educational variables for massive open online courses. Educational Researcher, 43(2),
74-84.
Duke University (2012) Introduction to Genetics and Evolution, preliminary report. Duke Today.
Retrieved from http://today.duke.edu/node/93914
Emanuel, E. J. (2013). Online education: MOOCs taken by educated few. Nature, 503, 342.
https://doi.org/10.1038/503342a
Ferguson, R. & Clow, D. (2015). Examining engagement: Analysing learner subpopulations in
massive open online courses (MOOCs). In 5th International Learning Analytics and
Knowledge Conference (LAK15), 16-20 March 2015, Poughkeepsie, NY, USA. ACM.
Field, A. (2009) Discovering statistics using SPSS, 3rd ed. London: SAGE.
Grainger, B. (2013). Overview statistics for the International Programmes' Coursera MOOCs.
University of London International Academy. Retrieved from
http://www.londoninternational.ac.uk/sites/default/files/governance/ltas13/ltas13.3_mooc_statistics.pdf
Harvard Magazine (2013) What modularity means for MOOCs. Harvard Magazine, 5th December
2013. Retrieved from http://harvardmagazine.com/2013/12/harvard-mit-online-
education-views-changing
Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J., & Chuang, I. (2014).
HarvardX and MITx: The first year of open online courses (HarvardX and MITx Working
Paper No. 1). Retrieved from:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2381263
Jordan, K. (2014) Initial trends in enrolment and completion of massive open online courses. The
International Review of Research in Open and Distance Learning, 15(1), 133-160.
Jordan, K. (2015) MOOC completion rates: The data. Retrieved from:
http://www.katyjordan.com/MOOCproject.html
Kizilcec, R. F., Piech, C., & Schneider, E. (2013). Deconstructing disengagement: Analyzing
learner subpopulations in massive open online courses. Third International Conference
on Learning Analytics and Knowledge, LAK '13, Leuven, Belgium.
Koller, D., & Ng, A. (2013). The online revolution: Education for everyone. Seminar presentation
at the Said Business School, Oxford University, 28th January 2013. Retrieved from
http://www.youtube.com/watch?v=mQ-K-sOW4fU&feature=youtu.be
Koller, D., Ng, A., Do, C., & Chen, Z. (2013). Retention and intention in massive open online
courses: In depth. Educause Review. Retrieved from
http://www.educause.edu/ero/article/retention-and-intention-massive-open-online-courses-depth-0
Kulkarni, C., Koh, P. W., Le, H., Chia, D., Papadopoulos, K., Cheng, J., Koller, D., & Klemmer,
S. R. (2013). Peer and self-assessment in massive online classes. ACM Transactions on
Computer-Human Interaction, 9(4), Article 39.
LeBar, M. (2014). MOOCs: Completion is not important. Forbes. Retrieved from
http://www.forbes.com/sites/ccap/2014/09/16/moocs-finishing-is-not-the-important-part/
Open2Study (2013) Open2study Research Report, September 2013. Retrieved from
https://www.open2study.com/research/download/417
Perna, L. W., Ruby, A., Boruch, R. F., Wang, N., Scull, J., Ahmad, S., & Evans, C. (2014). Moving
through MOOCs: Understanding the progression of users in massive open online
courses. Educational Researcher, 43(9), 421-432.
Reich, J. (2014). MOOC completion and retention in the context of student intent. Educause
Review. Retrieved from http://www.educause.edu/ero/article/mooc-completion-and-retention-context-student-intent
Rohatgi, A. (2014) WebPlotDigitizer software tool. Retrieved from
http://arohatgi.info/WebPlotDigitizer/
Rosé, C. P., Carlson, R., Yang, D., Wen, M., Resnick, L., Goldman, P., & Sherer, J. (2014). Social
factors that contribute to attrition in MOOCs. In Proceedings of the First ACM
Conference on Learning@Scale (pp. 197-198). ACM.
Severance, C. (2012). Internet history, technology and security (IHTS). Retrieved from
http://www.slideshare.net/fullscreen/csev/internet-history-technology-and-security-grand-finale-lecture-20121001/7
Times Higher Education. (2013). World University Rankings 2013-2014. Retrieved from
http://www.timeshighereducation.co.uk/world-university-rankings/
University of Edinburgh. (2013). MOOCs @ Edinburgh 2013 - Report #1. Retrieved from
http://hdl.handle.net/1842/6683
Weller, M. (2013). H817Open reflections. The Ed Techie blog. Retrieved from
http://nogoodreason.typepad.co.uk/no_good_reason/h817open/
© Jordan
Article
http://ubi-learn.com/the-latest-news/mooc-completion-rates-the-data Massive Open Online Courses (MOOCs) have the potential to enable free university-level education on an enormous scale. A concern often raised about MOOCs is that although thousands enrol for courses, a very small proportion actually complete the course. The release of information about enrollment and completion rates from MOOCs appears to be ad hoc at the moment - that is, official statistics are not published for every course. This data visualisation draws together information about enrollment numbers and completion rates from across online news stories and blogs. How big is the typical MOOC? - while enrollment has reached up to ~180,000, 50,000 students enrolled is a much more typical MOOC size. How many students complete courses? - completion rates can approach 20%, although most MOOCs have completion rates of less than 10%. Clicking on data points on the chart will display further details about each course, including a link to the data source.