How Video Production Affects Student Engagement:
An Empirical Study of MOOC Videos
Philip J. Guo
MIT CSAIL / University of Rochester
pg@cs.rochester.edu
Juho Kim
MIT CSAIL
juhokim@mit.edu
Rob Rubin
edX
rrubin@edx.org
ABSTRACT
Videos are a widely-used kind of resource for online learn-
ing. This paper presents an empirical study of how video
production decisions affect student engagement in online ed-
ucational videos. To our knowledge, ours is the largest-scale
study of video engagement to date, using data from 6.9 mil-
lion video watching sessions across four courses on the edX
MOOC platform. We measure engagement by how long students watch each video and whether they attempt to answer post-video assessment problems.
Our main findings are that shorter videos are much more en-
gaging, that informal talking-head videos are more engaging,
that Khan-style tablet drawings are more engaging, that even
high-quality pre-recorded classroom lectures might not make
for engaging online videos, and that students engage differ-
ently with lecture and tutorial videos.
Based upon these quantitative findings and qualitative in-
sights from interviews with edX staff, we developed a set
of recommendations to help instructors and video producers
take better advantage of the online video format. Finally, to
enable researchers to reproduce and build upon our findings,
we have made our anonymized video watching data set and
analysis scripts public. To our knowledge, ours is one of the
first public data sets on MOOC resource usage.
Author Keywords
Video engagement; online education; MOOC
ACM Classification Keywords
H.5.1. Information Interfaces and Presentation (e.g. HCI):
Multimedia Information Systems
INTRODUCTION
Educators have been recording instructional videos for nearly
as long as the format has existed. In the past decade, though,
free online video hosting services such as YouTube have en-
abled people to disseminate instructional videos at scale. For
example, Khan Academy videos have been viewed over 300
million times on YouTube [1].
Figure 1. Video production style often affects student engagement in
MOOCs. Typical styles include: a.) classroom lecture, b.) “talking
head” shot of an instructor at a desk, c.) digital tablet drawing format
popularized by Khan Academy, and d.) PowerPoint slide presentations.
Videos are central to the student learning experience in the
current generation of MOOCs from providers such as Cour-
sera, edX, and Udacity (sometimes called xMOOCs [7]).
These online courses are mostly organized as sequences of
instructor-produced videos interspersed with other resources
such as assessment problems and interactive demos. A study
of the first edX course (6.002x, Circuits and Electronics)
found that students spent the majority of their time watch-
ing videos [2, 13]. Also, a study of three Coursera courses
found that many students are auditors who engage primarily
with videos while skipping over assessment problems, online
discussions, and other interactive course components [9].
Due to the importance of video content in MOOCs, video
production staff and instructional designers spend consider-
able time and money producing these videos, which are often
filmed in diverse styles (see Figure 1). From our discussions
with staff at edX, we learned that one of their most pressing
questions was: Which kinds of videos lead to the best stu-
dent learning outcomes in a MOOC? A related question that
affects the rate at which new courses can be added is how
to maximize student learning while keeping video production
time and financial costs at reasonable levels.
As a step toward this goal, this paper presents an empirical
study of students’ engagement with MOOC videos, as measured by how long students watch each video and whether they attempt to answer post-video assessment problems. We choose to study engagement because it is a necessary (but not sufficient) prerequisite for learning, and because
it can be quantified by retrospectively mining user interaction
logs from past MOOC offerings.
Finding | Recommendation
Shorter videos are much more engaging. | Invest heavily in pre-production lesson planning to segment videos into chunks shorter than 6 minutes.
Videos that intersperse an instructor’s talking head with slides are more engaging than slides alone. | Invest in post-production editing to display the instructor’s head at opportune times in the video.
Videos produced with a more personal feel could be more engaging than high-fidelity studio recordings. | Try filming in an informal setting; it might not be necessary to invest in big-budget studio productions.
Khan-style tablet drawing tutorials are more engaging than PowerPoint slides or code screencasts. | Introduce motion and continuous visual flow into tutorials, along with extemporaneous speaking.
Even high-quality pre-recorded classroom lectures are not as engaging when chopped up for a MOOC. | If instructors insist on recording classroom lectures, they should still plan with the MOOC format in mind.
Videos where instructors speak fairly fast and with high enthusiasm are more engaging. | Coach instructors to bring out their enthusiasm and reassure them that they do not need to purposely slow down.
Students engage differently with lecture and tutorial videos. | For lectures, focus more on the first-watch experience; for tutorials, add support for rewatching and skimming.
Table 1. Summary of the main findings and video production recommendations that we present in this paper.

Also, video engagement
is important even beyond education. For instance, commer-
cial video hosting providers such as YouTube and Wistia use
engagement as a key metric for viewer satisfaction [6, 16],
which directly drives revenues.
The importance of scale: MOOC video producers currently
base their production decisions on anecdotes, folk wisdom,
and best practices distilled from studies with at most dozens
of subjects and hundreds of video watching sessions. The
scale of data from MOOC interaction logs—hundreds of
thousands of students from around the world and millions of
video watching sessions—is four orders of magnitude larger
than those available in prior studies [11, 15].
Such scale enables us to corroborate traditional video engage-
ment research and extend their relevance to a modern online
context. It also allows MOOC video producers to make more
rigorous decisions based on data rather than just intuitions.
Finally, it could enable our findings and recommendations to
generalize beyond MOOCs to other sorts of informal online
learning that occurs when, say, hundreds of millions of people
watch YouTube how-to videos on topics ranging from cook-
ing to knitting.
This paper makes three main contributions:
- Findings from an empirical study of MOOC video engagement, combining data analysis of 6.9 million video watching sessions in four edX courses with interviews with six edX production staff. The left column of Table 1 summarizes our seven main findings. To our knowledge, ours is the largest-scale study of video engagement to date.
- Recommendations for instructional designers and video producers, based on our study’s findings (see the right column of Table 1). Staff at edX are already starting to use some of these recommendations to nudge professors toward cost-effective video production techniques that lead to greater student engagement.
- An anonymized public data set of 6.9 million video watching sessions, along with analysis scripts and installation instructions to enable full reproducibility of our results. Located at http://www.pgbovine.net/edX/, ours is one of the first public data sets on MOOC resource usage.
RELATED WORK
To our knowledge, our study is the first to correlate video
production style with engagement at scale using millions of
viewing sessions.
The closest related work is by Cross et al., who studied some
of these effects in a controlled experiment [4]. They created
Khan-style (tablet drawing) and PowerPoint slide versions of
three video lectures and surveyed 150 people online about
their preferences. They found that the two formats had com-
plementary strengths and weaknesses, and developed a hybrid
style called TypeRighting that tries to combine the benefits of
both. Ilioudi et al. performed a similar study using three
pairs of videos recorded in both live classroom lecture and
Khan-style formats, like those shown in Figure 1a. and c.,
respectively. They presented those videos to 36 high school
students, who showed a slight preference for classroom lec-
ture videos over Khan-style videos [8]. Although these stud-
ies lack the scale of ours, they collected direct feedback from
video watchers, which we have not yet done.
Prior large-scale analyses of MOOC interaction data (e.g., [2,
3, 9, 13]) have not focused on videos in particular. Some of
this work provides the motivation for our study. For instance,
a study of the first edX course (6.002x, Circuits and Electron-
ics) found that students spent the majority of their time watch-
ing videos [2, 13]. And a study of three Coursera courses
found that many students are auditors who engage primarily with videos while skipping over assessment problems, online discussions, and other interactive course components [9].

Course | Subject | University | Lecture Setting | Videos | Students | Watching sessions
6.00x | Intro. CS & Programming | MIT | Office Desk | 141 | 59,126 | 2,218,821
PH207x | Statistics for Public Health | Harvard | TV Studio | 301 | 30,742 | 2,846,960
CS188.1x | Artificial Intelligence | Berkeley | Classroom | 149 | 22,690 | 1,030,215
3.091x | Solid State Chemistry | MIT | Classroom | 271 | 15,281 | 806,362
Total | | | | 862 | 127,839 | 6,902,358
Table 2. Overview of the Fall 2012 edX courses in our data set. “Lecture Setting” is the location where lecture videos were filmed. “Students” is the number of students who watched at least one video.
Finally, educators have been using videos and electronic me-
dia for decades before MOOCs launched. Mayer surveys
cognitive science research on the impacts of multimedia on
student learning [11]. Williams surveys general instructional
media best practices from the 1950s to 1990s [15]. And Lev-
asseur surveys best practices for using PowerPoint lectures in
classrooms [10]. These studies have at most dozens of sub-
jects and hundreds of video watching sessions. Our study
extends these lines of work to a large-scale online setting.
METHODOLOGY
We took a mixed methods approach: We analyzed data from
four edX courses and supplemented our quantitative findings
with qualitative insights from interviews with six edX staff
who were involved in producing those courses.
Course Selection
We analyzed data from four courses in the first edX batch
offered in Fall 2012 (see Table 2). We selected courses from
all three edX affiliates at the time (MIT, Harvard, and UC
Berkeley) and strived to maximize diversity in subject matter
and video production styles (see Figure 1).
However, since all Fall 2012 courses were math/science-
focused, our corpus does not include any humanities or social
science courses. EdX launched additional courses in Spring
2013, but that data was incomplete when we began this study.
To improve external validity, we plan to replicate our experi-
ments on more courses once we obtain their data.
Video Watching Sessions
The main data we analyze is a video watching session, which
represents a single instance of a student watching a particular
edX video. Each session contains a username, video ID, start
and end times, video play speed (1x, 1.25x, 1.5x, 0.75x, or
multiple speeds), numbers of times the student pressed the
play and pause buttons, and whether the student attempted an
assessment problem shortly after watching the given video.
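For illustration, here is a minimal Python sketch of how one such session record might be represented; the field names are our own illustrative choices, not the exact schema of the released data set.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WatchingSession:
    username: str            # anonymized student identifier
    video_id: str
    start_time: datetime
    end_time: datetime
    play_speed: str          # "1x", "1.25x", "1.5x", "0.75x", or "multiple"
    num_plays: int           # times the play button was pressed
    num_pauses: int          # times the pause button was pressed
    attempted_problem: bool  # assessment attempt within 30 min of watching

    @property
    def duration_seconds(self) -> float:
        """Engagement time, our primary proxy for engagement."""
        return (self.end_time - self.start_time).total_seconds()
```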
To extract video watching sessions, we mined the edX server
logs for our four target courses. The edX website logs user in-
teraction events such as navigating to a page, playing a video,
pausing a video, and submitting a problem for grading. We
segmented the raw logs into video watching sessions based on these heuristics (sketched in code below): Each session starts with a “play video” event for a particular student and video, and it ends when:
- that student triggers any event not related to the current video (e.g., navigating to another page),
- that student ends the current login session,
- there is at least a 30-minute gap before that student’s next event (Google Analytics [5] uses this heuristic for segmenting website visits), or
- the video finishes playing. The edX video player issues a “pause video” event when a video ends, so if a student plays, say, a five-minute video and then walks away from the computer, that watching session will conclude when the video ends after five minutes.
In Fall 2012, the edX video player automatically started play-
ing each video (and issued a “play video” event) as soon as a
student loads the enclosing page. Many students paused the
video almost immediately or navigated to another page. Thus,
we filtered out all sessions lasting shorter than five seconds,
because those were likely due to auto-play.
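The following sketch illustrates the segmentation heuristics and the auto-play filter; the event attribute names (time, kind, video_id) are assumptions for illustration, not the raw edX log schema.

```python
from datetime import timedelta

SESSION_GAP = timedelta(minutes=30)  # Google Analytics-style visit gap [5]
MIN_SESSION_SECONDS = 5              # drop likely auto-play artifacts

def segment_sessions(events):
    """Split one student's time-ordered event stream into video
    watching sessions, per the heuristics above. Each event is
    assumed to expose .time, .kind (e.g., 'play_video', 'pause_video',
    'page_view', 'logout'), and .video_id."""
    sessions, current = [], None  # current = (video_id, start, last)
    for ev in events:
        if current is not None:
            video_id, start, last = current
            ends = (ev.time - last >= SESSION_GAP    # 30-minute gap
                    or ev.kind == 'logout'           # login session ended
                    or ev.video_id != video_id)      # unrelated event
            # The player fires 'pause_video' at end-of-video, so a
            # finished video is closed out by whichever event comes next.
            if ends:
                sessions.append((video_id, start, last))
                current = None
            else:
                current = (video_id, start, ev.time)
        if current is None and ev.kind == 'play_video':
            current = (ev.video_id, ev.time, ev.time)
    if current is not None:
        sessions.append(current)
    # Filter out sessions shorter than five seconds (likely auto-play).
    return [(v, s, e) for (v, s, e) in sessions
            if (e - s).total_seconds() >= MIN_SESSION_SECONDS]
```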
Our script extracted 6.9 million total video watching sessions
across four courses during the time period when they were
initially offered in Fall 2012 (see Table 2).
Measuring Engagement
We aim to measure student engagement with instructional
videos. However, true engagement is impossible to measure
without direct observation and questioning, which is infeasi-
ble at scale. Thus, we use two proxies for engagement:
Engagement time: We use the length of time that a student
spends on a video (i.e., video watching session length) as the
main proxy for engagement. Engagement time is a standard
metric used by both free video providers such as YouTube [6]
and enterprise providers such as Wistia [16]. However, its
inherent limitation is that it cannot capture whether a watcher
is actively paying attention to the video or just playing it in
the background while multitasking.
Problem attempt: 32% of the videos across our four courses
are immediately followed by an assessment problem, which
is usually a multiple-choice question designed to check a
student’s understanding of the video’s contents. We record
whether a student attempted the follow-up problem within 30
minutes after watching a video. A problem attempt indicates
more engagement than moving on without attempting.
When we refer to engagement throughout this paper, we mean
engagement as measured through these two proxies, not the
difficult-to-measure ideal of true engagement.
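As a concrete sketch, the two proxies can be computed per session roughly as follows; the two mappings passed in are illustrative stand-ins for the real course data, not structures from our released scripts.

```python
from datetime import timedelta

ATTEMPT_WINDOW = timedelta(minutes=30)

def engagement_time(session):
    """Proxy 1: session length in seconds."""
    return (session.end_time - session.start_time).total_seconds()

def attempted_followup(session, followup_problem, attempts):
    """Proxy 2: True if the student attempted the video's follow-up
    problem within 30 minutes of watching.
    followup_problem: dict video_id -> problem_id (absent if the video
    has no follow-up problem); attempts: dict (username, problem_id) ->
    sorted list of attempt timestamps."""
    pid = followup_problem.get(session.video_id)
    if pid is None:
        return None  # only 32% of videos have a follow-up problem
    times = attempts.get((session.username, pid), [])
    return any(session.end_time <= t <= session.end_time + ATTEMPT_WINDOW
               for t in times)
```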
Video Properties
To determine how video production correlates with engage-
ment, we extracted four main properties from each video.
Length: Since all edX videos are hosted on YouTube, we
wrote a script to get each video’s length from YouTube.
Speaking rate: All edX videos come with time-coded subti-
tles, so we approximated the speaking rate of each video by
dividing the total number of spoken words by the total in-
video speaking time (i.e., words per minute); a code sketch appears at the end of this subsection.
Video type: We manually looked through each video and cat-
egorized its type as either an ordinary lecture, a tutorial (e.g.,
problem solving walkthrough), or other content such as a sup-
plemental film clip. 89% of all videos were either lectures or
tutorials, so we focus our analyses only on those two types.
Production style: We looked through each video and coded
its production style using the following labels:
Slides – PowerPoint slide presentation with voice-over
Code – video screencast of the instructor writing code in a
text editor, IDE, or command-line prompt
Khan-style – full-screen video of an instructor drawing
freehand on a digital tablet, which is a style popularized
by Khan Academy videos
Classroom – video captured from a live classroom lecture
Studio – instructor recorded in a studio with no audience
Office Desk – close-up shots of an instructor’s head filmed
at an office desk
Note that a video can contain multiple production styles, such
as alternating between PowerPoint slides and an instructor’s
talking head recorded at an office desk. Thus, each video can
have multiple labels.
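For instance, the words-per-minute computation can be approximated from standard SRT-style time-coded subtitles as below. This is a sketch under the assumption of 'HH:MM:SS,mmm --> HH:MM:SS,mmm' cue lines; edX's actual subtitle format and our released scripts may differ.

```python
import re

_TS = re.compile(r'(\d+):(\d+):(\d+)[,.](\d+)')

def _seconds(ts):
    h, m, s, ms = (int(g) for g in _TS.match(ts).groups())
    return 3600 * h + 60 * m + s + ms / 1000.0

def speaking_rate_wpm(srt_text):
    """Total spoken words divided by total in-video speaking time."""
    words, seconds = 0, 0.0
    for block in srt_text.strip().split('\n\n'):
        lines = block.splitlines()
        timing = next((l for l in lines if '-->' in l), None)
        if timing is None:
            continue
        start, end = (t.strip() for t in timing.split('-->'))
        seconds += _seconds(end) - _seconds(start)
        words += sum(len(l.split()) for l in lines[lines.index(timing) + 1:])
    return 60.0 * words / seconds if seconds else 0.0
```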
Interviews With Domain Experts
To supplement our quantitative findings, we presented our
data to domain experts at edX to solicit their feedback and
interpretations. In particular, we conducted informal inter-
views with the four principal edX video producers who were
responsible for overseeing all phases of video production—
planning, filming, and editing. We also interviewed two pro-
gram managers who were the liaisons between edX and the
respective university course staff.
Public Anonymized Data Set and Scripts
We have uploaded an anonymized version of our data set
along with analysis scripts and database installation instruc-
tions to http://www.pgbovine.net/edX/ so that other re-
searchers can reproduce and build upon this paper’s findings.
To our knowledge, ours is one of the first public data sets on
MOOC resource usage.
FINDINGS AND RECOMMENDATIONS
We now detail the findings and recommendations of Table 1.
Figure 2. Boxplots of engagement times in minutes (top) and normalized
to each video’s length (bottom). In each box, the middle red bar is the
median; the top and bottom blue bars are 25th and 75th percentiles,
respectively. The median engagement time is at most 6 minutes.
Shorter Videos Are More Engaging
Video length was by far the most significant indicator of en-
gagement. Figure 2 splits videos into five roughly equal-sized
buckets by length and plots engagement times for 1x-speed
sessions in each group.¹ The top boxplot (absolute engage-
ment times) shows that median engagement time is at most
6 minutes, regardless of total video length. The bottom box-
plot (engagement times normalized to video length) shows
that students often make it less than halfway through videos
longer than 9 minutes. The shortest videos (0–3 minutes)
had the highest engagement and much less variance than all
other groups: 75% of sessions lasted over three quarters of
the video length. Note that normalized engagement can be
greater than 1.0 if a student paused to check understanding or
scrolled back to re-play an earlier portion before finishing the
video.
To account for inter-course differences, we made plots indi-
vidually for the four courses and found identical trends.
Students also engaged less frequently with assessment prob-
lems that followed longer videos. For the five length buck-
ets in Figure 2, we computed the percentage of video watch-
ing sessions followed by a problem attempt: The percentages
were 56%, 48%, 43%, 41%, and 31%, respectively.
¹ Plotting all sessions pulls down the distributions due to students
playing at 1.25x and 1.5x speeds and finishing videos faster, but
trends remain identical. In this paper, we report results only for
1x-speed plays, which comprise 76% of all sessions. Our code and
data are available to re-run on all sessions, though.
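A pandas sketch of this bucketing, assuming a per-session table with illustrative column names and the 0–3/3–6/6–9/9–12/12+ minute buckets shown in Figure 2:

```python
import pandas as pd

def engagement_by_length(df):
    """df: one row per 1x-speed session, with columns video_len_min,
    engagement_min, and attempted (bool; NaN if no follow-up problem)."""
    buckets = [0, 3, 6, 9, 12, float('inf')]
    labels = ['0-3 min', '3-6 min', '6-9 min', '9-12 min', '>12 min']
    df = df.assign(
        bucket=pd.cut(df['video_len_min'], bins=buckets, labels=labels),
        normalized=df['engagement_min'] / df['video_len_min'],
    )
    return df.groupby('bucket').agg(
        median_engagement_min=('engagement_min', 'median'),
        median_normalized=('normalized', 'median'),
        pct_followed_by_attempt=('attempted', 'mean'),
    )
```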
Figure 3. Median engagement times versus length for videos from 6.00x (left) and PH207x (right). In both courses, students engaged more with videos
that alternated between the instructor’s talking head and slides/code. Also, students engaged more with 6.00x videos, filmed with the instructor sitting
at a desk, than with PH207x videos, filmed in a professional TV studio (the left graph has higher values than the right one, especially for videos longer
than 6 minutes). Error bars are approximate 95% confidence intervals for the true median, computed using a standard non-parametric technique [14].
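The confidence intervals in these figures use a standard order-statistic method [14]; a minimal sketch of that computation:

```python
import numpy as np
from scipy import stats

def median_ci(samples, confidence=0.95):
    """Approximate CI for the true median from order statistics,
    using a normal approximation to the binomial (the standard
    non-parametric recipe referenced in [14])."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    z = stats.norm.ppf(0.5 + confidence / 2.0)
    lower_rank = max(int(np.floor(n / 2.0 - z * np.sqrt(n) / 2.0)), 1)
    upper_rank = min(int(np.ceil(1 + n / 2.0 + z * np.sqrt(n) / 2.0)), n)
    return x[lower_rank - 1], x[upper_rank - 1]
```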
This particular set of findings resonated most strongly with
video producers we interviewed at edX. Ever since edX
formed, producers had been urging instructors to split up
lessons into chunks of less than 6 minutes, based solely upon
their prior intuitions. However, they often encountered re-
sistance from instructors who were accustomed to delivering
one-hour classroom lectures; for those instructors, even a 15-
minute chunk seems short. Video producers are now using
our data to make a more evidence-based case to instructors.
One hypothesis that came out in our interviews with video
producers was that shorter videos might contain higher-
quality instructional content. Their hunch is that it takes
meticulous planning to explain a concept succinctly, so
shorter videos are engaging not only due to length but also
because they are better planned. However, we do not yet have
the data to investigate this question.
For all subsequent analyses, we grouped videos by length, or
else the effects of length usually overwhelmed the effects of
other production factors.
Recommendation: Instructors should segment videos into
short chunks, ideally less than 6 minutes.
Talking Head Is More Engaging
The videos for two of our courses—6.00x and PH207x—
were mostly PowerPoint slideshows and code screencasts.
However, some of those videos (60% for 6.00x and 25% for
PH207x) were edited to alternate between showing the in-
structor’s talking head and the usual slides/code display.
Figure 3 shows that, in both courses, students usually engaged
more with talking-head videos. In this figure and all subse-
quent figures that compare median engagement times, when
the medians of two groups look far enough apart (i.e., their er-
ror bars are non-overlapping), then their underlying distribu-
tions are also significantly different (p << 0.001) according
to a Mann-Whitney U test.
To check whether longer engagement times might be simply
due to students pausing or re-playing the video, we compared
the numbers of play/pause events in both groups and found
no significant differences.
Also, for videos that preceded an assessment problem, 6.00x students attempted 46% of the problems after watching a talking-head video, versus 33% after other videos (p << 0.001 according to a chi-square test
for independence). PH207x students attempted 33% of prob-
lems for both video groups, though.
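The two significance tests used here can be reproduced with scipy; a sketch with illustrative function names and inputs:

```python
from scipy import stats

def compare_engagement_times(times_a, times_b):
    """Mann-Whitney U test on two groups of engagement times
    (e.g., talking-head vs. non-talking-head sessions)."""
    return stats.mannwhitneyu(times_a, times_b, alternative='two-sided')

def compare_attempt_rates(attempted_a, total_a, attempted_b, total_b):
    """Chi-square test for independence on a 2x2 table of
    problem-attempt counts for two groups of sessions."""
    table = [[attempted_a, total_a - attempted_a],
             [attempted_b, total_b - attempted_b]]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    return chi2, p
```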
These findings also resonated with edX video producers we
interviewed, because they felt that a human face provided a
more “intimate and personal” feel and broke up the monotony
of PowerPoint slides and code screencasts. They also men-
tioned that their video editing was not done with any specific
pedagogical “design patterns” in mind: They simply spliced
in talking heads whenever the timing “felt right” in the video.
Since we have shown that this technique can improve engage-
ment, we have encouraged producers to take a more system-
atic approach to this sort of editing in the future. Open ques-
tions include when and how often to switch between talking
head shots and textual content. Perhaps video editing soft-
ware could detect transition points and automatically splice
in head shots. Finally, some people were concerned about the
jarring effect of switching repeatedly between talking head
and text, so a picture-in-picture view might work better.
Recommendation: Record the instructor’s head and then
insert into the presentation video at opportune times.
High Production Value Might Not Matter
Although 6.00x and PH207x were both taught by senior fac-
ulty at major research universities and had videos filmed in
roughly the same style—slides/code with optional talking
head—students engaged much more with 6.00x videos. The
two graphs in Figure 3 show that students engaged for nearly
twice as long on 6.00x videos between 6 and 12 minutes, and
for nearly 3x the time on 6.00x videos longer than 12 minutes.
When we presented these findings to edX video producers
and program managers who worked on those two courses,
their immediate reaction was that differences in production
value might have caused the disparities in student engage-
ment: 6.00x was filmed informally with the instructor sitting
at his office desk, while PH207x was filmed in a multi-million
dollar TV production studio.
The “talking head” images at the top of Figure 3 show that
the 6.00x instructor was filmed in a tight frame, often making
direct eye contact with the student, while the PH207x instruc-
tor was standing behind a podium, often looking around the
room and not directly at the camera. The edX production staff
mentioned that the 6.00x instructor seemed more comfortable
seated at his office having a personal one-on-one, office-hours
style conversation with the video watcher. Video producers
called this desirable trait “personalization”—the student feel-
ing that the video is being directed right at them, rather than at
an unnamed crowd. In contrast, the PH207x instructor looked
farther removed from the watcher because he was lecturing
from behind a podium in a TV studio.
The edX production staff worked with each instructor to find
the recording style that made each most comfortable, and the
PH207x instructor still preferred a traditional lecture format.
Despite his decades of lecturing experience and comfort with
the format, his performance did not end up looking engaging
on video. This example reinforces the notion that what works
well in a live classroom might not translate into online video,
even with a high production value studio recording.
Here the supposed constraints of a lower-fidelity setting—a
single close-up camera at a desk—actually led to more en-
gaging videos. However, it is hard to generalize from only
one pair of courses, since the effects could be due to differ-
ences in instructor skill. Ideally we would like to compare
more pairs of low and high production value courses², but
this was the only pair available in our data set.
Recommendation: Try filming in an informal setting where
the instructor can make good eye contact, since it costs less
and might be more effective than a professional studio.
Khan-Style Tutorials Are More Engaging
Now we focus on tutorials, which are step-by-step prob-
lem solving walkthroughs. Across all four courses, Khan-
style tutorial videos (i.e., an instructor drawing on a digital
tablet) were more engaging than PowerPoint slides and/or
code screencasts. We group slides and code together since
many tutorial videos feature both styles. Figure 4 shows that
students engaged for 1.5x to 2x as long with Khan-style tutorials. For videos preceding problems, 40% of Khan-style tutorial watching sessions were followed by a problem attempt, versus 31% for other tutorials (chi-square p << 0.001).

² Or, even better, record one instructor using both styles.

Figure 4. Median normalized engagement times vs. length for tutorial videos. Students engaged more with Khan-style tablet drawing tutorials (a.) than with PowerPoint slide and code screencast tutorials (b.). Error bars are approximate 95% confidence intervals for the true median [14].
This finding corroborates prior work that shows how free-
hand sketching facilitates more engaging dialogue [12] and
how the natural motion of human handwriting can be more
engaging than static computer-rendered fonts [4].
Video producers and program managers at edX also agreed
with this finding. In particular, they noticed how instructors
who sketched Khan-style tutorials could situate themselves
“on the same level” as the student rather than talking at the
student in “lecturer mode.” Also, one noted how a Khan-style
tutorial “encourages professors to use the ‘bar napkin’ style
of explanation rather than the less personal, more disjointed
model that PowerPoint—if unintentionally—encourages.”
However, Khan-style tutorials require more pre-production
planning than presenting slides or typing code into a text edi-
tor. The most effective Khan-style tutorials were those made
by instructors with clear handwriting, good drawing skills,
and careful layout planning so as not to overcrowd the can-
vas. Future research directions include how to best structure
Khan-style tutorials and how to design better authoring tools
for creating and editing them. Perhaps some best practices
from chalkboard lecturing could transfer to this format.
Recommendation: Record Khan-style tutorials when pos-
sible. If slides or code must be displayed, add emphasis by
sketching over the slides and code using a digital tablet.
Figure 5. Median engagement times for lecture videos recorded in front
of live classroom audiences. Students engaged more with lectures in
CS188.1x (a.), which were prepared with edX usage in mind, than with
lectures in 3.091x (b.), which were adapted from old lecture videos. Er-
ror bars are approximate 95% confidence intervals for true median [14].
Pre-Production Improves Engagement
So far, we have focused on production (i.e., filming) and post-
production (i.e., editing) techniques that drive engagement.
However, edX video producers we interviewed felt that the
pre-production (i.e., planning) phase had the largest impact
on the engagement of resulting videos. But since the output of
extensive pre-production is simply better planned videos, pro-
ducers cannot easily argue for its benefits by pointing out spe-
cific video features (e.g., adding motion via tablet sketches)
to suggest as best practices for instructors.
To show the effects of pre-production, we compared video
engagement for CS188.1x and 3.091x. Both are math/science
courses with instructors who are regarded as excellent class-
room lecturers at their respective universities. And both in-
structors wanted to record their edX lectures in front of a live
classroom audience to bring out their enthusiasm. However,
due to logistical issues, there was not enough time for the
3.091x instructor to record his lectures, so the video produc-
ers had to splice up an old set of lecture videos recorded for
his on-campus class in Spring 2011. This contrast sets up a
natural experiment where video recording styles are nearly
identical, but no pre-production could be done for 3.091x.
Figure 5 shows that students engaged more with CS188.1x
videos, especially longer ones. Also, for videos preceding
problems, 55% of CS188.1x watching sessions were followed
by a problem attempt, versus 41% for 3.091x (chi-square
p << 0.001).
This finding resonated strongly with edX video producers,
because they had always championed the value of planning
lectures specially for an online video format rather than just
chopping up existing classroom lecture recordings.
Figure 6. Median engagement times versus speaking rate and video
length. Students engaged the most with fast-speaking instructors. Error
bars are approximate 95% confidence intervals for the true median [14].
EdX staff who worked with the CS188.1x instructors reported
that even though they recorded traditional one-hour lectures
in front of a live classroom, the instructors carefully planned
each hour as a series of short, discrete chunks that could eas-
ily be edited later for online distribution. In contrast, the
3.091x production staff needed to chop up pre-recorded one-
hour lecture videos into short chunks, which was difficult
since the original videos were not designed with the MOOC
format in mind. There were often no clear demarcations be-
tween concepts, and sometimes material was presented out
of order or interspersed with time- and location-specific re-
marks (e.g., “Jane covered this in last week’s TA session in
room 36-144”) that broke the flow.
The main limitation here is that we had only one pair of
courses to compare, and they differed in instructors and sub-
ject matter. To improve confidence in these findings, we could
either find additional pairs to compare or, if the 3.091x in-
structor records new live lectures for edX, A/B test the en-
gagement of old and new videos for that course.
Recommendation: Invest in pre-production effort, even if
instructors insist on recording live classroom lectures.
Speaking Rate Affects Engagement
Students generally engaged more with videos where instruc-
tors spoke faster. To produce Figure 6, we split videos into
the usual five length buckets and also five equal-sized buck-
ets (quintiles) by speaking rate. Speaking rates range from
48 to 254 words per minute (mean = 156 wpm, SD = 31
wpm). Each line represents the median engagement times
for videos of a particular length range. As expected, stu-
dents engaged less with longer videos (i.e., those lines are
lower). Within a particular length range, engagement usu-
ally increases (up to 2x) with speaking rate. And for 6–12
minute videos, engagement dips in the middle bucket (145–
165 wpm); slower-speaking videos are more engaging than
mid-speed ones. Problem attempts also follow a similar trend,
but are not graphed due to space constraints.
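A sketch of the quintile bucketing behind Figure 6, assuming a per-video table with illustrative column names:

```python
import pandas as pd

def engagement_grid(videos):
    """videos: one row per video, with columns wpm, len_bucket, and
    median_engagement_min (the median over its 1x-speed sessions).
    Returns median engagement per (length bucket, wpm quintile) cell,
    mirroring the lines in Figure 6."""
    videos = videos.assign(wpm_quintile=pd.qcut(videos['wpm'], q=5))
    return (videos
            .groupby(['len_bucket', 'wpm_quintile'])['median_engagement_min']
            .median()
            .unstack())
```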
Some practitioners recommend 160 words per minute as the
optimum speaking rate for presentations [15], but at least in
our courses, faster-speaking instructors were even more en-
gaging. One possible explanation is that the 160 wpm rec-
ommendation (first made in 1967) was for live lectures, but
students watching online can actually follow along with much
faster speaking rates.
The higher engagement for faster-speaking videos might also
be due to students getting confused and re-playing parts.
However, this is unlikely since we found no significant differ-
ences in the numbers of play and pause events among videos
with different speaking rates.
To hypothesize possible explanations for the effects in Fig-
ure 6, we watched a random sample of videos in each speak-
ing rate bucket. We noticed that fast-speaking instructors con-
veyed more energy and enthusiasm, which might have con-
tributed to the higher engagement for those videos. We had no
trouble understanding even the fastest-speaking videos (254
wpm), since the same information was also presented visu-
ally in PowerPoint slides. In contrast, instructors in the mid-
dle bucket (145–165 wpm) were the least energetic. For the
slowest videos (48–130 wpm), the instructor was speaking
slowly because he was simultaneously writing on the black-
board; the continuous writing motion might have contributed
to higher engagement on those versus mid-speed videos.
Note that speaking rate is merely a surface feature that corre-
lates with enthusiasm and thus engagement. Thus, speeding
up an unenthusiastic instructor might not improve engage-
ment. So our recommendation is not to force instructors to
speak faster, but rather to bring out their enthusiasm and re-
assure them that there is no need to artificially slow down.
Video producers at edX mentioned that, whenever possible,
they tightly edit in post-production to remove instances of
“umm”, “uhh”, filler words, and other pauses, to make the
speech more crisp. Their philosophy is that although speech
pauses are beneficial in live lectures, they are unnecessary on
video because students can always pause the video.
Recommendation: Work with instructors to bring out their
natural enthusiasm, reassure them that speaking fast is okay,
and edit out pauses and filler words in post-production.
Students Engage Differently With Lectures And Tutorials
Lecture videos usually present conceptual (declarative)
knowledge, whereas tutorials present how-to (procedural)
knowledge. Figure 7 shows that students typically watch only 2 to 3 minutes of each tutorial video, regardless of the
video’s length. Figure 8 shows that students re-watch tutori-
als more frequently than lectures.
These findings suggest that students will often re-watch and
jump to relevant parts of longer tutorial videos. Adding hy-
perlink bookmarks or visual signposts on tutorial videos, such
as big blocks of text to signify transitions, might facilitate
skimming and re-watching. In contrast, students expect a lec-
ture to be a continuous stream of information, so instructors
should provide a good first-time watching experience.
Figure 7. Median engagement times versus video length for lecture and
tutorial videos. Students engaged with tutorials for only 2 to 3 minutes,
regardless of video length, whereas lecture engagement rises and falls
with length (similar to Figure 2). Error bars are approximate 95% con-
fidence intervals for the true median [14].
Figure 8. Percentage of re-watch sessions – i.e., not a student’s first time
watching a video. Tutorials were more frequently re-watched than lec-
tures; and longer videos were more frequently re-watched. (Binomial
proportion confidence intervals are so tiny that error bars are invisible.)
More generally, both our quantitative findings and interviews
with edX staff indicate that instructors should adopt different
production strategies for lectures and tutorials, since students
use them in different ways.
Recommendation: For lecture videos, optimize the first-
time watching experience. For tutorials, length does not
matter as much, but support re-watching and skimming.
LIMITATIONS
This paper presents a retrospective study, not a controlled ex-
periment. Also, we had access to the full server logs for only
seven Fall 2012 edX courses, which were all math and science
focused. Of those, we picked four courses with diverse pro-
duction styles, subjects, and from different universities (Ta-
ble 2). To improve external validity, these analyses should be
replicated on additional, more diverse courses.
Our engagement findings might not generalize to all online
video watchers, since edX students in the first Fall 2012
batch, who are more likely to be self-motivated learners and
technology early adopters, might not be representative of the
general online video watching population.
As we mentioned in the METHODOLOGY section, we cannot
measure a student’s true engagement with videos just from
analyzing server logs. Our proxies—engagement time and
problem attempts—might not be representative of true en-
gagement. For instance, a student could be playing a video
in the background while browsing Facebook. In the future,
running a controlled lab study will provide richer qualitative
insights about true engagement, albeit at small scale.
Also, we cannot track viewing activities of students who
downloaded videos and watched offline. We know that the
majority of students watched videos online in the edX video
player, since the numbers in the “Students” column of Table 2
closely match the total enrollment numbers for each course.
However, we do not have data on which students downloaded
videos, and whether their behaviors differ from those who
watched online.
Our data set contains only engagement data about entire
videos. We have not yet studied engagement within videos
such as which specific parts students are watching, skipping,
or re-watching. However, we are starting to address this lim-
itation in ONGOING WORK (see next section).
Lastly, it is important not to draw any conclusions about stu-
dent learning solely from our findings about video engage-
ment. MOOCs contain many components that impact learn-
ing, and different kinds of students value different ones. For
instance, some learn more from discussion forums, others
from videos, and yet others from reading external Web pages.
The main relationship between video engagement and learn-
ing is that the former is often a prerequisite for the latter; if
students are watching a video only for a short time, then they
are unlikely to be learning much from it.
ONGOING WORK: WITHIN-VIDEO ENGAGEMENT
An alternative way to understand student engagement with
MOOC videos is to measure how students interact with spe-
cific parts of the video. We have recently begun to quantify
two dimensions of within-video interaction:
Interactivity – How often do students pause the video
while watching? To measure the degree of interactivity,
we compute the mean number of pause events per second,
per unique student. This metric controls for variations in
viewer counts and video lengths. High interactivity could
indicate more active engagement with the video content.
Selectivity – Do students selectively pause more at specific
parts of the video than others? This behavior might reflect
uneven points of interest within the video. As a proxy for
selectivity, we observe how the frequency of pause events
varies in different parts of the video. Specifically, we com-
pute the standard deviation of pause events across all sec-
onds in a video. Higher-selectivity videos are those where students pause much more at some parts than at others. (Both metrics are sketched in code below.)
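In code, the two metrics amount to the following sketch; we assume a per-second array of pause counts for each video, and normalizing selectivity per student is our illustrative choice here, made to control for viewer counts.

```python
import numpy as np

def interactivity(pause_counts_per_second, num_students):
    """Mean pause events per second of video, per unique student."""
    counts = np.asarray(pause_counts_per_second, dtype=float)
    return counts.mean() / num_students

def selectivity(pause_counts_per_second, num_students):
    """Standard deviation of per-student pause counts across all
    seconds of the video; higher values mean pauses cluster at
    specific points."""
    counts = np.asarray(pause_counts_per_second, dtype=float) / num_students
    return counts.std()
```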
Here are two preliminary sets of findings. However, we have
not yet interviewed edX production staff to get their interpre-
tations or recommendations.
Figure 9. Students interacted (paused) more while watching tutorial
videos than lecture videos.
Figure 10. Students usually paused more selectively when watching tu-
torial videos than lecture videos.
Tutorial watching is more interactive and selective
Figure 9 shows that students interacted (paused) more within
tutorial videos than lecture videos. This behavior might re-
flect the fact that tutorial videos contain discrete step-by-step
instructions that students must follow, whereas lectures are
often formatted as one continuous stream of content.
Figure 10 shows that students usually paused tutorial videos
more selectively than lecture videos. This behavior might in-
dicate that specific points in a tutorial video – possibly bound-
aries between distinct steps – are landmarks where students
pause to reflect on or practice what they have just learned.
This data could be used to automatically segment videos into
meaningful chunks for faster skimming and re-watching.
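As a purely speculative illustration of that idea (not a method we have evaluated), one could flag seconds with unusually dense pausing as candidate chunk boundaries:

```python
import numpy as np

def candidate_boundaries(pause_counts_per_second, z_threshold=2.0):
    """Return the seconds whose pause counts sit more than
    z_threshold standard deviations above the video's mean;
    the threshold is an arbitrary illustrative choice."""
    counts = np.asarray(pause_counts_per_second, dtype=float)
    std = counts.std()
    if std == 0.0:
        return np.array([], dtype=int)
    z = (counts - counts.mean()) / std
    return np.flatnonzero(z > z_threshold)
```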
Khan-style tutorials are more continuous
Figure 11 shows that students paused slides/code tutorials
more selectively than Khan-style tutorials. One likely expla-
nation is that Khan-style videos flow more continuously, so
there are not as many discrete landmarks for pausing. In con-
trast, instructors of slides/code tutorials gradually build up
text on a slide or a chunk of code, respectively, and then ex-
plain the full contents for a while before moving onto the next
slide or code snippet; those are opportune times for pausing.
Figure 11. Students paused more selectively when watching slides/code
tutorials than Khan-style tutorials.
Future Directions
Analyzing students’ video interaction patterns allows educa-
tors to better understand what types of online videos encour-
age active interaction with content. The preliminary find-
ings in this section provide an alternative perspective using
micro-level, second-by-second interaction data that comple-
ments the engagement time analyses in the rest of this paper.
A possible future direction is to explore why students pause at
certain points within the video. There are conflicting factors
at play: Students might pause more because they consider a
point to be important, or they might find the given explanation
to be confusing and decide to re-watch until they understand
it. Direct student observation in a lab setting could address
these questions and complement our quantitative findings.
CONCLUSION
We have presented, to our knowledge, the largest-scale study
of video engagement to date, using data from 6.9 million
video watching sessions across four edX courses.
Our findings (Table 1) reflect the fact that, to maximize stu-
dent engagement, instructors must plan their lessons specifi-
cally for an online video format. Presentation styles that have
worked well for centuries in traditional in-person lectures do
not necessarily make for effective online educational videos.
More generally, whenever a new communication medium ar-
rives, people first tend to use it just like how they used existing
media. For instance, many early television shows were sim-
ply radio broadcasts filmed on video, early digital textbooks
were simply scanned versions of paper books, and the first
online educational videos were videotaped in-person lectures.
As time progresses, people eventually develop creative ways
to take full advantage of the new medium. The findings from
our study can help inform instructors and video producers on
how to make the most of online videos for education.
Acknowledgments: Thanks to Anant Agarwal and our edX
interview subjects for enabling this research, Olga Stroilova
for helping with data collection, and Rob Miller for feedback.
REFERENCES
1. Khan Academy YouTube Channel.
http://www.youtube.com/user/khanacademy/about.
2. Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S.,
Ho, A. D., and Seaton, D. T. Studying learning in the
worldwide classroom: Research into edX’s first MOOC.
Research and Practice in Assessment 8 (Summer 2013).
3. Coetzee, D., Fox, A., Hearst, M. A., and Hartmann, B.
Should Your MOOC Forum Use a Reputation System?
CSCW ’14, ACM (New York, NY, USA, 2014).
4. Cross, A., Bayyapunedi, M., Cutrell, E., Agarwal, A.,
and Thies, W. TypeRighting: Combining the Benefits of
Handwriting and Typeface in Online Educational
Videos. CHI ’13, ACM (New York, NY, USA, 2013).
5. Google. How Visits are calculated in Analytics.
https://support.google.com/analytics/answer/2731565?hl=en.
6. Google. YouTube Analytics. http://www.youtube.com/yt/playbook/yt-analytics.html#details.
7. Haber, J. xMOOC vs. cMOOC.
http://degreeoffreedom.org/xmooc-vs-cmooc/, 2013.
8. Ilioudi, C., Giannakos, M. N., and Chorianopoulos, K.
Investigating Differences among the Commonly Used
Video Lecture Styles. In Proceedings of the Workshop
on Analytics on Video-based Learning, WAVe ’13
(2013).
9. Kizilcec, R. F., Piech, C., and Schneider, E.
Deconstructing disengagement: analyzing learner
subpopulations in massive open online courses. In
Proceedings of the Third International Conference on
Learning Analytics and Knowledge, LAK ’13, ACM
(New York, NY, USA, 2013), 170–179.
10. Levasseur, D. G., and Sawyer, J. K. Pedagogy Meets
PowerPoint: A Research Review of the Effects of
Computer-Generated Slides in the Classroom. Review of
Communication 6, 1 (2006), 101–123.
11. Mayer, R. E. Multimedia Learning. Cambridge
University Press, 2001.
12. Roam, D. The Back of the Napkin (Expanded Edition):
Solving Problems and Selling Ideas with Pictures.
Portfolio Hardcover, 2009.
13. Seaton, D. T., Bergner, Y., Chuang, I., Mitros, P., and
Pritchard, D. E. Who does what in a massive open
online course? Communications of the ACM (2013).
14. Wade, A., and Koutoumanou, E. Non-parametric tests:
Confidence intervals for a single median. https://epilab.ich.ucl.ac.uk/coursematerial/statistics/non_parametric/confidence_interval.html.
15. Williams, J. R. Guidelines for the use of multimedia in
instruction. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting 42, 20 (1998),
1447–1451.
16. Wistia. Does length matter? It does for video!
http://wistia.com/blog/does-length-matter-it-does-for-video, Sept. 2013.
... Many educators believe student engagement is crucial to the teaching and learning process because it affects students' accomplishments, learning, and retention. Guo et al (2014) concurred that in online learning, as well as other types of learning, student engagement is seen as a vital prerequisite for learning to be achieved successfully. ...
Chapter
Open online courses are often used in formal education to provide added value for the students by helping gain new skills and competences or as extra material tasks in addition to the formal education course. The course described in this paper is developed and piloted fully open online. It is also integrated into formal education. This paper presents a case on integration of open databases course into formal education together with student feedback on the course quality and effectiveness of course delivery process.KeywordsMOOCDatabasesFormal educationOpen resources
Article
In response to the abrupt shift to remote learning in 2020, the authors offer evidence-based practices for virtual instruction, along with tips, suggestions, and technologies for immediate implementation.
Chapter
The COVID-19 pandemic has resulted in emergency remote teaching taking place globally. Despite the abrupt and rapid transition as well as the temporary nature of emergency remote teaching, it is possible to implement quality online teaching. Instructors can benefit from a review of findings and strategies found in online learning literature. This chapter discusses the challenges of emergency remote teaching and recommends suitable teaching strategies that can be quickly implemented by instructors. The focus is on strategies that can help to engage students by promoting learner-content interaction, learner-instructor interaction, and learner-learner interaction. This chapter also discusses strategies that can build a community of inquiry during emergency remote teaching. Future research directions are proposed.
Chapter
Online learning is considered to be self-regulated learning, which means that the learner has to initiate, manage, and sustain the learning process. Past studies suggested that the use of integrated videos could lower the extraneous cognitive load of learners. However, it was not clear whether this benefit is produced from the closer spatial proximity between the two sources of information or the integration (rather than segmentation) of the two sources of information. The current study examined how spatial distance and integration of an instructor’s close-up video on PowerPoint slides reduce cognitive load when the material is low or high in terms of difficulty. Four conditions were formed by combining two levels of integration (high vs. low) and spatial proximity (high vs. low). Participants were randomly assigned to 1 out of the 4 conditions to watch one video of easy material and one video of difficult material in a counterbalanced order. Perceived mental effort and learning performance were measured for each condition. Results showed that there was a significant effect of task difficulty for both recall and transfer tests. Moreover, there was a two-way interaction with difficulty and spatial distance on the transfer test: When the material was difficult, participants performed poorer when the instructor was presented near the textual information than when she was presented further away. There was no effect of spatial distance when the material was easy. Future designers can consider customizing the online learning systems based on learners’ experience with the content and their familiarization with online technologies, as well as other factors to increase the motivationMotivation of the learners.
Chapter
Despite the successful, widespread adoption of MOOCs in recent years, their low retention rates cast doubt on their effectiveness. This research analyzes the influence of gamification and course video lengths on the attrition rates of three MOOCs, where students received reinforcing awards as they answered assessment questions. The variables gender, educational level, previous experience in MOOCs, and age were considered as covariates. A factorial design, a survival analysis, and a risk analysis for four weeks were used to determine the percentage of attrition from the MOOC. The results indicated that video duration and gamification decrease attrition. The most significant predictor of survival is the use of reinforcers in gamification. The most significant predictor of the risk of attrition is the number of videos seen between weeks one and two. In the longitudinal study, weeks one and two presented the highest risks of desertion, regardless of the manipulated factors. Finally, the discussion presents pedagogical strategies that directly benefit the survival rates in MOOCs and notes the differences between our findings and others in the existing literature.KeywordsGamificationDropoutMOOCVideo lengthHigher educationEducational innovation
Chapter
This work presents a successfully implemented digital model for a Massive Flexible Digital Masterclass (MFDM) and a general overview of the pedagogy followed. The model facilitates developing competencies and transversal skills necessary to solve real-world industrial challenges. Since COVID19 accelerated the need for efficient and high-quality digital courses, digital technological tools became fundamental for the model's success. The enabling technologies presented in this work that help maintain the students' motivation and engagement include interactive video services and online games, simulations, and Chatbots. These tools provide timely feedback that enables developing competencies and skills to search for information, delve into acquired knowledge, and apply learned concepts to produce solutions. This work's pedagogic model was implemented and assessed in the Mechanical Vibrations course (2019–2020). We followed the ABET quality criteria for engineering courses (students' perception assessment) and compared the students' academic achievement. The results complied with ABET's high-quality standards and showed that the students were engaged and interested throughout the course.KeywordsMassive digital classroomsChallenge-based learningDigital educational toolsChatbotFeedbackEducational innovationHigher education
Article
This quasi-experimental study examines the effect of short instruction videos on students’ business statistics learning. Two hundred and thirty-one Dutch students attended 6-week online seminars on Business Statistics. One hundred and nineteen students were in an experimental group, and 112 in a control group. Students in the experimental group watched short instructional videos and studied online quizzes at their own pace. In the control group, students followed teachers’ instructions throughout the seminars. It was found students watching short videos significantly outperformed those following teachers’ virtual instruction. Short videos were especially useful for those who were weak at math. The research sheds light on the design of hybrid learning, particularly for business statistics education at the university level. • HIGHLIGHTS • A quasi-experimental research to examine the effects of short instructional videos on students’ statistics learning performance vs. the virtual lectures with teachers. • Evidence of the benefits of short videos in statistics education for students who are good at math. • Practical experiences sharing for designers of instructional videos. • Recommendations for creating short instructional videos in higher education.
Conference Paper
In response to the COVID-19 crisis and government strategies of quarantine, bans on gatherings, or even a complete lockdown, many schools and universities transferred from traditional face-to-face learning to online remote learning. This increased the need to develop web- or mobile-based applications to support e-learning requirements. Moreover, the need for providing academic institutions with platforms that manage e-lectures and e-exams during the pandemic or any other crises like wars and disasters. This paper provides a web-based learning system that controls course content in the University of Basrah in Iraq. The proposed e-learning system has several features, such as playing video lectures beside text lectures, providing online quizzes, and creating e-certificates. The system was developed according to the agile software development method with the Scrum technique and using HTML, CSS, PHP programming language, with MySQL for its database. In addition, the suggested e-learning system includes three authorized users: administrator, teacher, and students. Finally, an evaluation session was conducted to measure the usability and effectiveness of the system. The results showed that incorporating video lectures and e-quizzes into the e-learning system enhanced students’ engagement and improved the interaction between students and teachers during the pandemic.
Article
The ACOUCOU platform is a web-based, interactive acoustics training platform that offers a set of free educational materials in various technical fields of acoustics. The materials are designed to serve as a modern self-development tool for students and engineers, and as a comprehensive solution for professional education in the workplace. They can also support teachers, company researchers, and academic lecturers in teaching acoustics. The platform is part of a strategic plan to expand and strengthen web-based acoustics knowledge tools and to support the development of innovative teaching methods based on attractive, effective delivery of digital content and on best practices at national and international levels. It addresses the shortage of experts in the acoustics field and the growing needs of the market.
Conference Paper
Full-text available
Many educational organizations are motivated to create and share instructional videos, but there are no guidelines about presentation styles. In practice, the presentation style of video lectures ranges from simple video capture of classroom teaching up to highly elaborate authoring of video presentations that include close-ups and video cuts of instructors, slides, animations, and interactive drawing boards. There is limited research on the effect of each presentation style on student learning performance and attitudes. In this work, we examine the effects of video presentation styles in supporting the teaching of mathematics in secondary education. In addition to a control group that studied from a paper book, two groups of students attended two distinct styles of video lectures: 1) video capture of classroom teaching (talking-head style), and 2) close-up video capture of an interactive drawing board with voice-over (Khan style). The participants were 36 students (15 boys and 21 girls, 16 years old), who received the three treatments (paper book, talking head, Khan style) over three math modules in three weeks. We found that learning effects appear only after the second week and that the talking-head style was more effective than the book for complex topics.
Conference Paper
Full-text available
As MOOCs grow in popularity, learners' relatively low completion rates have been a central criticism. This focus on completion rates, however, reflects a monolithic view of disengagement that does not allow MOOC designers to target interventions or develop adaptive course features for particular subpopulations of learners. To address this, we present a simple, scalable, and informative classification method that identifies a small number of longitudinal engagement trajectories in MOOCs. Learners are classified based on their patterns of interaction with video lectures and assessments, the primary features of most MOOCs to date. In an analysis of three computer science MOOCs, the classifier consistently identifies four prototypical trajectories of engagement, the most notable being learners who stay engaged through the course without taking assessments. These trajectories also provide a useful framework for comparing learner engagement between different course structures or instructional approaches. We compare learners in each trajectory and course across demographics, forum participation, video access, and reports of overall experience. These results inform a discussion of future interventions, research, and design directions for MOOCs. Potential improvements to the classification mechanism are also discussed, including the introduction of more fine-grained analytics.
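The classification idea is simple enough to sketch: represent each learner as a vector of weekly engagement states and cluster the vectors. A minimal version using k-means follows; the state encoding (0 = absent, 1 = viewed lectures only, 2 = did assessments), the choice of k = 4, and the synthetic data are assumptions for illustration, not the authors' exact pipeline.

# Sketch: cluster learners into engagement trajectories from weekly activity.
# Encoding and k = 4 are illustrative assumptions, not the paper's exact scheme.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical data: 500 learners x 8 course weeks, each cell a weekly state.
weekly_states = rng.integers(0, 3, size=(500, 8))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=1)
trajectory = kmeans.fit_predict(weekly_states)

for c in range(4):
    # Each centroid is the prototypical weekly pattern of one trajectory.
    print(f"trajectory {c}: mean weekly state {kmeans.cluster_centers_[c].round(1)}")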
Article
Full-text available
The present essay offers a comprehensive review of the effects of computer-generated slides in the classroom, beginning with an overview of the ongoing debate over whether such slides help or hinder learning. To date, much of this debate has been testimonial in nature; in an effort to move beyond testimonials, this essay attempts to ground the pedagogical debate over PowerPoint in various learning theories. Extant research on such slides is examined in four subcategories: (1) student reactions; (2) learning outcomes; (3) learning styles; and (4) slide variation effects. The essay closes with a discussion of how various research findings help inform (but by no means settle) the debate over PowerPoint and pedagogy.
Article
Full-text available
“Circuits and Electronics” (6.002x), which began in March 2012, was the first MOOC developed by edX, the consortium led by MIT and Harvard. Over 155,000 students initially registered for 6.002x, which was composed of video lectures, interactive problems, online laboratories, and a discussion forum. As the course ended in June 2012, researchers began to analyze the rich sources of data it generated. This article describes both the first stage of this research, which examined the students’ use of resources by time spent on each, and a second stage that is producing an in-depth picture of who the 6.002x students were, how their own background and capabilities related to their achievement and persistence, and how their interactions with 6.002x’s curricular and pedagogical components contributed to their level of success in the course.
Article
Full-text available
Massive open online courses (MOOCs) collect valuable data on student learning behavior: essentially complete records of all student interactions in a self-contained learning environment, with the benefit of large sample sizes. We present an overview of how the 108,000 participants behaved in 6.002x - Circuits and Electronics, the first course in MITx (now edX). We divide participants into tranches based on the extent of their assessment activities, ranging from browsers (who constituted ~76% of the participants but accounted for only 8% of the total time spent in the course) to certificate-earners (7% of the participants who accounted for 60% of the total time). We examine how the certificate earners allocated their time amongst the various course components and study what fraction of each they accessed. We analyze transitions between course components, showing how student behavior differs when solving homework vs. exam problems. This work lays the foundation for future studies of how use of various course components, and transitions among them, influence learning in MOOCs.
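A compact way to reproduce this kind of tranche analysis on one's own clickstream logs is sketched below. The column names, cutoff rules, and sample rows are hypothetical stand-ins for the paper's definitions.

# Sketch: divide participants into tranches by extent of assessment activity.
# Column names and cutoffs are hypothetical, not the paper's exact definitions.
import pandas as pd

logs = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "problems_attempted": [0, 3, 40, 120],
    "earned_certificate": [False, False, False, True],
    "minutes_in_course": [12, 95, 840, 3600],
})

def tranche(row):
    if row.earned_certificate:
        return "certificate-earner"
    if row.problems_attempted == 0:
        return "browser"
    return "partial participant"

logs["tranche"] = logs.apply(tranche, axis=1)
# Share of participants vs. share of total time, per tranche: this is the
# contrast the paper draws (76% browsers vs. 8% of time; 7% earners vs. 60%).
summary = logs.groupby("tranche").agg(
    participants=("user_id", "count"),
    total_minutes=("minutes_in_course", "sum"),
)
print(summary)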
The use of multimedia in instructional presentations has mushroomed in recent years due to the increased capabilities of computers and the inclusion of multimedia support in most CBT authoring systems and presentation tools. While providing a wealth of opportunity to instructional developers, multimedia is often used ineffectively and may cause a decrease in learning performance. Many claims have been made about the added effectiveness that multimedia can bring to training programs and presentations. The purpose of this paper is to provide research-based guidelines for the use of multimedia that can be applied by multimedia developers who may not be instructional technologists.
Article
Pedagogy 6.1 (2006) 1-6. Think of the narratives of teaching with which you are most familiar: memoirs like Jay Parini's The Art of Teaching, reviewed in this issue, or Jane Tompkins's A Life in School: What the Teacher Learned, and especially movies, like Mona Lisa Smile or Dead Poets Society. Who is the teacher in these narratives? In what ways is he or she us? Others have critiqued narratives such as these (see especially Brunner 1994 and Bauer 1998), and it is not our intention to rehearse those arguments here. We would, however, invite you to examine the metaphors for teaching, and especially for teacher identity, that narratives such as these promote, particularly in light of the provocations to reflect on our identities as teachers offered in this issue. In Professing and Pedagogy: Learning the Teaching of English (reviewed in this issue), Shari Stenberg (2005: 71) focuses on one metaphor for the teacher that permeates the genre: teacher as hero. Stenberg claims that in this metaphor the teacher is seen as complete, after having undergone a period of "training" or apprenticeship and after finally "owning" the material, that is, the knowledge or methods of the field. Stenberg argues that such metaphors for teaching (and others like it, such as the pervasive metaphor of teacher as scholar) rely on a view of teacher development that focuses "primarily on consumption, whether that means acquiring a set of practices or a body of theory" (54). In other words, the development presumably stops some time in graduate school, once the new teacher has consumed enough theory, pedagogical or literary, or at least enough nuts-and-bolts "what works" skills to command the authority and identity of teacher. Kristine Johnson's commentary in this issue complements Stenberg's exploration of teacher development. In "The Millennial Teacher: Metaphors for a New Generation," Johnson, a new graduate teaching assistant, tries on her own metaphors for teacher identity (teacher as cultural critic, teacher as midwife, and teacher as resource) within the larger context of a generational shift that places her within the same cultural moment as the students she is teaching. The narrative interrogates her first year of teaching, using composition theory in particular to reflect on the identities she experimented with, and in some cases rejected. She frames her exploration in terms of effect and identity.
Conference Paper
Massive open online courses (MOOCs) rely primarily on discussion forums for interaction among students. We investigate how forum design affects student activity and learning outcomes through a field experiment with 1101 participants on the edX platform. We introduce a reputation system, which gives students points for making useful posts. We show that, as in other settings, use of forums in MOOCs is correlated with better grades and higher retention. Reputation systems additionally produce faster response times and larger numbers of responses per post, as well as differences in how students ask questions. However, reputation systems have no significant impact on grades, retention, or the students' subjective sense of community. This suggests that forums are essential for MOOCs, and reputation systems can improve the forum experience, but other techniques are needed to improve student outcomes and community formation. We also contribute a set of guidelines for running field experiments on MOOCs.
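To make the intervention concrete, here is a minimal sketch of a forum reputation system of the kind described: points accrue to an author when their post is marked useful. The point value and data model are hypothetical illustrations, not the study's implementation.

# Sketch of a forum reputation system: points for posts marked useful.
# Point values and structure are hypothetical illustrations.
from collections import defaultdict

reputation = defaultdict(int)
posts = {}  # post_id -> author

def publish(post_id: str, author: str) -> None:
    posts[post_id] = author

def mark_useful(post_id: str, points: int = 10) -> None:
    # Award reputation to the author of the post that was marked useful.
    reputation[posts[post_id]] += points

publish("p1", "alice")
mark_useful("p1")
mark_useful("p1")
print(reputation["alice"])  # 20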
Conference Paper
Recent years have seen enormous growth in online educational videos, spanning K-12 tutorials to university lectures. As this content has grown, so too has the number of presentation styles. Some educators have a strong allegiance to handwritten recordings (using pen and tablet), while others use only typed (PowerPoint) presentations. In this paper, we present the first systematic comparison of these two presentation styles and how they are perceived by viewers. Surveys on edX and Mechanical Turk suggest that viewers enjoy handwriting because it is personal and engaging, yet they also enjoy typeface because it is clear and legible. Based on these observations, we propose a new presentation style, TypeRighting, which combines the benefits of handwriting and typeface: each phrase is written by hand but fades into typeface soon after it appears. Our surveys suggest that about 80% of respondents prefer TypeRighting over handwriting, and the same fraction prefer TypeRighting over typeface for videos in which the handwriting is sufficiently legible.
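The fade itself is just a cross-dissolve between two renderings of the same phrase. A minimal sketch using Pillow follows, assuming you already have one frame of the handwritten phrase and one of the same phrase typeset; the file names are placeholders.

# Sketch of the TypeRighting fade: cross-dissolve a handwritten frame into
# a typeset frame of the same phrase. File names are placeholders.
from PIL import Image

hand = Image.open("phrase_handwritten.png").convert("RGB")
typed = Image.open("phrase_typeset.png").convert("RGB").resize(hand.size)

# 30 frames, alpha sweeping from 0 (pure handwriting) to 1 (pure typeface).
frames = [Image.blend(hand, typed, alpha=i / 29) for i in range(30)]
frames[0].save("fade.gif", save_all=True, append_images=frames[1:], duration=33)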