Review
The current landscape of learning analytics in higher education
Olga Viberg a,*, Mathias Hatakka b, Olof Bälter a, Anna Mavroudi a
a The Royal Institute of Technology (KTH), School of Electrical Engineering and Computer Science, Lindstedtsvägen 3, 10044 Stockholm, Sweden
b Örebro University School of Business, Informatics, Fakultetsgatan 1, 70381 Örebro, Sweden
ARTICLE INFO
Keywords: Learning analytics; Literature review; Higher education; Research methods; Evidence
ABSTRACT
Learning analytics can improve learning practice by transforming the ways we support learning processes. This study is based on the analysis of 252 papers on learning analytics in higher education published between 2012 and 2018. The main research question is: What is the current scientific knowledge about the application of learning analytics in higher education? The focus is on research approaches, methods and the evidence for learning analytics. The evidence was examined in relation to four earlier validated propositions: whether learning analytics i) improve learning outcomes, ii) support learning and teaching, iii) are deployed widely, and iv) are used ethically. The results demonstrate that overall there is little evidence of improvements in students' learning outcomes (9%) or in learning support and teaching (35%). Similarly, little evidence was found for the third (6%) and the fourth (18%) proposition. Although the identified potential for improving learning practice is high, we cannot currently see much transfer of this potential into higher educational practice over the years. However, the analysis of the existing evidence for learning analytics indicates a shift towards a deeper understanding of students' learning experiences in recent years.
1. Introduction
The pervasive integration of digital technology into higher education (HE) influences both teaching and learning practices, and allows access to data, mainly available from online learning environments, that can be used to improve students' learning. Online learning, facilitating the use of asynchronous and synchronous interaction and communication within a virtual environment (Broadbent & Poon, 2015), has succeeded in becoming an integral part of HE, and now it "needs to turn its focus, from providing access to university education, to increasing its quality" (Lee, 2017, p. 15). To this end, HE institutions are implementing Learning Analytics (LA) systems to better understand and support student learning (Schumacher & Ifenthaler, 2018). This study presents a literature review with the objective of mapping the current landscape of contemporary LA research in HE.
There is an evolving interest in LA among not only practitioners but also researchers in Technology-Enhanced Learning (TEL). LA emerges as a fast-growing and multi-disciplinary area of TEL (Ferguson, 2012), which forms its own domain (Strang, 2016). In LA, information about learners and learning environments is used to "access, elicit, and analyse them for modelling, prediction, and optimization of learning processes" (Mah, 2016, p. 288).
Definitions of LA vary. Some define it explicitly in terms of the use of student-generated data for the prediction of educational outcomes, with the purpose of tailoring education (Junco & Clem, 2015; Xing, Guo, Petakovic, & Goggins, 2015). Others define LA as a means to help educators examine, understand, and support students' study behaviours and change their learning environments (Drachsler & Kalz, 2012; Rubel & Jones, 2016). While there is no generally accepted definition of LA, many refer to it as "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs" (Long & Siemens, 2011, p. 34).
LA, academic analytics, and educational data mining (EDM) are closely related research areas. The goal of academic analytics is to support institutional, operational, and financial decision-making processes (Lawson, Beer, Rossi, Moore, & Fleming, 2016), while the overall purpose of LA and EDM is to understand how students learn. Based on the analysis of large-scale educational data, LA and EDM aim to support research and practice in education (Siemens & Baker, 2012). Both EDM and LA reflect the emergence of data-intensive approaches to education, and there are similarities between EDM and LA, which suggests several areas of overlap (Siemens & Baker, 2012). There are also, however, several distinctions between them (Siemens & Baker, 2012). First, one key distinction concerns the type of discovery that is prioritised: EDM has a primary focus on automated discovery, whereas LA
has a stronger focus on leveraging human judgement. Second, EDM models are often used as the basis for automated adaptation, conducted by a computer system, whereas LA models are often developed to inform instructors and learners. Third, EDM researchers use reductionist frameworks: they reduce phenomena to components and focus on the analysis of individual components and relationships between them. By contrast, LA researchers have a stronger focus on understanding complex systems as wholes. In this study, we are particularly interested in how LA research has been employed across different higher educational settings, disciplines, institutional types and states.
1.1. Research background
Although the LA research field is still in its infancy, it has already been the focus of a number of literature reviews, which are helpful, but are mainly aimed at researchers and not practitioners (Ferguson & Clow, 2017). Some of the reviews focus explicitly on the use of LA in higher educational settings (e.g., Avella, Kebritchi, Nunn, & Kanai, 2016; Clow, 2013; Ferguson & Clow, 2017; Ihantola et al., 2016; Leitner, Khalil, & Ebner, 2017; Sin & Muthu, 2015), whereas others focus on educational contexts in general (e.g., Dawson, Gašević, Siemens, & Joksimovic, 2014; Ferguson, 2012; Ferguson et al., 2016; Jivet, Scheffel, Specht, & Drachsler, 2018; Nistor, Derntl, & Klamma, 2015; Papamitsiou & Economides, 2014; Peña-Ayala, 2018).
The reviews focusing explicitly on HE have already identified several important aspects of LA research to be considered. Avella et al. (2016), for example, examined LA methods, benefits for and challenges in HE. Some of the common methods used are data visualisation, social network analysis, prediction and relationship mining. Whilst data tracking, collection and evaluation were highlighted as some of the challenges associated with LA research, targeted student learning outcomes and behaviour were mentioned as potential benefits. Another recent field overview (Leitner, Khalil, & Ebner, 2017) focused on the analysis of current research trends in LA, its limitations, methods and key stakeholders. The results showed that the usage of massive open online courses (MOOCs), enhancement of learning performance, student behaviour, and benchmarking of learning environments were the key areas of LA research focus; the limitations included the time needed to prepare data or obtain results, the size of the available dataset and examined group, and ethical reasons. Among the methods used, prediction, distillation of data for human judgement, and outlier detection were found to be the most common in the HE domain. The main identified stakeholders are researchers, rather than learners. This contradicts the definition of LA adopted in our study and by a large number of LA scholars (Long & Siemens, 2011), which emphasises mainly learners and their learning environments.
One of the latest reviews (Ferguson & Clow, 2017) explored the evidence of whether LA improve learning practice in HE based on four propositions of LA: 1) they improve learning outcomes, 2) they support learning and teaching, 3) they are deployed widely, and 4) they are used ethically. Based on these propositions, the authors pinpointed that many studies did not contain strong evidence for or against any of these propositions. Most of the evidence, based on the analysis of 28 papers, relates to the proposition that LA improve learning support and teaching, comprising retention, completion and progression, which has been categorised as evidence that LA improve teaching in universities. The weaknesses of the present research include a lack of geographical spread, gaps in knowledge (e.g., in terms of informal learning and a lack of negative evidence), little evaluation of commercially available tools, and little attention to ethics. Some other studies (Ihantola et al., 2016; Sin & Muthu, 2015) examined both LA and EDM research in HE, confirming several of the findings presented above.
Whereas the above-discussed research was conducted with an explicit focus on HE, the studies discussed below target the educational context broadly, including reviews of studies conducted in the HE context. One of the earlier reviews (Ferguson, 2012) investigates the technological, educational and political factors driving the development of LA in education. This review elaborates on the discussion about the relationships between LA, EDM, and academic analytics. Dawson, Gašević, Siemens, and Joksimovic (2014) performed a citation network analysis to identify the emergence of trends and disciplinary hierarchies that influence the field's development. The results show that the most commonly cited papers are conceptual and review-based, which implies a need for scholars to define a space for LA. This is also confirmed by Ferguson and Clow (2017). Papamitsiou and Economides (2014) conducted a systematic literature review of empirical LA and EDM research. Most of the studies were found to be explorative or experimental and conducted within virtual learning environments (VLEs) or LMSs. Drachsler and Kalz (2016) present a reflective summary of the ongoing research on LA and MOOCs and provide a conceptual framework to position current research on MOOCs and the LA innovation cycle. One of the recent reviews (Jivet et al., 2018) investigated the extent to which theories and models from the learning sciences have been integrated into the development of learning dashboards aimed at learners. The results show that: i) dashboard evaluations rarely consider concepts from the learning sciences, ii) there is a low number of validated instruments used to assess either the learners' skills or the tools, and iii) the major focus is on evaluating a dashboard's acceptance, usefulness and ease of use as perceived by learners, rather than on whether the dashboard brings any benefit to learners. Finally, Peña-Ayala (2018) conducted a detailed review of LA with the aim of providing an idea of the LA toil, its research lines, and trends, to inspire the development of novel approaches for improving teaching and learning practices. This study shows that a small number of research studies proposed to acknowledge the progress of the field, provides representative concepts and determines the role of the diverse stakeholders. Furthermore, it pinpoints the need for a well-grounded theoretical frame to guide the LA endeavour.
Even though there have been attempts to analyse various aspects of LA research both in HE and in education in general, none of these studies have systematically structured and summarised the publications related to LA in HE on a larger scale with a specific focus on the examination of research methods, approaches and the evidence of whether LA improve learning practice in HE. We aim to fill this gap by presenting an extensive coverage of research that concerns the use of LA in HE and that targets not only researchers but also practitioners.
1.2. Research questions
The main research question is:
What is the current scientific knowledge about the application of learning analytics in higher education?
This overall research question is operationalised by the following sub-questions:
• What research approaches and methods are used?
• What is the evidence of the learning analytics research?
2. Method
We followed Webster and Watson's (2002) method for literature reviews, covering publications from 2012 to 2017 and also including the LAK conference proceedings from 2018. We chose these years since LA is a young field and its adoption, and the emergence of peer-reviewed journal articles and conference proceedings, have increased since 2012.
2.1. Literature search strategy
We initially searched for relevant publications through PRIMO (Peer-Reviewed Instructional Materials Online Database), which includes numerous databases, such as Web of Science and Scopus.
To ensure reliability we followed Webster and Watson's (2002) guidelines, which suggest starting with contributions published in leading journals when identifying relevant literature. Consequently, we manually searched for LA papers in four key high-ranked journals, namely Computers in Human Behavior, the Internet and Higher Education, Computers & Education, and the British Journal of Educational Technology. To further complement the study's data set we also manually checked special issues on LA, published by the Journal of Asynchronous Learning Networks (2012, vol. 16, issue 3), the Journal of Educational Technology and Society (2012, vol. 15, issue 3), the American Behavioural Scientist Journal (2013, vol. 57, issue 10), the Online Learning Journal (2016, vol. 20, issue 2), and Technology, Knowledge and Learning (2017, vol. 22, issue 3). Articles published in the field-specific Journal of Learning Analytics were also included. In order to offer a comprehensive picture of LA research, we also included the proceedings of the International Learning Analytics and Knowledge (LAK) conferences for the years 2012–2018. This conference was chosen as it serves as a forum for LA research and provides a solid ground for analysing emerging LA research.

In the databases we searched for "learning analytics" and "higher education" in the articles' titles, keywords and/or abstracts. To ensure reliability and validity we carefully examined the title, abstract and keywords of the articles. The initial search resulted in 1434 papers, but after applying our selection criteria, listed below, the final data set comprised 252 research papers.
The selection of papers was based on the following criteria:

1. We included empirical and theoretical research studies that focus on LA in HE.
2. We excluded LA review articles that explicitly aimed to cover LA as a research field in a systematic way.
3. We excluded papers with a focus on LA and MOOCs, as the context is so different (e.g., time pressure, social context, teacher-student relations) from campus environments. Also, there are already fairly recent summaries of this ongoing research (see e.g., Drachsler & Kalz, 2016; Leitner et al., 2017).
4. Studies with a single emphasis on academic analytics or EDM were omitted. However, we have included papers where the fields of LA, academic analytics and EDM explicitly overlap, represented either in the articles' titles, abstracts or keywords.
5. For the conference proceedings, we only included papers published as part of the main conference. Workshop papers and posters were excluded.
6. Only peer-reviewed papers published in English during the period January 2012–March 2018 were included.
2.2. Data analysis
All studies were analysed in order to assess the research in terms of the studies' research approaches, methods of data collection and analysis, and the evidence of whether LA improve learning practice in HE. The papers were coded independently by all the authors. Five percent of all reviewed articles were coded by the authors together to verify the coding. When discrepancies in the coding were found, we discussed the differences and re-coded the papers until we agreed on the mapping.
2.2.1. Research approaches
The analysis of research approaches followed a categorisation developed by Grönlund and Andersson (2006) and later used in other studies (e.g., Viberg & Grönlund, 2013). The categories for the research approaches are presented in Table 1.
2.2.2. Methods of data collection
The analysis of methods of data collection similarly followed the categorisation presented by Grönlund and Andersson (2006). When examining methods of data collection, we categorised them according to single versus mixed methods of collection, as papers could belong to several categories. For example, a paper could be both interpretative and use a survey, or be both a product description (i.e., a new LA tool) and an experiment. Table 2 presents the categories we used for the methods of data collection.
2.2.3. Methods of data analysis
When examining methods of data analysis, we first categorised them according to single versus mixed methods of analysis (i.e., whether qualitative and quantitative methods were used in the same study). Secondly, as the majority of LA research applies computational methods, we focused our analysis on the examination of these methods and followed the categorisation suggested by Merceron (2015) (see Table 3).
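To make the most frequent of these categories concrete, the following minimal sketch illustrates the "prediction" category in Table 3: a classification model that predicts whether a student passes a course from simple LMS activity counts. It is our own illustration, not drawn from any reviewed study; the library (scikit-learn), the feature names and the synthetic data are assumptions made for the example.

```python
# Illustrative sketch of Merceron's "prediction" category: classifying
# pass/fail from hypothetical LMS activity counts with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)
n_students = 500

# Hypothetical per-student features: logins, forum posts, quiz attempts.
X = np.column_stack([
    rng.poisson(30, n_students),   # logins per term
    rng.poisson(5, n_students),    # forum posts
    rng.poisson(8, n_students),    # quiz attempts
])

# Synthetic outcome: higher activity loosely raises the chance of passing.
logits = 0.05 * X[:, 0] + 0.15 * X[:, 1] + 0.10 * X[:, 2] - 2.5
y = rng.random(n_students) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same synthetic data could equally be fed to clustering or relationship mining routines; the point of the sketch is only to show the kind of computation each category in Table 3 refers to.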
2.2.4. Research evidence in the field of LA
To examine the evidence of whether LA improve learning practice in HE, we adopted the four validated propositions by Ferguson and Clow (2017), mentioned in section 1.1, to structure the evidence in the field of LA. When determining whether papers fulfilled Ferguson and Clow's (2017) four propositions, we used three classes: "yes", "no", and "potentially", where the last should be interpreted as not being able to identify evidence supporting the proposition, although the authors argued in the discussion that their results could lead to improvements in the future.
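As a schematic illustration only, and not the authors' actual coding workflow, the following sketch shows how per-paper codes of this kind could be tallied into the percentages reported in the Results; the paper identifiers and codes are hypothetical.

```python
# Hypothetical coding sheet: one record per reviewed paper, coded "yes",
# "no" or "potentially" for a given proposition, then tallied into shares.
from collections import Counter

codings = [
    {"paper": "P001", "year": 2015, "improves_outcomes": "potentially"},
    {"paper": "P002", "year": 2017, "improves_outcomes": "yes"},
    {"paper": "P003", "year": 2013, "improves_outcomes": "no"},
    # ... one entry for each of the 252 papers
]

counts = Counter(record["improves_outcomes"] for record in codings)
total = len(codings)
for label in ("yes", "potentially", "no"):
    print(f"{label:>12}: {100 * counts[label] / total:.0f}%")
```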
3. Results
In this literature review, 252 papers are included (Fig. 1): 136 papers (54%) are conference papers and 116 (46%) are journal publications.
3.1. Research approaches
Our analysis shows (Fig. 2) that most papers (57%) in our sample undertake a descriptive research approach. For example, Lonn, Krumm, Waddington, and Teasley (2012) illustrate an early warning system (EWS) for a mentoring program that supports students. Santos, Verbert, Govaerts, and Duval (2013) describe an evaluation study of the use of a dashboard and the extent to which it addresses the students' needs. The descriptive approach is frequently associated with studies that present or evaluate a specific LA tool (e.g., Dimopoulos, Petropoulou, & Retalis, 2013), with experimental studies (e.g., Ali, Hatala, Gašević, & Jovanovic, 2012), as well as with interpretative studies (e.g., Nguyen, Rienties, Toetenel, Ferguson, & Whitelock, 2017; Waddington, Nam, Lonn, & Teasley, 2016).
Another frequent approach is theory use research (26%). In our review, we use a broad definition of theory: it encompasses "theoretical frameworks, models, theories (as a body of knowledge in a broader sense) and theoretical concepts used" (Viberg, 2015, p. 43). We found 62 different theories used. There are no dominating theories; rather there is a plethora of theories used to explain different aspects of LA.
Table 1
Research approaches. Categories adopted from Grönlund and Andersson (2006).
Descriptive: Describes a phenomenon in its appearance without any use of theory.
Philosophical: Reflects upon a phenomenon without data and without any use of theory.
Theoretical: Reflects on a phenomenon based on some theory but without empirical data.
Theory use: Applies a theory/theories or models as a framework for the conducted study.
Theory generating: Analyses data in a systematic manner with the purpose of building a theory.
Theory testing: Tests a theory using data in a systematic manner.
Such theories include, for example, human behaviour (Gilmore, 2014), learning and knowledge outcomes (Kovanović et al., 2015), and technology acceptance and use (Nistor et al., 2014). While the theoretical development of the field is still in its infancy, a few field-specific theories have been developed and applied. Agudo-Peregrina, Iglesias-Pradas, Conde-González, and Hernández-García (2014), for example, use the conceptual framework of analytics in education developed by Van Barneveld, Arnold, and Campbell (2012). Tempelaar, Rienties, and Giesbers (2015) apply Buckingham Shum and Crick's (2012) framework of dispositional LA to investigate the predictive power of learning dispositions, outcomes of formative assessments and other system-generated data in modelling students' performance.
Twenty-five papers (11%) show evidence of theory generation (Fig. 2). These papers analyse data with the purpose of taking steps forward in theory building. de Freitas, Gibson, Du Plessis, Halloran, Williams, Ambrose et al. (2015) propose a foundational LA model of HE, focusing on the dynamic interaction of stakeholders with data supported by visual analytics. West, Heath, and Huijser (2016) present the Let's Talk Learning Analytics Framework of factors relevant to the institutional implementation of LA. Wise, Zhao, and Hausknecht (2013) explicate a pedagogical model for analytics interventions based on the principles of integration, diversity, agency, reflection, parity, and dialogue. Lang, Macfadyen, Slade, Prinsloo, and Sclater (2018) presented the "LA Code of Ethics v1.0" and sought input and feedback on this draft code from individual practitioners across the LA spectrum.
Four percent of papers are philosophical (Fig. 2), reflecting on LA without any data or theory use. Daniel (2015), for instance, identifies current challenges facing HE institutions and explores the potential of Big Data in addressing these challenges. The potential of LA to support a shift in focus from the assessment of individual student performance in isolation to the assessment of their performance as a team is discussed by Williams (2017).
Two percent of studies are theoretical (see Picciano, 2012; Prinsloo & Slade, 2016; Arnold & Sclater, 2017; Lodge, Alhadad, Lewis, & Gašević, 2017; Prinsloo & Slade, 2017). Only one study attempts to test a theory (Ali, Asadi, Gašević, Jovanovic, & Hatala, 2013).
3.2. Methods of data collection
In the papers analysed, methods of data collection such as interpretative study, experiment, literature study, survey, product description, argument and ethnography are used (see Fig. 3; the total percentage is higher than 100% since many papers use more than one method). The majority (72%) of the papers use a single method of data collection and only 28% of the papers employ mixed data collection methods (see Fig. 4).
Irrespective of whether the studies use a single or a mixed method of data collection, the most common method is interpretative: 68% of the studies use interpretative data collection methods, such as interviews or focus groups (e.g., McCoy & Shih, 2016; Tsai, Moreno-Marcos, Tammets, Kollom, & Gašević, 2018). We have broadened the definition of interpretative studies presented by Grönlund and Andersson (2006) by including studies that used data from large data sets (for example, from an LMS) for the purpose of interpreting the studied phenomenon (e.g., Rienties & Toetenel, 2016). The second most commonly applied method is experiment (18%), the third is product description (15%), followed by surveys (11%). The studies that used mixed methods for data collection mainly concerned a specific LA tool that was introduced and further evaluated through some interpretative method, such as a case study. Thus, those papers were classified both under the 'product description' category and also as 'interpretative' studies. Additionally, some authors presented a tool and evaluated it through an experiment (e.g., Ali et al., 2012). Surveys and interpretative methods of data collection are also among the commonly used combinations for data collection (e.g., Rodríguez-Triana, Prieto, Martínez-Monés, Asensio-Pérez, & Dimitriadis, 2018; Santos et al., 2013). In all, 72% of the papers combining different methods of data collection mixed interpretative studies with some other method.
3.3. Methods of data analysis
A mixed methods approach, i.e., studies in which both qualitative and quantitative methods of analysis were used, was employed in 19% of all papers.
Table 2
Methods of data collection. Categories adopted from Grönlund and Andersson (2006).
Argument: Logical argument, not based in any particular theory or relating explicitly or by clear implication to any theory.
Ethnography: Any attempt to understand actions by systematic observation and interpretation.
Experiment: Field and quasi-experiments included. This category applies to systematic/structured tests, even though in field settings many environmental variables clearly are not controlled, and tests may be rather explorative.
Interpretative: Any kind of data collection performed more strictly than a "case story" but not necessarily with a strictly explained or described method for interpretation. Case studies belong here, as do more limited studies where qualitative and quantitative data are used, and studies that use data from large data sets (e.g., LMS) for the purpose of interpreting the studied phenomenon.
Literature study: Only documents used. Not necessarily a strict method or even explicitly labelled as a literature study.
Product description: An IT product, method or similar, described by the manufacturer or someone else.
Survey: Includes qualitative overviews of several documents and cases as well.
Unclear: Even the widely defined categories above fail to capture the method.
Table 3
Computational methods for data analysis. Categories adopted from Merceron (2015).
Prediction: A major task tackled by prediction methods is to predict the performance of students. The most common methods include regression and classification.
Clustering: Clustering techniques are used to group objects so that similar objects are in the same cluster and dissimilar objects are in different clusters.
Relationship mining: Includes methods such as association rule mining, correlation mining, sequential pattern mining and causal data mining.
Distillation of data for human judgement: Includes statistics and visualizations that help humans make sense of their findings and analyses.
Discovery with models: Encompasses approaches in which a model obtained in a previous study is included in the data to discover more patterns.
The application of mixed methods for data analysis has increased to 40% in 2017, compared to 18% in 2016, 7% in 2015 and 13% in 2014. In the 2018 LAK conference proceedings, 36% of the studies apply both qualitative and quantitative methods.

Following Merceron's categorisation (Table 3), our results show that predictive methods (including regression and classification) were the most frequent (32%), followed by relationship mining (24%) and distillation of data for human judgement (24%), including statistics and visualisations that help people make sense of their findings. Finally, 37 studies (15%) applied discovery-with-models approaches and 28 papers (11%) used clustering techniques. As shown in Fig. 5, predictive methods of data analysis were among the most common methods in 2012 (25%) and 2013 (29%) and leading in 2014 (47%) and 2015 (56%), but the use of such methods decreased considerably in 2016 (20%) and 2017 (15%). The higher number (48%) for 2018 does not give the whole picture, as the sample for this year is limited. Moreover, there is an increase in relationship mining methods for the year 2017 (51%) compared to 2016 (14%), 2015 (24%) and 2014 (8%). Otherwise, the relative popularity of the methods is stable over the years.
3.4. What is the evidence of the LA research?
In this section, the findings of the LA research in terms of its evidence for learning and teaching in HE are presented. Overall, the results of our analysis show that there is little evidence in a stronger sense, i.e., evidence where the reviewed papers' results show actual improvements in, for example, students' learning outcomes or in learning support and teaching, about LA in higher educational settings (Fig. 6). This evidence is represented by the "yes" category in Fig. 6. The "potentially" category includes studies that were categorised as suggesting, in many papers explicitly, that there is the potential to improve, e.g., learning outcomes and/or to support teaching. The presentation of the results below focuses on the evidence of LA in the stronger sense. The findings are presented according to the four propositions (Ferguson & Clow, 2017) mentioned in section 1.1:
1. LA improve learning outcomes,
2. LA improve learning support and teaching,
3. LA are taken up and used widely, including deployment at scale,
4. LA are used in an ethical way.
The proposition with most evidence (35%) is that LA improve learning support and teaching in higher education (Fig. 6). There is little evidence in terms of improving students' learning outcomes: only 9% (23 of the 252 reviewed studies) present evidence in this respect. Moreover, there is even less evidence for the third proposition: in only 6% of the papers are LA taken up and used widely. This suggests that LA research has so far been rather uncertain with regard to this proposition. Finally, our results reveal that only 18% of the research studies even mention "ethics" or "privacy" (Fig. 6). This is a rather small number considering that LA research, at least its empirical strand, should seriously approach the relevant ethics.
Fig. 1. Number of papers included in the literature review.
Fig. 2. Research approaches (%).
Our results also demonstrate that the overall potential of LA is so far higher than the actual evidence (Fig. 6) for the first two propositions. The potential of LA to improve learning outcomes is significantly higher than its current evidence, and the potential of LA to improve learning support and teaching is even higher, compared to what has actually been exhibited (Fig. 6). For the last two propositions we examined the research papers only with a focus on their evidence.
Furthermore, the results show that the evidence in its stronger sense for LA across the years 2012–2018 has been rather equally distributed (Fig. 7). The evidence for the proposition that LA improve learning support and teaching, for example, has been dominant across all the years: 2012 (33%), 2013 (30%), 2014 (33%), 2015 (39%), 2016 (41%), 2017 (34%), 2018 (20%). The numbers for the last year are not final yet, as our findings for this period are only based on the analysis of the LAK conference papers published in March 2018. More detailed results for each proposition are presented in the following sections.
Overall, we found only one study (Kwong, Wong, & Yue, 2017) that falls under all four propositions. A small number of papers support propositions 1 and 2: for example, Nguyen, Huptych, and Rienties (2018), who focus on the intersection of learning design and academic performance; Worsley (2018), who focused on student engagement and the identification of efficacious learning practices; and Tempelaar, Rienties, and Nguyen (2017), who examined learning strategies in conjunction with the use of worked examples. There are even fewer studies that support three propositions (e.g., Broos, Verbert, Langie, Van Soom, & De Laet, 2018; Hart, Daucourt, & Ganley, 2017; Millecamp, Gutiérrez, Charleer, Verbert, & De Laet, 2018).
3.4.1. Do LA improve learning outcomes?
One of the larger expectations of LA research and practice relates to the proposition that it will improve students' learning outcomes. However, this proposition has been confirmed by only a few studies (9%) (Fig. 6).
Fig. 3. Methods of data collection.
Fig. 4. Single vs. mixed methods of data collection, in percentage per year.
In our sample, we found only one study from 2012 (Arnold & Pistilli, 2012) that strongly supports this proposition. The papers published from 2013 onwards include some more positive (and a few negative) evidence: 2013 (4%), 2014 (10%), 2015 (7%), 2016 (12%), 2017 (19%), 2018 (8%). The distribution over the years seems to be rather equal, with a slight increase for the years 2016 and 2017 (Fig. 8). Only a few negative results are reported for this proposition. More studies (16%) explicitly emphasize the potential for LA to improve learning outcomes, but do not present any concrete improvements of learning outcomes (see e.g., Casquero, Ovelar, Romo, Benito, & Alberdi, 2013; Tabuenca, Kalz, Drachsler, & Specht, 2015; Tempelaar et al., 2017; Wise, Perera, Hsiao, Speer, & Marbouti, 2012).

Fig. 5. Computational methods for data analysis (%).
Fig. 6. Evidence for learning analytics in higher education (%).
Fig. 7. Learning analytics evidence across the years 2012–2018 (%).
The studies whose results provide some evidence of improvements in learning outcomes focus mainly on three areas: i) knowledge acquisition, including improved assessment marks and better grades, ii) skill development, and iii) cognitive gains.
3.4.1.1. Knowledge acquisition. Improvements in students' academic performance were identified by several researchers. Tempelaar, Cuypers, van de Vrie, Heck, and van der Kooij (2013), for example, explored the role of test-directed learning by investigating the intensity of use of two test-directed platforms and academic performance, which was measured through an exam containing a mathematics and a statistics part, and three quizzes. The results demonstrated that students benefited from the opportunity of test-directed learning. Others (Whitelock, Twiner, Richardson, Field, & Pulman, 2015) examined the use of a natural language analytics tool (OpenEssayist) to provide feedback to students when preparing an essay for summative assessment, and showed that the students gained significantly higher overall grades than the students who did not have access to the tool. Moreover, Guarcello et al. (2017), by applying the Coarsened Exact Matching analytical approach to determine the impact of Supplementary Instruction (SI) participation on course performance (in particular, passing the course), showed that higher exam performance by SI-attending students was attributable in part to the SI intervention. The intervention was designed to help students prepare for exams through active learning strategies and peer-facilitated study. In another example, Mangaroska, Sharma, Giannakos, Trætteberg, and Dillenbourg (2018) focused on understanding students' debugging behaviour using learner-centric analytics. The study's results showed that students who processed the information presented in the tool, which mirrored students' behaviour in programming tasks (focusing on debugging habits), and acted upon it improved their performance and achieved a higher level of success than those who failed to do so.
3.4.1.2. Skill development. Advances in students' time management skills, presentation skills, reflection and collaborative problem-solving skills were exhibited. Improvements in students' time management skills were identified by Tabuenca et al. (2015), who investigated the effects of tracking and monitoring time devoted to learning with a mobile tool on self-regulated learning. Their findings showed positive effects and also revealed that notifications pushed at random times of the day do not produce significant improvements in time management. In a recent study, Ochoa et al. (2018) focused on the development of students' communication skills and presented a scalable system to provide automatic feedback to entry-level students to develop basic oral presentation skills. The system's evaluation indicated that the system's feedback corresponded closely to human feedback, and that students perceived the feedback as useful for developing their oral presentation skills.

Furthermore, some LA studies have focused on problem-solving skills. Worsley (2018), for example, by applying a qualitatively-informed multimodal LA approach, presented some neutral evidence on the development of students' collaborative problem-solving skills. In particular, the findings indicate that a) bimanual coordination (i.e., using both hands) seems to correlate with learning gains, and b) participants were only able to realize the benefits of physical disengagement if they also completed a sufficient amount of physically engaged work, i.e., working with their hands. Kwong et al. (2017) focused on the development of students' reflection skills, among others, through a game-based approach. By deploying learning trails, the "Trails of Integrity and Ethics", which immersed students in collaborative problem-solving tasks centred on ethical dilemmas, students' interest in learning about issues of academic integrity and ethics in classrooms was stimulated.
3.4.1.3. Cognitive gains. Evidence in terms of students' cognitive gains has also been found. For example, Gašević, Mirriahi and Dawson (2014) examined students' use of a video annotation tool. Two different instructional approaches were employed (i.e., graded and non-graded self-reflection annotations within two courses in performing arts); the study showed that students in the course with graded self-reflection adopted more linguistic and psychological related processes in comparison to the course with non-graded self-reflections. Chiu and Fujita (2014) investigated how statistical discourse analysis can be used to overcome shortcomings when analysing knowledge processes. The results showed how attributes at the individual and message levels affected knowledge creation processes. Asynchronous messages created a micro-sequence context; opinions and asking about purpose preceded new information; anecdotes, opinions, different opinions, elaborating ideas, and asking about purpose or information preceded theorizing. These results show how informal thinking precedes formal thinking and how social metacognition affects knowledge creation. Finally, Sonnenberg and Bannert (2015), through the analysis of the effects of metacognitive prompts on learning processes and outcomes during computer-based learning, showed that prompting increased the number of metacognitive activities, especially monitoring, which, in turn, increased transfer performance.
Fig. 8. Proposition 1. LA improve learning outcomes (%).
3.4.2. Do LA improve learning support and teaching?
Of the four propositions, LA improve learning support and teaching, including retention, progress and completion, has the most evidence (35%; Fig. 6). The overall potential to improve learning support and teaching is even higher (62%), which suggests that we have to consider how to transfer this potential into practice. The distribution over the years 2012–2017 varies in general from 30% to 41% (Fig. 9).

Overall, our sample exhibits more evidence on supporting institutions and teachers than on supporting the learners themselves.

In terms of supporting institutions, the Learning Analytics Readiness Instrument, aimed at preparing educational institutions for a successful analytics implementation, was introduced by Arnold, Lonn, and Pistilli (2014). Furthermore, its beta version was used to survey 560 participants representing 21 institutions (Oster, Lonn, Pistilli, & Brown, 2016) to understand the processes institutions use when discerning their readiness to implement LA. One of the latest studies (Tsai, Moreno-Marcos, Tammets, Kollom, & Gašević, 2018) presents the SHEILA (Supporting Higher Education to Integrate LA) framework, which informs institutional strategies and policy responses to LA.

LA researchers have also analysed student retention. While 13% of papers published between the years 2012 and 2016 had retention as their primary research focus, there has been a decrease in this focus over the last two years (see e.g., Casey, 2017; Chen, Johri, & Rangwala, 2018). Overall, the findings show that the use of VLE data, assessment scores, institutional variables, differences in technology use, students' online engagement, and course design and usage can predict, to various degrees, student retention. For example, course design with a heavy reliance on content and cognition (assimilative activities) seemed to lead to lower completion and pass rates (Rienties, Toetenel, & Bryan, 2015). Most predictive studies on student retention present and evaluate early warning systems (EWSs). Junco and Clem (2015), for example, showed that digital textbook analytics are an effective EWS to identify students at risk of academic failure. Similarly, an EWS that supports students (Lonn et al., 2012) indicated whether mentors should take one of three actions: "encourage" students to keep doing well, "explore" students' progress in more detail, or "immediately engage" students to assess potential academic difficulties.
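As an illustration of how an EWS of this kind can translate a predicted risk level into one of these three mentor actions, the following sketch uses invented thresholds; it is a hypothetical example, not the actual decision rules of Lonn et al. (2012) or of any other reviewed system.

```python
# Hypothetical sketch: mapping a model's estimated risk of academic failure
# (0..1) to one of the three mentor actions described above. Thresholds are
# invented for illustration only.
def mentor_action(risk_score: float) -> str:
    if risk_score < 0.3:
        return "encourage"  # student is doing well; reinforce current habits
    if risk_score < 0.7:
        return "explore"    # look at the student's progress in more detail
    return "engage"         # contact the student about potential difficulties

for score in (0.1, 0.5, 0.9):
    print(score, "->", mentor_action(score))
```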
Learning design is another common research focus. In studies on learning design, most research focused explicitly on improving teacher support rather than on improving learner support. Some researchers linked the learning design of courses with LMS usage and learning performance. Rienties et al. (2015), for example, found that learning design strongly influences how students engage online, where the students' LMS activity was considered a proxy for student engagement. Another study examined how students used an adaptive learning system based upon their needs and interests by analysing their use patterns (Liu et al., 2017). Students who had more prior knowledge (i.e., a higher diagnostic test score) were found to make more attempts at testing themselves and had higher post-test scores. The results also indicated a lack of alignment between the assessments and the content students were directed to based upon their performance on the assessments. Several of the latest publications similarly focused on various learning design aspects: e.g., the mismatch between learning design assumptions and students' actual behaviour (Nguyen et al., 2018), the visualisation of LA based on a learning design driven approach (Echeverria, Martinez-Maldonado, Granda, Chiluiza, Conati, & Buckingham Shum, 2018), and the use of worked examples (Tempelaar et al., 2017).
To improve teaching and learning support, LA research has also focused on, and found some evidence for, understanding how students learn, using learner data to study behavioural practices such as learning styles, emotions, gestures and electro-dermal activation, speech, and online learner behaviour types. For example, multimodal learning analytics (MMLA) was applied to identify students' various behavioural practices in two different experimental conditions (principle-based vs. exam-based) in order to understand how learning interventions work (Worsley & Blikstein, 2015). The analyses revealed different behaviours that were related to principle-based reasoning, success and learning. Ezen-Can, Grafsgaard, Lester, and Boyer (2015) examined multimodal features related to posture and gesture in order to classify students' dialogue acts within tutorial dialogue for the purpose of improving natural language interaction. They found that these unsupervised models of students' dialogue act classification are significantly improved with the addition of automatically extracted posture and gesture information.
Finally, some evidence in terms of supporting the development of learner models was also revealed. A new tool for visualising the student learning model during a gameplay session was found to be effective (Minovic, Milovanovic, Sosevic, & Gonzales, 2015). The results of a statistical analysis indicated that the tool helped students in identifying and solving learning problems. Furthermore, the participants expressed that the tool gave good insight into the individual student learning process as well as a good foundation for tracking groups' progress. An alternative approach for dynamic student behaviour modelling, grounded in the analysis of time-based student-generated trace data, with the purpose of classifying students according to their time-spent behaviour, was presented and tested by Papamitsiou, Karapistoli, and Economides (2016). The results indicate that a time-spent driven description of the students' behaviour could have added value in dynamically reshaping the respective models.
Fig. 9. Proposition 2. LA improve learning support and teaching (%).
When considering the design of LA implementations, a Pedagogical Model for LA Intervention, primarily aiming to support teachers (Wise et al., 2013), and the Align Design Framework (Wise, Vytasek, Hausknecht, & Zhao, 2016) were presented. The latter was validated as a tool for learning design, supporting students' use of analytics as part of the learning process.
3.4.3. Have LA been taken up and used widely?
This proposition focuses on the level of usage of LA and is concerned with institutional and policy perspectives. Judging whether the proposed methods, systems and theories would scale to entire institutions or to other institutions is not an easy task. We deemed the scalability condition fulfilled only if there was some deployment of the tool coupled with student or teacher involvement. According to this judgement, 94% of the papers do not scale (Fig. 6), and the distribution over the years does not indicate an increase (Fig. 10).
Rienties et al. (2015) presented one of the studies that was deployed at a large scale. They showed that learning design activities seem to have an impact on learning performance, in particular when modules rely on assimilative activities. Furthermore, Rienties and Toetenel (2016) exhibited that the development of relevant communication tasks that align with the course learning goals seems likely to enhance academic retention. The learning designs of four language education modules and the online engagement of 2111 learners were contrasted using weekly learning design data by Nguyen, Rienties, and Toetenel (2017). The findings from the study revealed that learning designs were able to explain up to 60% of the variability in student online activities. Herodotou et al. (2017) raised the need for devising appropriate intervention strategies to support students at risk. Kwong et al. (2017) reported on the LA of the initial stages of a large-scale, government-funded project that inducts students in Hong Kong into consideration of academic integrity and ethics. A total of 658 undergraduate students from all disciplines at four institutions and 46 undergraduate student Hall Tutors participated in the study; the results indicated that situated learning using AR with mobile devices has been effective for students to learn abstract concepts of academic integrity and ethics. Moreover, Millecamp et al. (2018) introduced a learning dashboard, used in ten programs at one university, that supports the dialogue between a student and a study advisor. Another step up in scalability is taken by Broos et al. (2018), whose LA dashboard was run over several institutions, with consideration of, e.g., data ownership. The dashboard was also used to provide feedback to students participating in a region-wide positioning test before entering a university program. In summary, there are very few studies that have been employed at scale over the years.
3.4.4. Have LA been used in an ethical way?
This proposition is about the 'should we' questions, rather than the 'can we' ones addressed by the other propositions. Overall, only 18% of the articles mention and/or consider ethics and privacy, to varying degrees, in relation to the conducted research. There is an increase for the year 2017 (36%; Fig. 11), and there might be an increase for 2018: 16% of the LAK conference publications present evidence in this regard, compared to the previous years' LAK studies, where very few studies considered ethics.

Fig. 10. Proposition 3. LA are taken up and used widely (%).
Fig. 11. Proposition 4. LA are used in an ethical way (%).
Among the empirical studies where issues of ethics have been a concern (e.g., data privacy and security, informed consent), only a few papers approach them in a systematic way. Such examples include a study by Ferguson, Wei, He, and Buckingham Shum (2013), who considered the ethics of visual literacy, assessment for learning, and participatory design and deployment in their research. Another example is the development of a LA dashboard by Broos et al. (2018), where the authors not only took data ownership, data privacy and portability issues into consideration, but these issues also shaped the technical design of the dashboard.
There are also a few papers that explicitly aimed to emphasize the issues of ethics and privacy. Slade and Prinsloo (2013), for instance, acknowledged ethical considerations in LA research and proposed six principles to guide higher educational institutions in addressing these issues. In another paper, Prinsloo and Slade (2016) explored students' vulnerability in relation to fostering the potential of LA. Rubel and Jones (2016) argued that LA pose moral and policy issues for students' privacy; there are crucial questions that need to be addressed to ensure that LA are commensurate with students' privacy and autonomy. Kwong et al. (2017) enlightened students on ethics by immersing them, using augmented reality, in collaborative problem-solving tasks centred on ethical dilemmas, in real locations where such dilemmas might arise.
4. Discussion and conclusion
This study presented an extensive coverage of research on the use of LA in HE and examined it with a focus on the research approaches and methods used, as well as the evidence of LA research for learning in HE.

Our analysis shows that the LA field is still an evolving area of practice and research in which descriptive studies and interpretative data collection methods prevail, in line with Papamitsiou and Economides (2014). However, the fact that 26% of the reviewed publications were categorised as 'theory use' studies, and 11% as 'theory generating', indicates that the field is maturing. We also found that only 19% of the reviewed studies used both qualitative and quantitative methods of data analysis, with an increase in 2017. This is a small number if we consider that good-quality research results that would benefit learners and teachers in HE for purposes of "understanding and optimizing learning and the environments in which it occurs" (Long & Siemens, 2011, p. 34) call for a much more frequent application of well-designed mixed-method approaches. A more frequent application of qualitative methods, in combination with quantitative ones, will help us to understand complex learning environments. This is similarly acknowledged by Ferguson and Clow (2017), who examined the evidence of LA in the subjects of psychology and education. The increase in mixed methods for the year 2017 suggests that there might be a relevant development in this regard. Yet, to be able to claim that there is a stable trend, we need to investigate this further.
Predictive methods have been among the dominating methods for several years. However, since 2016 the use of these methods has decreased considerably. This decrease, together with an increase in relationship mining methods and the rather stable use of methods for the distillation of data for human judgement, suggests that LA research in HE is shifting from the prediction of, e.g., retention and grades, towards a deeper understanding of students' learning experiences.
Even though the above-mentioned shift is a positive sign, this study's results show that so far there is little evidence (9%) that the research findings demonstrate improvements in learning outcomes, including knowledge acquisition, skill development and cognitive gains, as well as in learning support and teaching. Of the four propositions, LA improve learning support and teaching has the most evidence, which confirms Ferguson and Clow's (2017) findings, based on a much smaller reviewed sample.
Although the identified potential for improving learning support and teaching, as well as for improving learning outcomes, is high, we cannot currently see much transfer of this potential into higher educational practice over the years, as would be expected. This raises the question of how we can facilitate this transfer process to benefit learners. In order to understand how LA's potential can be beneficial for HE and to provide guidelines for both researchers and practitioners, we need to examine this potential further, with a specific focus on opportunities, barriers and challenges.
Moreover, we need to rigorously consider what research questions we want to answer in order to improve learning practice in HE. Few have actually asked the question: what kind of data, which we may or may not have today, would be valuable to analyse? In other words, we should carefully consider what data we need and what we would like to do with the data. In this, the sciences of learning in combination with pedagogical knowledge might help.
Considering the limited number of studies that we deemed to be scalable (6%), it seems that the research has mainly used available data to see what is possible to conclude from a LA perspective. To be able to scale up LA research we have to consider several challenges, for example, data sharing, understanding complex learning environments, methods that would work efficiently, and the generalisability of the presented results, i.e., to what extent can we rely on them in other educational settings? Will, for example, the identified predictors be the same in another course, another subject, another institution, another country? One way to progress on this path is to open up the used data sets for other researchers.
It is worrying that more than 80% of the papers do not mention ethics at all. Moreover, only a few studies approach ethical issues (e.g., data privacy and security, informed consent) in a systematic way. However, we should not jump to the conclusion that most studies are done in an unethical way, but we call for more explicit reflection on ethics in the coming years. The increase in studies that reflect on ethical issues for the year 2017 (36%) might indicate that there is already a positive move in this direction.
This study has several limitations, which might rather be seen as potential for future research. Firstly, we did not specifically focus on the analysis of the existing potential of the LA evidence. As the potential found is high, we need to examine it further in order to provide guidelines for how it could be transferred into actual evidence. Secondly, our examination of the methods of data analysis focused mainly on computational methods. Further investigation of the other employed methods will contribute to a more holistic picture. Finally, we did not examine the actual use of theories. Thus, a better understanding of what theories have been used, and in what ways, in relation to the development of the LA research area and its impact on higher educational practice is a subject for future research. One exception is Jivet et al. (2018), who examined which theories from the learning sciences have been integrated into the development of learning dashboards.
This study contributes to the literature by providing a systematic and detailed literature review of LA in HE. The results suggest that even though the LA field is maturing, the overall potential of LA is so far higher than the actual evidence, which raises the question of how we can facilitate the transfer of this potential into learning and teaching practice. Moreover, further validation of the developed LA tools and methods, as well as more empirical cross-disciplinary and cross-institutional studies conducted in an ethical way, are needed to understand more deeply how LA can contribute to high quality and efficiency in HE. Our findings imply that the field is moving forward and that we need to build on the existing literature instead of reinventing the wheel. We argue that this study's results are a good starting point for researchers interested in the field of LA. For practitioners, the review acts as an overview of what LA can contribute to academic institutions.
References
Agudo-Peregrina, Á. F., Iglesias-Pradas, S., Conde-González, M. Á., & Hernández-García, Á. (2014). Can we predict success from log data in VLEs? Classification of interactions for learning analytics and their relation with performance in VLE-supported F2F and online learning. Computers in Human Behavior, 31, 542–550.
Ali, L., Asadi, M., Gašević, D., Jovanovic, J., & Hatala, M. (2013). Factors influencing beliefs for adoption of a learning analytic tool: An empirical study. Computers & Education, 62, 130–148.
Ali, L., Hatala, M., Gašević, D., & Jovanovic, J. (2012). A qualitative evaluation of evolution of a learning analytics tool. Computers & Education, 58, 470–489.
Arnold, K. E., Lonn, S., & Pistilli, M. D. (2014). An exercise in institutional reflection: The learning analytics readiness instrument (LARI). Proceedings of the fourth international conference on learning analytics and knowledge (pp. 163–167). ACM.
Arnold, K., & Pistilli, M. (2012). Course signals at Purdue: Using learning analytics to increase student success. Proceedings of the second international learning analytics and knowledge conference (pp. 267–270). ACM.
Arnold, K. E., & Sclater, N. (2017). Student perceptions of their privacy in learning analytics applications. Proceedings of the seventh international learning analytics & knowledge conference (pp. 66–69). ACM.
Avella, J., Kebritchi, M., Nunn, S., & Kanai, T. (2016). Learning analytics methods, benefits, and challenges in higher education: A systematic literature review. Online Learning, 20(2), 13–29.
Broadbent, J., & Poon, W. (2015). Self-regulated learning strategies and academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education, 27, 1–15.
Broos, T., Verbert, K., Langie, G., Van Soom, C., & De Laet, T. (2018). Multi-institutional positioning test feedback dashboard for aspiring students: Lessons learnt from a case study in Flanders. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 51–55). ACM.
Buckingham Shum, S., & Crick, R. (2012). Learning dispositions and transferable competencies: Pedagogy, modelling and learning analytics. Proceedings of the 2nd international conference on learning analytics and knowledge (pp. 92–101). ACM.
Casey, K. (2017). Using keystroke analytics to improve pass-fail classifiers. Journal of Learning Analytics, 4(2), 189–211.
Casquero, O., Ovelar, R., Romo, J., Benito, M., & Alberdi, M. (2013). Students' personal networks in virtual and personal learning environments: A case study in higher education using learning analytics approach. Interactive Learning Environments, 24(1), 49–67.
Chen, Y., Johri, A., & Rangwala, H. (2018). Running out of STEM: A comparative study across STEM majors of college students at-risk of dropping out early. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 270–279). ACM.
Chiu, M., & Fujita, N. (2014). Statistical discourse analysis of online discussions: Informal cognition, social metacognition, and knowledge creation. Proceedings of the fourth international conference on learning analytics and knowledge (pp. 217–225). ACM.
Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), 683–695.
Daniel, B. (2015). Big data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920.
Dawson, S., Gašević, D., Siemens, G., & Joksimovic, S. (2014). Current state and future trends: A citation network analysis of the learning analytics field. Proceedings of the fifth international conference on learning analytics and knowledge (pp. 231–240). ACM.
Dimopoulos, I., Petropoulou, O., & Retalis, S. (2013). Assessing students' performance using the learning analytics enriched rubrics. Proceedings of the third international conference on learning analytics and knowledge conference (pp. 195–198). ACM.
Drachsler, H., & Kalz, M. (2016). The MOOC and learning analytics innovative cycle (MOLAC): A reflective summary of ongoing research and its challenges. Journal of Computer Assisted Learning, 32, 281–290.
Echeverria, V., Martinez-Maldonado, R., Granda, R., Chiluiza, K., Conati, C., & Shum, S. B. (2018). Driving data storytelling from learning design. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 131–140). ACM.
Ezen-Can, A., Grafsgaard, J., Lester, J., & Boyer, K. (2015). Classifying student dialogue acts with multimodal learning analytics. Proceedings of the fifth international conference on learning analytics and knowledge (pp. 280–289). ACM.
Ferguson, R. (2012). Learning analytics: Drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5/6), 304–317.
Ferguson, R., Brasher, A., Cooper, A., Hillaire, G., Mittelmeier, J., Rienties, B., ... Vuorikari, R. (2016). Research evidence of the use of learning analytics: Implications for education policy. In R. Vuorikari, & J. Castano-Munoz (Eds.), A European framework for action on learning analytics (pp. 1–152). Luxembourg: Joint Research Centre Science for Policy Report.
Ferguson, R., & Clow, D. (2017). Where is the evidence? A call to action for learning analytics. Proceedings of the seventh international learning analytics & knowledge conference (pp. 56–65). ACM.
Ferguson, R., Wei, Z., He, Y., & Buckingham Shum, S. (2013). An evaluation of learning analytics to identify exploratory dialogue in online discussions. Proceedings of the third international conference on learning analytics and knowledge conference (pp. 85–93). ACM.
de Freitas, S., Gibson, D., Du Plessis, C., Halloran, P., Williams, E., Ambrose, M., et al. (2015). Foundations of dynamic learning analytics: Using university student data to increase retention. British Journal of Educational Technology, 46(6), 1175–1188.
Gašević, D., Mirriahi, N., & Dawson, S. (2014). Analytics of the effects of video use and instruction to support reflective learning. Proceedings of the fourth international conference on learning analytics and knowledge (pp. 123–132). ACM.
Gilmore, D. (2014). Goffman's front stage and backstage behaviours in online education. Journal of Learning Analytics, 1(3), 187–190.
Grönlund, Å., & Andersson, A. (2006). e-Gov research quality improvements since 2003: More rigor, but research (perhaps) redefined. Proceedings of 5th international conference, EGOV 2005, Krakow, Poland. Heidelberg, Germany: Springer.
Guarcello, M., Levine, R., Beemer, J., Frazee, J., Laumakis, M., & Schellenberg, A. (2017). Balancing student success: Assessing supplemental instruction through coarsened exact matching. Technology, Knowledge and Learning, 22(3), 335–352.
Hart, S., Daucourt, M., & Ganley, C. (2017). Individual differences related to college students' course performance in Calculus II. Journal of Learning Analytics, 4(2), 129–153.
Herodotou, C., Rienties, B., Boroowa, A., Zdrahal, Z., Hlosta, M., & Naydenova, G. (2017). Implementing predictive learning analytics on a large scale: The teacher's perspective. Proceedings of the seventh international learning analytics & knowledge conference (pp. 267–271). ACM.
Ihantola, P., Vihavainen, A., Ahadi, A., Butler, M., Börstler, J., Edwards, S., ... Toll, D. (2016). Educational data mining and learning analytics in programming: Literature review and case studies. Proceedings of the 2015 ITiCSE working group reports (pp. 41–63). ACM.
Jivet, I., Scheffel, M., Specht, M., & Drachsler, H. (2018). License to evaluate: Preparing learning analytics dashboards for educational practice. Proceedings of the 8th international conference on learning analytics & knowledge (pp. 31–40). ACM.
Junco, R., & Clem, C. (2015). Predicting course outcomes with digital textbook usage data. The Internet and Higher Education, 27, 54–63.
Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R., & Hatala, M. (2015). Penetrating the black box of time-on-task estimation. Proceedings of the fifth international conference on learning analytics and knowledge (pp. 184–193). ACM.
Kwong, T., Wong, E., & Yue, K. (2017). Bringing abstract academic integrity and ethical concepts into real-life situations. Technology, Knowledge and Learning, 22(3), 353–368.
Lang, C., Macfadyen, L. P., Slade, S., Prinsloo, P., & Sclater, N. (2018). The complexities of developing a personal code of ethics for learning analytics practitioners: Implications for institutions and the field. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 436–440). ACM.
Lawson, C., Beer, C., Dolene, R., Moore, T., & Fleming, J. (2016). Identification of “at risk” students using learning analytics: The ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research & Development, 64(5), 957–968.
Lee, K. (2017). Rethinking the accessibility of online higher education: A historical overview. The Internet and Higher Education, 33, 15–23.
Leitner, P., Khalil, M., & Ebner, M. (2017). Learning analytics in higher education – a literature review. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends: A View of the Current State of the Art to Enhance e-Learning (pp. 1–23). Cham: Springer.
Liu, M., Kang, J., Zou, W., Lee, H., Pan, Z., & Corliss, S. (2017). Using data to understand how to better design adaptive learning. Technology, Knowledge and Learning, 22(3), 271–298.
Lodge, J. M., Alhadad, S. S., Lewis, M. J., & Gašević, D. (2017). Inferring learning from big data: The importance of a transdisciplinary and multidimensional approach. Technology, Knowledge and Learning, 22(3), 385–400.
Long, P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and education. Educause Review, 46(5), 31–40.
Lonn, S., Krumm, A., Waddington, R., & Teasley, S. (2012). Bridging the gap from knowledge to action: Putting analytics in the hands of academic advisors. Proceedings of the second international conference on learning analytics and knowledge conference (pp. 184–187). ACM.
Mah, D.-K. (2016). Learning analytics and digital badges: Potential impact on student retention in higher education. Technology, Knowledge and Learning, 21(3), 285–305.
Mangaroska, K., Sharma, K., Giannakos, M., Trætteberg, H., & Dillenbourg, P. (2018, March). Gaze insights into debugging behavior using learner-centred analysis. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 350–359). ACM.
McCoy, C., & Shih, P. (2016). Teachers as producers of data analytics: A case study of a teacher-focused educational data science program. Journal of Learning Analytics, 3(3), 193–214.
Merceron, A. (2015). Educational data mining/learning analytics: Methods, tasks and current trends. Proceedings of DeLFI Workshops 2015, co-located with the 13th e-Learning Conference of the German Computer Society, München, Germany, September 1. Retrieved from https://pdfs.semanticscholar.org/1d3a/de2c0a5a60be82030616b99ebd8426238098.pdf (2018-05-17).
Millecamp, M., Gutiérrez, F., Charleer, S., Verbert, K., & De Laet, T. (2018, March). A qualitative evaluation of a learning dashboard to support advisor-student dialogues. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 56–60). ACM.
Minovic, M., Milavanovic, M., Sosevic, U., & Gonzales, M. (2015). Visualisation of student learning model in serious games. Computers in Human Behavior, 47, 98–107.
Nguyen, Q., Huptych, M., & Rienties, B. (2018). Linking students' timing of engagement to learning design and academic performance. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 141–150). ACM.
Nguyen, Q., Rienties, B., & Toetenel, L. (2017). Unravelling the dynamics of instructional practice: A longitudinal study on learning design and VLE activities. Proceedings of the seventh international learning analytics & knowledge conference (pp. 168–177). ACM.
Nistor, N., Baltes, B., Dascălu, M., Mihăilă, D., Smeaton, G., & Trăuşan-Matu, Ş. (2014). Participation in virtual academic communities of practice under the influence of technology acceptance and community factors. A learning analytics application. Computers in Human Behavior, 34, 339–344.
Nistor, N., Derntl, M., & Klamma, R. (2015). Learning analytics: Trends and issues of the empirical research of the years 2011–2014. In G. Conole et al. (Eds.), Design for teaching and learning in a networked world: EC-TEL 2015, LNCS 9307 (pp. 453–459). Springer.
Ochoa, X., Domínguez, F., Guamán, B., Maya, R., Falcones, G., & Castells, J. (2018). The RAP system: Automatic feedback of oral presentation skills using multimodal analysis and low-cost sensors. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 360–364). ACM.
Oster, M., Lonn, S., Pistilli, M. D., & Brown, M. G. (2016, April). The learning analytics readiness instrument. Proceedings of the sixth international conference on learning analytics & knowledge (pp. 173–182). ACM.
Papamitsiou, Z., & Economides, A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Educational Technology & Society, 17(4), 49–64.
Papamitsiou, Z., Karapistoli, E., & Economides, A. (2016). Applying classification techniques on temporal trace data for shaping student behavior models. Proceedings of the sixth international conference on learning analytics & knowledge (pp. 299–303). ACM.
Peña-Ayala, A. (2018). Learning analytics: A glance of evolution, status, and trends according to a proposed taxonomy. WIREs Data Mining and Knowledge Discovery, 8. https://doi.org/10.1002/widm.1243.
Picciano, A. (2012). The evolution of big data and learning analytics in American higher education. Journal of Asynchronous Learning Networks, 16(3), 9–20.
Prinsloo, P., & Slade, S. (2016). Student vulnerability, agency, and learning analytics: An exploration. Journal of Learning Analytics, 3(1), 159–182.
Prinsloo, P., & Slade, S. (2017). An elephant in the learning analytics room: The obligation to act. Proceedings of the seventh international learning analytics & knowledge conference (pp. 46–55). ACM.
Rienties, B., & Toetenel, L. (2016). The impact of 151 learning designs on student satisfaction and performance: Social learning (analytics) matters. Proceedings of the sixth international conference on learning analytics & knowledge (pp. 339–343). ACM.
Rienties, B., Toetenel, L., & Bryan, A. (2015). “Scaling up” learning design: Impact of learning design activities on LMS behavior and performance. Proceedings of the fifth international conference on learning analytics and knowledge (pp. 315–319). ACM.
Rodríguez-Triana, M. J., Prieto, L. P., Martínez-Monés, A., Asensio-Pérez, J. I., & Dimitriadis, Y. (2018). The teacher in the loop: Customizing multimodal learning analytics for blended learning. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 417–426). ACM.
Rubel, A., & Jones, K. (2016). Student privacy in learning analytics: An information ethics perspective. The Information Society, 32(2), 143–159.
Santos, J., Verbert, K., Govaerts, S., & Duval, E. (2013). Addressing learner issues with StepUp!: An evaluation. Proceedings of the third international conference on learning analytics and knowledge conference (pp. 14–22). ACM.
Schumacher, C., & Ifenthaler, D. (2018). Features students really expect from learning analytics. Computers in Human Behavior, 78, 397–407.
Siemens, G., & Baker, R. (2012). Learning analytics and educational data mining: Towards communication and collaboration. Proceedings of the second international conference on learning analytics & knowledge (pp. 252–254). ACM.
Sin, K., & Muthu, L. (2015). Application of big data in educational data mining and learning analytics – a literature review. ICTAC Journal of Soft Computing, 5(4), 1035–1049.
Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529.
Sonnenberg, C., & Bannert, M. (2015). Discovering the effects of metacognitive prompts on the sequential structure of SRL-processes using process mining techniques. Journal of Learning Analytics, 2(1), 72–100.
Strang, K. (2016). Beyond engagement analytics: Which online mixed-data factors predict student learning outcomes? Education and Information Technologies, 20(4), 1–21.
Tabuenca, B., Kalz, M., Drachsler, H., & Specht, M. (2015). Time will tell: The role of mobile learning analytics in self-regulated learning. Computers & Education, 89, 53–74.
Tempelaar, D., Cuypers, H., van de Vrie, E., Heck, A., & van der Kooij, H. (2013). Formative assessment and learning analytics. Proceedings of the third international conference on learning analytics and knowledge conference (pp. 205–209). ACM.
Tempelaar, D., Rienties, B., & Giesbers, B. (2015). In search for the most informative data for feedback generation: Learning analytics in a data-rich context. Computers in Human Behavior, 47, 157–167.
Tempelaar, D., Rienties, B., & Nguyen, Q. (2017). Adding disposition to create pedagogy-based learning analytics. Journal of Higher Education Development, 12(1), 15–35.
Tsai, Y. S., Moreno-Marcos, P. M., Tammets, K., Kollom, K., & Gašević, D. (2018). SHEILA policy framework: Informing institutional strategies and policy processes of learning analytics. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 320–329). ACM.
Van Barneveld, A., Arnold, K. E., & Campbell, J. P. (2012). Analytics in higher education: Establishing a common language. Educause Learning Initiative, 1, 1–11.
Viberg, O. (2015). Design and use of mobile technology in distance language education: Matching learning practices with technologies-in-practice. Doctoral dissertation. Sweden: Örebro University.
Viberg, O., & Grönlund, Å. (2013a). Systematising the field of mobile assisted language learning. International Journal of Mobile and Blended Learning, 5(4), 72–90.
Waddington, T., Nam, S., Lonn, S., & Teasley, S. (2016). Improving early warning systems with categorized course resource usage. Journal of Learning Analytics, 3(3), 263–290.
Webster, J., & Watson, R. (2002). Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly, 26(2), xiii–xxiii.
West, D., Heath, D., & Huijser, H. (2016). Let's talk learning analytics: A framework for implementation in relation to student retention. Online Learning, 20(2), 30–50.
Whitelock, D., Twiner, A., Richardson, J., Field, D., & Pulman, S. (2015). OpenEssayist: A supply and demand learning analytics tool for drafting academic essays. Proceedings of the fifth international conference on learning analytics and knowledge (pp. 208–212). ACM.
Williams, P. (2017). Assessing collaborative learning: Big data, analytics and university futures. Assessment & Evaluation in Higher Education, 42(6), 978–989.
Wise, A., Perera, N., Hsiao, Y.-T., Speer, J., & Marbouti, F. (2012). Microanalytic case studies of individual participation patterns in an asynchronous online discussion in an undergraduate blended course. The Internet and Higher Education, 15, 108–117.
Wise, A., Vytasek, J., Hausknecht, S., & Zhao, Y. (2016). Developing learning analytics design knowledge in the “middle space”: The student tuning model and align design framework for learning analytics work. Online Learning, 20(2), 155–182.
Wise, A., Zhao, Y., & Hausknecht, S. (2013). Learning analytics for online discussion: A pedagogical model for intervention with embedded and extracted analytics. Proceedings of the third international conference on learning analytics and knowledge conference (pp. 48–56). ACM.
Worsley, M. (2018). (Dis)engagement matters: Identifying efficacious learning practices with multimodal learning analytics. Proceedings of the 8th international conference on learning analytics and knowledge (pp. 365–369). ACM.
Worsley, M., & Blikstein, P. (2015). Leveraging multimodal learning analytics to differentiate student learning strategies. Proceedings of the fifth international conference on learning analytics and knowledge (pp. 360–367). ACM.
Xing, W., Guo, R., Petakovic, E., & Goggins, S. (2015). Participation-based student final performance prediction model through interpretable Genetic Programming: Integrating learning analytics, educational data mining and theory. Computers in Human Behavior, 47, 168–181.