
Let’s not forget: Learning analytics are about learning

Authors: Dragan Gašević, University of Edinburgh; Shane Dawson, University of South Australia; George Siemens, University of Texas at Arlington

Abstract

The analysis of data collected from the interaction of users with educational and information technology has attracted much attention as a promising approach for advancing our understanding of the learning process. This promise motivated the emergence of the new research field, learning analytics, and its closely related discipline, educational data mining. This paper first introduces the field of learning analytics and outlines the lessons learned from well-known case studies in the research literature. The paper then identifies the critical topics that require immediate research attention for learning analytics to make a sustainable impact on the research and practice of learning and teaching. The paper concludes by discussing a growing set of issues that if unaddressed, could impede the future maturation of the field. The paper stresses that learning analytics are about learning. As such, the computational aspects of learning analytics must be well integrated within the existing educational research.
Keywords: educational research, learning analytics, learning sciences, learning technology, self-regulated learning
Introduction
Over the past several years, we have witnessed growing student demand for participation in higher education. While previous reports demonstrated the need for higher education and contrasted this with an argument surrounding the finite capacity to support such growth (OECD, 2013), it was not
until 2012 and the hype linked to the massive
open online courses (MOOC) that there was
intensive public debate about the future role of
the university and scalable education models
(Kovanović, Joksimović, Gašević, Siemens, &
Hatala, 2014). In essence, the rapid advances
in technology and its subsequent broad scale
adoption provided the necessary infrastructure,
and the necessary tipping point for public
acceptance of online learning, to enable the
delivery of education at such a large scale. While
there is much promise amidst the proliferation
of MOOCs and online and blended modes
of learning more generally, these models also
promulgate a new suite of education challenges.
For instance, the noted poor attrition rates and the sheer volume of students enrolled in a MOOC necessitate a more independent study
model that is in stark contrast to the more
accepted socio-constructivist approaches to
learning (Bayne & Ross, 2014).
Despite the challenges of online delivery, the adoption of educational technologies has afforded a new opportunity to gain insight into student learning. As with most IT systems, the student’s interactions with their online learning activities are captured and stored. These digital traces (log data) can then be ‘mined’ and analysed to identify patterns of learning behaviour that can provide insights into education practice. This process has been described as learning analytics. The study of learning analytics has been defined as the
“measurement, collection, analysis and reporting
of data about learners and their contexts, for
purposes of understanding and optimizing
learning and the environments in which it occurs”
(Siemens & Gašević, 2012). Learning analytics is
a bricolage field drawing on research, methods, and techniques from numerous disciplines such as the learning sciences, data mining, information visualization, and psychology. This paper reviews the learning analytics research to outline a few of the major topics that the learning analytics field needs to address in order to deliver its oft-cited promise of transforming education practice. In so doing, we argue that learning analytics needs to build on and better connect with the existing body of research knowledge about learning and teaching. Specifically, in this paper, we suggest
how learning analytics might be better integrated
into existing educational research and note the
implications for learning analytics research and
practice.
Course Signals: Lessons Learned
Predicting student learning success and
providing proactive feedback have been two of
the most frequently adopted tasks associated with
learning analytics (Dawson, Gašević, Siemens,
& Joksimovic, 2014). In this context, the best known
application of analytics in education is Course
Signals developed at Purdue University (Arnold
& Pistilli, 2012). Using the trace data collected
by the Blackboard learning management system
(LMS) and data from the institutional Student
Information System (SIS), Course Signals uses
a data-mining algorithm to identify students at
risk of academic failure in a course. Specifically, Course Signals identifies three main outcome types – a student at high risk, moderate risk, or not at risk of failing the course. These three outcomes are symbolically represented as a traffic light where each light represents one of the three levels of risk (red, orange, and green respectively). The traffic lights serve to provide an early warning “signal” to both instructor and student. This signal is designed to prompt a form of intervention that is aimed at improving the progression of the student identified as at risk of failure. Early studies of Course Signals showed high levels of predictive accuracy and significant benefits in the retention of students who took at least one course adopting the early alert software versus those who took a course without the Course Signals tool (Arnold & Pistilli, 2012).
While Course Signals is a well-known example,
there have been many other predictive algorithms
aimed towards the identification of students at
risk of failure or retention (Jayaprakash, Moody,
Lauría, Regan, & Baron, 2014). Any predictive
model is generally accompanied by a dashboard
to aid sensemaking by visualizing the trace data
and prediction results (Ali, Hatala, Gašević, &
Jovanović, 2012).
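To make the early-alert idea concrete, the sketch below trains a simple classifier on synthetic trace and SIS data and maps predicted risk onto the three signal levels. The features, thresholds, and logistic regression model are illustrative assumptions only; they are not the proprietary Course Signals algorithm.

```python
# A minimal sketch of an early-alert risk prediction, assuming hypothetical
# features and synthetic data; not the actual Course Signals algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Hypothetical per-student features: LMS logins, forum posts, prior GPA (from the SIS).
X = np.column_stack([
    rng.poisson(30, n),        # LMS logins to date
    rng.poisson(5, n),         # discussion forum posts
    rng.normal(2.8, 0.6, n),   # prior GPA from the student information system
])
# Synthetic pass/fail labels loosely driven by the same features.
logits = 0.03 * X[:, 0] + 0.15 * X[:, 1] + 1.2 * (X[:, 2] - 2.8) - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)  # 1 = pass

model = LogisticRegression().fit(X, y)

def traffic_light(p_fail: float) -> str:
    """Map a predicted probability of failure onto the three signal levels."""
    if p_fail >= 0.6:
        return "red"      # high risk
    if p_fail >= 0.3:
        return "orange"   # moderate risk
    return "green"        # not at risk

p_fail = 1 - model.predict_proba(X)[:, 1]
print([traffic_light(p) for p in p_fail[:10]])
```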
Although establishing lead indicators of academic performance and retention is an essential step for learning analytics, there has been a dearth of empirical studies that have sought to evaluate the impact and transferability of this initial work across domains and contexts (Dawson et al., 2014).
The limited empirical research to date has revealed some significant issues that the field needs to consider and address in the future. The most significant is that learning analytics tools are generally not developed from theoretically established instructional strategies, especially those related to the provision of student feedback. For instance, Tanes, Arnold, King, and Remnet (2011) undertook a content analysis of the feedback messages sent by instructors to students after receiving the Course Signals alerts. The authors noted that instructive or process feedback types were rarely observed in the instructors’ messages to students. This finding is in marked contrast to the vast volume of research demonstrating that feedback is most effective when information is provided “at the process level” (for a review, see Hattie & Timperley, 2007). Rather than receiving messages with detailed instructive feedback on how to address identified deficiencies in their learning, students identified as at risk would exclusively receive multiple messages carrying low-level summative feedback. Consistent with educational research, no effect of summative feedback on learning success was identified.
While the simplicity of the traffic light metaphor of Course Signals was clear to the target users and a simple and effective way to
prompt action, the tool design did not have sufficient theoretically informed functionality to encourage adoption of effective instructional and intervention practices. This is not surprising, as Course Signals was initially designed as an academic analytics tool (Arnold & Pistilli, 2012). It is only more recently that the software has been promoted within the domain of learning analytics. However, as an academic analytics tool, Course Signals is well suited to its proposed intent and addresses the needs of the envisioned stakeholders (e.g., university administrators, government officials and funders); that is, access to data forecasts concerning various institutional trends for resource planning.
What we learn from this case study is that learning analytics resources should be well aligned with established research on effective instructional practice. In so doing we can move from static prediction of a single academic outcome to more sustainable and replicable insights into the learning process. This is consistent with observations from instructors who appreciated features of the LOCO-Analyst learning analytics tool that allowed for establishing links between the students’ activities (e.g., discussion messages posted) and “the domain topics the students were having difficulties with” (Ali et al., 2012, p. 485). That is, instructors expressed their preference for learning analytics features that offer insights into learning processes and identify student gaps in understanding over simple performance measures. With such insights, instructors can identify weak points in the learning activities performed by their students and the topics the students have struggled with, and provide instructive and process-related feedback on how to improve their learning.
Directions
As noted above, it is essential that future learning analytics developments and innovations draw on, and advance, educational research and practice. To do so, we posit that the field of learning analytics needs to ground data collection, measurement, analysis, reporting and interpretation processes within the existing research on learning. In this paper, we build on three axioms that Winne (2006) identified as commonly accepted foundations of research knowledge about learning in educational psychology: learners construct knowledge, learners are agents, and data includes randomness. We use these three axioms to interrogate the critical issues for the development of the learning analytics field.
The Winne and Hadwin (1998) model of self-regulated learning is based on the COPES model; that is, the model builds on the conditions, operations, products, evaluations, and standards that learners adopt in order to explain how they construct knowledge. In essence, learners construct knowledge by using (cognitive, digital, and physical) tools to perform operations on raw information in order to create products of learning. For example, a student can use online discussions (as a tool) to synthesize and integrate (as operations performed) knowledge gained from different sources of information in order to develop a critical perspective (as a product of learning) on a problem under study. In this process, learners use standards to evaluate the products of their learning and the effectiveness of the operations performed and tools used as a part of metacognitive monitoring and control. A group of individual standards makes up a learning goal that learners set when they are working on a specific learning task. For example, a goal can be composed of the level of cohesiveness of the argument created in an online discussion message when developing a critical perspective, the number, types and trustworthiness of information sources to consult when building the argument, or the time the learner decides to spend on the collection of the sources.
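One possible way to make the COPES vocabulary operational for analytics is to represent each logged learning event with its conditions, operations, product, and standards rather than as a bare count. The field names and example values in the sketch below are our own illustrative assumptions, not part of Winne and Hadwin’s model.

```python
# An illustrative data structure for framing trace events in COPES terms.
# The schema and example values are assumptions, intended only to show how
# raw log data could be enriched beyond simple event counts.
from dataclasses import dataclass
from typing import List

@dataclass
class CopesEvent:
    conditions: dict        # e.g., {"tool": "online_discussion", "discussion_graded": False}
    operations: List[str]   # e.g., ["synthesize", "integrate"]
    product: str            # e.g., the text of the discussion post produced
    standards: List[str]    # e.g., ["argument cohesion", "source trustworthiness"]
    evaluation: str = ""    # judgement of the product against the standards

example = CopesEvent(
    conditions={"tool": "online_discussion", "discussion_graded": False},
    operations=["synthesize", "integrate"],
    product="Draft post arguing a critical perspective on the study problem.",
    standards=["argument cohesion", "number and trustworthiness of sources"],
    evaluation="cohesion below the learner's own standard; revise before posting",
)
print(example.operations, "->", example.standards)
```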
The notion that learners are agents implies they have “the capability to exercise choice in reference to preferences” (Winne, 2006, p. 8). The choices learners make are influenced by the (internal and external) conditions, which in turn can affect the standards learners use in their metacognitive monitoring and control. Examples of external conditions include course instructional designs, such as grading an online discussion and providing appropriate scaffolds to guide how students participate and use the learning tools. Examples of internal conditions are metacognitive awareness and skills (e.g., whether learners are aware that discussions can be an effective means to develop critical thinking, and if so, how skilled they are at doing so), the level of motivation to participate in online discussions, or prior knowledge about the topic discussed online.
Eects of Instruction Conditions
To date, learning analytics has focused on the investigation of the effects of operations performed by using proxy measures of learning derived from trace data – i.e., counts of log-in activity, access to discrete resources, and time spent online. However, far less attention has been dedicated to the other elements of COPES, such
as how and to what extent the conditions affect
the operations performed, the products, and the
standards used for metacognitive monitoring.
A lack of consideration of these elements raises significant concerns as to the validity of learning analytics results and interpretation. For example, Gašević, Dawson, Rogers and Gasevic (2014) demonstrate that the association of trace data about students’ activity in an LMS with academic performance is moderated by instructional conditions. The Gašević et al. (2014) analysis of the results of a regression model created by combining data from nine undergraduate courses in an Australian university showed that only three variables – the number of logins and the numbers of operations performed on discussion forums and on resources – were significant predictors of academic performance. The authors noted that these three variables explained approximately 21% of the variability in academic performance.
However, in a practical sense, these predictors
cannot be reasonably translated into actionable
recommendations to facilitate student learning.
Furthermore, there is a limited degree of feedback
that can be provided to instructors without
a detailed understanding of the pedagogical
intent associated with their tool selection and
associated learning activities. us, critical
insights of such learning analytics could hardly
be used to inform the course learning designs
as previously suggested by Lockyer, Heathcote,
& Dawson (2013). When regression analysis
models were created for each course separately,
the variables indicative of the LMS tools relevant
for the learning design intention of each course
emerged. For example, in the communication
course with an emphasis on writing, the use of
Turnitin for plagiarism detection and assignment
descriptions were signicant predictors of the
students’ grades. is course-specic regression
model explained more than 70% of the variability
of the nal grades of the communication students.
In contrast, in the graphics course, no signicant
predictor was identied within the available
trace data for predicting students’ grades. is
nding reects the course design and technology
choices of the instructor. In this case, the course
did not utilize the institutional LMS. Instead
the course learning activities were performed in
public social media soware. As such, any counts
of log-ins, tools and resources within the LMS
course site, were eectively redundant for this
particular course.
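The pooled-versus-per-course contrast can be illustrated with a brief sketch on synthetic data. The feature names (logins, forum actions, Turnitin use) and effect sizes are hypothetical stand-ins, not the data or model from Gašević et al. (2014); the point is simply that a model pooled across differently designed courses can mask course-specific predictors.

```python
# A sketch of pooled vs. per-course regression on synthetic trace counts,
# assuming hypothetical features; not the original study's data or model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200

def simulate_course(weights):
    X = rng.poisson([20, 8, 3], size=(n, 3)).astype(float)  # logins, forum actions, Turnitin use
    grade = X @ weights + rng.normal(0, 5, n) + 50
    return X, grade

# Course A (writing-intensive): Turnitin use drives grades.
# Course B: learning happens outside the LMS, so traces are uninformative.
X_a, y_a = simulate_course(np.array([0.2, 0.5, 4.0]))
X_b, y_b = simulate_course(np.array([0.0, 0.0, 0.0]))

for label, X, y in [("course A", X_a, y_a), ("course B", X_b, y_b),
                    ("pooled", np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))]:
    model = LinearRegression().fit(X, y)
    print(label, "R^2 =", round(model.score(X, y), 2), "coef =", model.coef_.round(2))
```

Under these assumptions the per-course models recover the design-specific predictors, while the pooled model explains far less variance – mirroring the pattern reported above.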
The reasons for the diversity observed in the findings of the Gašević et al. (2014) study may be attributed to the differing instructional models and technology choices across the courses. For instance, educational research has shown that instructors have a significant influence on a learner’s choice of tools within an LMS (McGill & Klobas, 2009) and the learning approach they follow (Trigwell, Prosser, & Waterhouse, 1999). The difference in instructional conditions is likely to shed light on the inconsistent results of the trace data-based predictors of academic success that are often reported in the literature (Jayaprakash et al., 2014; Macfadyen & Dawson, 2012). This supports the earlier proposition stressing the importance of framing future analytics studies within the existing education research.
Eects of Internal Conditions
Learners are active agents in their learning process. This simple statement has many significant implications. Learner agency implies that even when learners receive the same instructional conditions, they may choose to adopt alternate study methods. As such, we need to give greater emphasis to the importance of internal conditions for facilitating student learning. Existing studies about student choice and use of learning tools have revealed significant differences in both the quantity of tool use and how specific tools are adopted to complete a learning task. Building on the work of Winne (2006), Lust, Elen, and Clarebout (2013) posit that the use of learning tools can be considered a self-regulated learning process whereby the choices a student makes about the tool are based on (internal) conditions and individual learning goals. In their study with undergraduate students of education in a blended course, Lust et al. (2013) identified four disparate groups of students based on their use of learning tools. The groups were classified as: i) no-users – low-level adoption of any tool in the LMS suggested to them in the course design (e.g., quizzes, web lectures, and discussion forums); ii) intensive active learners – used all tools suggested by the course design and used those tools actively; iii) selective users – only used a selected number of the tools offered to them; iv) intensive superficial users – used all the tools and spent more time than other groups, predominantly on cognitively passive activities such as reading discussion posts in lieu of contributing to the forum. A further multivariate analysis performed by Lust et al. (2013) revealed that the differences between user groups – where groups were formed as a consequence of learners exercising their agency – were as large as the effects of instructional conditions (e.g., graded vs. non-graded tool use) on tool use reported in other studies (Gašević, Mirriahi, & Dawson, 2014).
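Tool-use profiles of this kind can be surfaced from trace data with standard clustering techniques. The sketch below runs k-means on synthetic tool-use counts purely for illustration; the column names, group sizes, and choice of k-means are our assumptions and do not reproduce the analysis of Lust et al. (2013).

```python
# A sketch of clustering students by tool-use profiles on synthetic counts;
# an illustrative assumption, not the analysis performed by Lust et al. (2013).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical columns: quiz attempts, web lectures viewed, forum posts, forum reads.
profiles = np.vstack([
    rng.poisson([1, 1, 0, 2], size=(40, 4)),     # "no-users"
    rng.poisson([12, 10, 8, 15], size=(40, 4)),  # "intensive active"
    rng.poisson([10, 2, 1, 3], size=(40, 4)),    # "selective"
    rng.poisson([12, 10, 1, 40], size=(40, 4)),  # "intensive superficial" (mostly reading)
]).astype(float)

X = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# Inspect mean raw counts per cluster to interpret the emerging profiles.
for k in range(4):
    print("cluster", k, "mean use:", profiles[labels == k].mean(axis=0).round(1))
```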
Eects of Learning Products and Strategy
Learning products and the standards used for learning are essential factors that need to be captured to describe learning processes comprehensively. Although the frequency of activity and time on task are sound indicators of the extent to which learners use a tool, a high volume of these measures cannot be directly interpreted as high-quality learning. What is of importance is the specific learning strategies that are adopted by individual students. Learning strategy can be reflective of the metacognitive monitoring and control operations, as these main metacognitive operations are based on learning standards. For example, in a study of the effects of teaching on the acceptance of a video annotation software tool for self-reflection, Gašević, Mirriahi, Dawson, and Joksimovic (2014) identified that performing arts students created a high number of annotations in a course where the annotation tool, used for self-reflection on video recordings of an individual’s performance, was optional (i.e., not graded). The number of annotations created was as high as in a prior course where tool use was mandatory and contributed to the students’ final course grades.
Simply counting the number of operations performed within the video annotation study (Gašević, Mirriahi, Dawson, et al., 2014) did not provide an effective measure of the quality of learning products (i.e., the text of annotations) nor of the adopted learning strategy. However, where counts fail, the Coh-Metrix measures succeed (McNamara, Graesser, McCarthy, & Cai, 2014). The Coh-Metrix analysis showed a significant decline in the cohesiveness and comprehensiveness of the text of self-reflections (i.e., learning products) in the learners’ video annotations. Moreover, after representing learning strategies as transition graphs¹ of the activities learners performed and calculating the density of those graphs as a measure of metacognitive monitoring, as suggested by Hadwin, Nesbit, Jamieson-Noel, Code, and Winne (2007), a considerable decline in metacognitive monitoring was also observed. This is in part due to the private nature of the annotation when undertaken in the absence of a graded component. For instance, notes taken without any intention of sharing with others typically do not have the same readability as notes prepared for sharing with peers or instructors. However, the decrease is a cause for concern, as metacognitive monitoring is the “key SRL process” (Greene & Azevedo, 2009, p. 18) to promote understanding. This finding has much significance for learning analytics research. In essence, a continued focus on event activities ignores any examination of the quality of learning products and the strategy adopted.
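The transition-graph representation described in the end note can be sketched in a few lines. The density formula used here (distinct observed transitions over possible directed edges, self-loops excluded) is one common choice and an assumption on our part; it is not necessarily the exact metric applied by Hadwin et al. (2007), and the event names are hypothetical.

```python
# A sketch of building a transition (contingency) matrix over logged events
# and computing a simple graph density as a proxy for metacognitive monitoring.
from collections import Counter

events = ["play", "annotate", "play", "pause", "annotate", "play", "annotate"]

# Count directed transitions between consecutive events.
transitions = Counter(zip(events, events[1:]))

event_types = sorted(set(events))
n = len(event_types)

# Density: distinct observed transitions / possible directed edges (no self-loops).
observed_edges = sum(1 for (a, b) in transitions if a != b)
possible_edges = n * (n - 1)
density = observed_edges / possible_edges if possible_edges else 0.0

print(dict(transitions))
print("graph density:", round(density, 2))
```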
Summary and Future Considerations
The discussion offered in this paper reflects the impetus for building the field of learning analytics upon, and contributing to, the existing research on learning and education. Clearly, the counting of certain types of activities that learners performed with online learning tools can be correlated with academic performance. However, the true test for learning analytics is demonstrating a longer term impact on student learning and teaching practice. In this context, the field of learning analytics can benefit from past lessons in information seeking. Reflecting on information seeking as a developing field, Wilson (1999, p. 250) noted that “many things were counted, from the number of visits to libraries, to the number of personal subscriptions to journals and the number of items cited in papers. Very little of this counting revealed insights of value for the development of theory or, indeed, of practice.” Significant progress in research and practice only really commenced when information seeking was framed within “robust theoretical models of human behaviour” (Wilson, 1999, p. 250). The field of learning analytics must adopt a similar approach.
While it is often perceived that education is rife with data, very little of it relates to capturing the conditions for learning (internal and external). For example, external conditions such as instructional design, social context, previous learning history with the use of a particular tool, and revisions to the course content can radically change the results, the interpretation of findings, and the actionable value of learning analytics. Similarly, internal conditions such as achievement goal orientation, cognitive load, or epistemic beliefs are yet to be fully understood in relation to how they can be collected and measured with/from trace data. The work of Zhou and Winne (2012) could provide future research direction on how to integrate the collection of variables about internal conditions with the collection of trace data. The authors suggested that the use of a highlighting tool for reading text in an online learning tool could be framed within the achievement goal orientation framework. Essentially, each highlight can be associated with a different tag that is easy for learners to understand and use and that is indicative of a goal orientation – such as “interesting” for a mastery-approach goal orientation, and “important to get a good grade” for a performance-approach goal orientation. Similar instrumentation
and measurement approaches could be incorporated into existing learning tools, so that more theoretically founded trace data about internal conditions are collected, temporally proximal to the points in time when learning activities are performed. Not only can this type of instrumentation increase the theoretical foundation of measurement in learning analytics, but it also provides valuable contributions to educational research by helping to overcome the well-known limitations of self-reported measures (Zhou & Winne, 2012).
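As a brief illustration of such instrumentation, the sketch below logs a tagged highlight together with a timestamp and an inferred goal orientation, in the spirit of Zhou and Winne (2012). Only the tag labels come from the discussion above; the event schema, storage format, and function names are hypothetical.

```python
# A sketch of logging goal-orientation tags alongside trace data; the event
# schema and JSON-lines storage are assumptions, not an existing tool's API.
import json
import time

TAG_TO_ORIENTATION = {
    "interesting": "mastery-approach",
    "important to get a good grade": "performance-approach",
}

def log_highlight(student_id: str, passage: str, tag: str, log_file: str = "trace.jsonl"):
    """Append a tagged highlight event, time-stamped close to the learning action."""
    event = {
        "student_id": student_id,
        "timestamp": time.time(),
        "event": "highlight",
        "passage": passage,
        "tag": tag,
        "inferred_goal_orientation": TAG_TO_ORIENTATION.get(tag, "unknown"),
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(event) + "\n")

log_highlight("s001", "Feedback is most effective at the process level.", "interesting")
```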
The analysis of learning products and strategy has received limited attention in the existing learning analytics research, despite its demonstrated importance for educational research (Hadwin et al., 2007; McNamara et al., 2014). Although learning products can have different forms and thus require different measurement approaches, presently the primary emphasis in the learning analytics field has been on memory recall, through the use of either scores on online quizzes or crude proxies such as course grades, which do not accurately measure learning products but simply academic performance at a given point in time. However, many other important learning products are available in trace data already collected by learning tools. The best example is unstructured text – e.g., text created in online discussions, tags, or blogs. In order to analyze these textual products of learning, there is a need to scale up qualitative research methods. The use of text mining and natural language processing methods to conduct content and discourse analysis is a critically important research direction (McNamara et al., 2014).
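Coh-Metrix itself is a dedicated tool with a large battery of indices; as a minimal illustration of analysing textual learning products at scale, the sketch below computes two crude proxies – type-token ratio and adjacent-sentence word overlap – in plain Python. These are simplistic stand-ins of our own devising, not Coh-Metrix measures.

```python
# A minimal sketch of extracting crude lexical diversity and cohesion proxies
# from a textual learning product; not the indices computed by Coh-Metrix.
import re

def text_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    ttr = len(set(words)) / len(words) if words else 0.0

    # Mean proportion of shared words between adjacent sentences (cohesion proxy).
    overlaps = []
    for s1, s2 in zip(sentences, sentences[1:]):
        w1 = set(re.findall(r"[a-z']+", s1.lower()))
        w2 = set(re.findall(r"[a-z']+", s2.lower()))
        if w1 and w2:
            overlaps.append(len(w1 & w2) / min(len(w1), len(w2)))
    cohesion = sum(overlaps) / len(overlaps) if overlaps else 0.0
    return {"type_token_ratio": round(ttr, 2), "adjacent_overlap": round(cohesion, 2)}

post = ("I compared the two sources on feedback. The sources disagree about "
        "process feedback. Process feedback seems most useful for revising my draft.")
print(text_features(post))
```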
Learning strategy, as discussed in this paper, can be indicative of the dynamic processes activated while learning. The analysis of learning strategy and its associated processes requires the modeling and analysis of latent variables that are often not possible to detect with simple counts of certain learning operations. For such dynamic processes to be understood, the process nature of learning needs to be accounted for and learning modelled as a process, building on existing knowledge from areas such as graph theory and process mining (Reimann, Markauskaite, & Bannert, 2014).
Although much work has been done on visualizing learning analytics results – typically in the form of dashboards (Verbert, Duval, Klerkx, Govaerts, & Santos, 2013) – their design and use are far less understood. The design of dashboards can lead to the implementation of weak and perhaps detrimental instructional practices as a result of promoting ineffective feedback types and methods (Tanes et al., 2011). For example, a common approach is to offer visualizations that compare a student with the class average. Corrin and de Barba (2014) investigated the effects of such comparisons promoted by dashboards and observed that students with strong academic standing interpreted (i.e., misinterpreted) the comparisons as indicating that they were doing well in a class after seeing they were above the class average, even though they actually under-performed relative to both their previous academic performance and the goals they set before enrolling in the class. Likewise, the negative effect of such comparison dashboards on students with low levels of self-efficacy is a hypothesis commonly heard in discussions within the learning analytics community. In order to design effective learning analytics visualizations and dashboards, it is essential to consider their instructional, learning and sensemaking benefits. In doing so, building on existing educational research, foundations in distributed cognition and self-regulated learning appear to be very promising avenues for future research (Liu, Nersessian, & Stasko, 2008; Zhou & Winne, 2012).
Finally, special attention needs to be paid to the development of a learning analytics culture and the policies around it. Although it may seem promising to automate many measurements and predictions about learning and teaching, a sole focus on outcomes as the primary target of learning analytics, without consideration of learning and teaching processes, can have detrimental consequences. In such cases, as suggested by Goodhart’s law (Elton, 2004), certain measures – proxies of learning and constructs associated with learning – can cease to be good measures. In an analogy comparable to teaching to the test rather than teaching to improve understanding, learning analytics that do not promote effective learning and teaching are susceptible to the use of trivial measures, such as an increased number of log-ins to an LMS, as a way to evaluate learning progression. In order to avoid such undesirable practices, the involvement of the relevant stakeholders – e.g., learners, instructors, instructional designers, information technology support, and institutional administrators – is necessary at all stages of the development, implementation, and evaluation of learning analytics, and of the culture that the extensive use of data in education brings.
Address correspondence regarding this article to Dragan
Gašević; Digital Education; University of Edinburgh; Old
Moray House; Holyrood Rd; Edinburgh EH8 8AQ; United
Kingdom; email: dgasevic@acm.org; phone: +44 131 651
6138
End notes
¹ Transition graphs are constructed from a contingency matrix in which the rows and columns are all events logged by the video annotation tool. The rows denote the start nodes and the columns the end nodes of the transition edges. To create a transition edge from event A to event B, the number one is written in the matrix cell intersecting row A and column B. The number in that cell is incremented by one for any further occurrence of the edge from event A to event B.
References
Ali, L., Hatala, M., Gašević, D., & Jovanović, J. (2012).
A qualitative evaluation of evolution of a learning
analytics tool. Computers & Education, 58(1), 470–489.
doi:10.1016/j.compedu.2011.08.030
Arnold, K. E., & Pistilli, M. D. (2012). Course signals at
Purdue: using learning analytics to increase student
success. In Proceedings of the 2nd International Conference
on Learning Analytics and Knowledge (pp. 267–270). New
York, NY, USA: ACM. doi:10.1145/2330601.2330666
Bayne, S., & Ross, J. (2014). The pedagogy of the Massive Open Online Course: the UK view. The Higher Education Academy. Retrieved from https://www.heacademy.ac.uk/resources/detail/elt/the_pedagogy_of_the_MOOC_UK_view
Corrin, L., & de Barba, P. (2014). Exploring students’
interpretation of feedback delivered through learning
analytics dashboards. In Proceedings of the ascilite 2014
conference. Dunedin, NZ.
Dawson, S., Gašević, D., Siemens, G., & Joksimovic, S.
(2014). Current State and Future Trends: A Citation
Network Analysis of the Learning Analytics Field. In
Proceedings of the Fourth International Conference on
Learning Analytics And Knowledge (pp. 231–240). New
York, NY, USA: ACM. doi:10.1145/2567574.2567585
Elton, L. (2004). Goodhart’s Law and Performance Indicators
in Higher Education. Evaluation & Research in Education,
18(1-2), 120–128. doi:10.1080/09500790408668312
Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2014). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting learning success. Submitted to The Internet and Higher Education.
Gašević, D., Mirriahi, N., & Dawson, S. (2014). Analytics
of the Eects of Video Use and Instruction to Support
Reective Learning. In Proceedings of the Fourth
International Conference on Learning Analytics And
Knowledge (pp. 123–132). New York, NY, USA: ACM.
doi:10.1145/2567574.2567590
Gašević, D., Mirriahi, N., Dawson, S., & Joksimovic, S. (2014).
What is the role of teaching in adoption of a learning
tool? A natural experiment of video annotation tool use.
Submitted for Publication to Computers & Education.
Greene, J. A., & Azevedo, R. (2009). A macro-level analysis
of SRL processes and their relations to the acquisition
of a sophisticated mental model of a complex system.
Contemporary Educational Psychology, 34(1), 18–29.
doi:10.1016/j.cedpsych.2008.05.006
Hadwin, A. F., Nesbit, J. C., Jamieson-Noel, D., Code, J.,
& Winne, P. H. (2007). Examining trace data to explore
self-regulated learning. Metacognition and Learning, 2(2-
3), 107–124. doi:10.1007/s11409-007-9016-7
Hattie, J., & Timperley, H. (2007). The Power of Feedback.
Review of Educational Research, 77(1), 81–112.
doi:10.3102/003465430298487
Jayaprakash, S. M., Moody, E. W., Lauría, E. J. M., Regan,
J. R., & Baron, J. D. (2014). Early Alert of Academically
At-Risk Students: An Open Source Analytics Initiative.
Journal of Learning Analytics, 1(1), 6–47.
Kovanović, V., Joksimović, S., Gašević, D., Siemens, G.,
& Hatala, M. (2014). What public media reveals about
MOOCs? Submitted for Publication to British Journal of
Educational Technology.
Liu, Z., Nersessian, N. J., & Stasko, J. T. (2008). Distributed
cognition as a theoretical framework for information
visualization. IEEE Transactions on Visualization and
Computer Graphics, 14(6), 1173–1180.
Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing
Pedagogical Action: Aligning Learning Analytics With
Learning Design. American Behavioral Scientist, 57(10),
1439–1459. doi:10.1177/0002764213479367
Lust, G., Elen, J., & Clarebout, G. (2013). Students’
tool-use within a web enhanced course: Explanatory
mechanisms of students’ tool-use pattern. Computers
in Human Behavior, 29(5), 2013–2021. doi:10.1016/j.
chb.2013.03.014
Macfadyen, L. P., & Dawson, S. (2012). Numbers Are Not
Enough. Why e-Learning Analytics Failed to Inform an
Institutional Strategic Plan. Educational Technology &
Society, 15(3).
McGill, T. J., & Klobas, J. E. (2009). A task–technology
fit view of learning management system impact.
Computers & Education, 52(2), 496–508. doi:10.1016/j.
compedu.2008.10.002
McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z.
(2014). Automated Evaluation of Text and Discourse with
Coh-Metrix. Cambridge, UK: Cambridge University Press.
OECD. (2013). Education at a Glance 2013: OECD
Indicators. Retrieved from http://dx.doi.org/10.1787/eag-
2013-en
Reimann, P., Markauskaite, L., & Bannert, M. (2014).
e-Research and learning theory: What do sequence and
process mining methods contribute? British Journal of
Educational Technology, 45(3), 528–540. doi:10.1111/
bjet.12146
Siemens, G., & Gašević, D. (2012). Special Issue on Learning
and Knowledge Analytics. Educational Technology &
Society, 15(3), 1–163.
Tanes, Z., Arnold, K. E., King, A. S., & Remnet, M. A. (2011).
Using Signals for appropriate feedback: Perceptions and
practices. Computers & Education, 57(4), 2414–2422.
doi:10.1016/j.compedu.2011.05.016
Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations
between teachers’ approaches to teaching and students’
approaches to learning. Higher Education, 37(1), 57–70.
doi:10.1023/A:1003548313194
Verbert, K., Duval, E., Klerkx, J., Govaerts, S., & Santos, J.
L. (2013). Learning Analytics Dashboard Applications.
American Behavioral Scientist, 57(10), 1500–1509.
doi:10.1177/0002764213479363
Wilson, T. D. (1999). Models in information behaviour
research. Journal of Documentation, 55(3), 249–270.
doi:10.1108/EUM0000000007145
Winne, P. H. (2006). How Software Technologies Can
Improve Research on Learning and Bolster School
Reform. Educational Psychologist, 41(1), 5–17.
doi:10.1207/s15326985ep4101_3
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-
regulated learning. In D. J. Hacker, J. Dunlosky, & A.
C. Graesser (Eds.), Metacognition in educational theory
and practice (pp. 277–304). Mahwah, NJ, US: Lawrence
Erlbaum Associates Publishers.
Zhou, M., & Winne, P. H. (2012). Modeling academic
achievement by self-reported versus traced goal
orientation. Learning and Instruction, 22(6), 413–419.
doi:10.1016/j.learninstruc.2012.03.004
This paper discusses the fundamental question of how data-intensive e-research methods could contribute to the development of learning theories. Using methodological developments in research on self-regulated learning as an example, it argues that current applications of data-driven analytical techniques, such as educational data mining and its branch process mining, are deeply grounded in an event-focused, ontologically flat view of learning phenomena. These techniques provide descriptive accounts of the regularities of events, but have limited power to generate theoretical explanations. Building on the philosophical views of critical realism, the paper argues that educational e-research needs to adopt more nuanced ways for investigating and theorising learning phenomena that could provide an account of the mechanisms and contexts in which those mechanisms are realised. It proposes that future methodological extensions should include three main aspects: (1) stratified ontological frameworks, (2) multimodal data collection and (3) dynamic analytical methods.