Let's not forget: Learning analytics are about learning
By Dragan Gašević, University of Edinburgh; Shane Dawson, University of South Australia; George Siemens, University of Texas at Arlington
©Association for Educational Communications and Technology 2015
Abstract
The analysis of data collected from the interaction of users with educational and information technology has attracted much attention as a promising approach for advancing our understanding of the learning process. This promise motivated the emergence of the new research field, learning analytics, and its closely related discipline, educational data mining. This paper first introduces the field of learning analytics and outlines the lessons learned from well-known case studies in the research literature. The paper then identifies the critical topics that require immediate research attention for learning analytics to make a sustainable impact on the research and practice of learning and teaching. The paper concludes by discussing a growing set of issues that, if unaddressed, could impede the future maturation of the field. The paper stresses that learning analytics are about learning. As such, the computational aspects of learning analytics must be well integrated within the existing educational research.

Keywords: educational research, learning analytics, learning sciences, learning technology, self-regulated learning
Introduction
Over the past several years, we have witnessed steadily growing student demand for participation in higher education. While previous reports demonstrated the need for higher education and contrasted this with an argument surrounding the finite capacity to support such growth (OECD, 2013), it was not until 2012 and the hype linked to massive open online courses (MOOCs) that there was intensive public debate about the future role of the university and scalable education models (Kovanović, Joksimović, Gašević, Siemens, & Hatala, 2014). In essence, rapid advances in technology and their subsequent broad-scale adoption provided both the necessary infrastructure and the tipping point in public acceptance of online learning needed to deliver education at such a large scale. While there is much promise amidst the proliferation of MOOCs, and of online and blended modes of learning more generally, these models also promulgate a new suite of education challenges. For instance, the frequently noted high attrition rates and the sheer volume of students enrolled in a MOOC necessitate a more independent study model, one that stands in stark contrast to the more widely accepted socio-constructivist approaches to learning (Bayne & Ross, 2014).
Despite the challenges of online delivery, the adoption of educational technologies has afforded a new opportunity to gain insight into student learning. As with most IT systems, students' interactions with their online learning activities are captured and stored. These digital traces (log data) can then be 'mined' and analysed to identify patterns of learning behaviour that can provide insights into education practice. This process has been described as learning analytics. The study of learning analytics has been defined as the "measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs" (Siemens & Gašević, 2012). Learning analytics is
Let’s not forget:
Learning analytics are
about learning
By Dragan Gašević, University of Edinburgh, Shane Dawson, University of South
Australia, George Siemens, University of Texas at Arlington
©Association for Educational Communications and Technology 2015
Volume 59, Number 1 TechTrends • January/Februar y 2015 65
a bricolage field drawing on research, methods, and techniques from numerous disciplines such as the learning sciences, data mining, information visualization, and psychology. This paper reviews the learning analytics research to outline a few of the major topics that the learning analytics field needs to address in order to deliver its oft-cited promise of transforming education practice. In so doing, we argue that learning analytics needs to build on, and better connect with, the existing body of research knowledge about learning and teaching. Specifically, in this paper, we suggest how learning analytics might be better integrated into existing educational research and note the implications for learning analytics research and practice.
Course Signals: Lessons Learned
Predicting student learning success and
providing proactive feedback have been two of
the most frequently adopted tasks associated with
learning analytics (Dawson, Gašević, Siemens,
& Joksimovic, 2014). In this context, the best known
application of analytics in education is Course
Signals developed at Purdue University (Arnold
& Pistilli, 2012). Using the trace data collected
by the Blackboard learning management system
(LMS) and data from the institutional Student
Information System (SIS), Course Signals uses
a data-mining algorithm to identify students at
risk of academic failure in a course. Specifically, Course Signals identifies three main outcome types: a student at high risk, at moderate risk, or not at risk of failing the course. These three outcomes are symbolically represented as a traffic light, where each light represents one of the three levels of risk (red, orange, and green, respectively). The traffic lights serve to provide an early warning "signal" to both instructor and student. This signal is designed to prompt a form of intervention aimed at improving the progression of the student identified as at risk of failure. Early studies of Course Signals showed high levels of predictive accuracy and significant benefits in the retention of students who took at least one course adopting the early alert software versus those who took a course without the Course Signals tool (Arnold & Pistilli, 2012). While Course Signals is a well-known example, there have been many other predictive algorithms aimed at the identification of students at risk of failure or attrition (Jayaprakash, Moody,
Lauría, Regan, & Baron, 2014). Any predictive
model is generally accompanied by a dashboard
to aid sensemaking by visualizing the trace data
and prediction results (Ali, Hatala, Gašević, &
Jovanović, 2012).
Although establishing lead indicators of academic performance and retention is an essential step for learning analytics, there has been a dearth of empirical studies that have sought to evaluate the impact and transferability of this initial work across domains and contexts (Dawson et al., 2014).
The limited empirical research to date has revealed some significant issues that the field needs to consider and address in the future. The most significant is that learning analytics tools are generally not developed from theoretically established instructional strategies, especially those related to the provision of student feedback. For instance, Tanes, Arnold, King, and Remnet (2011) undertook a content analysis of the feedback messages sent by instructors to students after receiving the Course Signals alerts. The authors noted that instructive or process feedback types were rarely observed in the instructors' messages to students. This finding is in marked contrast to the vast volume of research demonstrating that feedback is most effective when information is provided "at the process level" (for a review, see Hattie & Timperley, 2007). Rather than receiving messages with detailed instructive feedback on how to address identified deficiencies in their learning, students identified as at risk would exclusively receive multiple messages carrying low-level summative feedback. Consistent with educational research, no effect of summative feedback on learning success was identified.

While the simplicity of the traffic light metaphor of Course Signals was clear to the target users and a simple and effective way to
prompt action, the tool design did not have sufficient theoretically informed functionality to encourage the adoption of effective instructional and intervention practices. This is not surprising, as Course Signals was initially designed as an academic analytics tool (Arnold & Pistilli, 2012). It is only more recently that the software has been promoted within the domain of learning analytics. However, as an academic analytics tool, Course Signals is well suited to its proposed intent and addresses the needs of the envisioned stakeholders (e.g., university administrators, government officials, and funders); that is, it provides access to data forecasts concerning various institutional trends for resource planning.
What we learn from this case study is that learning analytics resources should be well aligned with established research on effective instructional practice. In so doing, we can move from static prediction of a single academic outcome to more sustainable and replicable insights into the learning process. This is consistent with observations from instructors who appreciated features of the LOCO-Analyst learning analytics tool that allowed for establishing links between the students' activities (e.g., discussion messages posted) and "the domain topics the students were having difficulties with" (Ali et al., 2012, p. 485). That is, instructors expressed a preference for learning analytics features that offer insights into learning processes and identify gaps in student understanding over simple performance measures. With such insights, instructors can identify weak points in the learning activities performed by their students and the topics the students have struggled with, and provide instructive and process-related feedback on how to improve their learning.
Directions
As noted above, it is essential that future learning analytics developments and innovations draw on, and advance, educational research and practice. To do so, we posit that the field of learning analytics needs to ground its data collection, measurement, analysis, reporting, and interpretation processes within the existing research on learning. In this paper, we build on three axioms that Winne (2006) identified as commonly accepted foundations of research knowledge about learning in educational psychology: learners construct knowledge, learners are agents, and data includes randomness. We use these three axioms to interrogate the critical issues for the development of the learning analytics field.
The Winne and Hadwin (1998) model of self-regulated learning is based on the COPES model; that is, it builds on the conditions, operations, products, evaluations, and standards learners adopt in order to explain how they construct knowledge. In essence, learners construct knowledge by using (cognitive, digital, and physical) tools to perform operations on raw information in order to create products of learning. For example, a student can use online discussions (as a tool) to synthesize and integrate (as operations performed) knowledge gained from different sources of information in order to develop a critical perspective (as a product of learning) on a problem under study. In this process, learners use standards to evaluate the products of their learning and the effectiveness of the operations performed and tools used, as part of metacognitive monitoring and control. A group of individual standards makes up a learning goal that learners set when working on a specific learning task. For example, a goal can be composed of the level of cohesiveness of the argument created in an online discussion message when developing a critical perspective, the number, types, and trustworthiness of the information sources consulted when building that argument, or the time the learner decides to spend on collecting those sources.
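To make the COPES elements concrete for analytics purposes, the following minimal Python sketch (not drawn from Winne and Hadwin's work; all field and variable names are illustrative) shows one way the conditions, operations, products, evaluations, and standards surrounding a single learning event might be represented so that evaluations can be compared against standards, mirroring metacognitive monitoring.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LearningEvent:
    # All field names are illustrative, not a standard schema.
    conditions: Dict[str, str]     # internal/external conditions, e.g. {"tool": "online_discussion"}
    operations: List[str]          # operations performed on raw information
    products: List[str]            # identifiers of learning products created
    evaluations: Dict[str, float]  # self-evaluations of products against standards
    standards: Dict[str, float] = field(default_factory=dict)  # goal criteria set by the learner

event = LearningEvent(
    conditions={"tool": "online_discussion", "graded": "no"},
    operations=["synthesize", "integrate"],
    products=["discussion_post_42"],
    evaluations={"argument_cohesion": 0.6},
    standards={"argument_cohesion": 0.8},
)

# Comparing evaluations with standards mirrors metacognitive monitoring:
# a positive gap means the product does not yet meet the learner's standard.
gaps = {name: event.standards[name] - score
        for name, score in event.evaluations.items() if name in event.standards}
print(gaps)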
The notion that learners are agents implies that they have "the capability to exercise choice in reference to preferences" (Winne, 2006, p. 8). The choices learners make are influenced by the (internal and external) conditions, which in turn can affect the standards learners use in their metacognitive monitoring and control. Examples of external conditions include course instructional designs, such as grading an online discussion and providing appropriate scaffolds to guide how students participate and use the learning tools. Examples of internal conditions are metacognitive awareness and skills (e.g., whether learners are aware that discussions can be an effective means to develop critical thinking and, if so, how skilled they are at using them for that purpose), the level of motivation to participate in online discussions, or prior knowledge about the topic discussed online.
Effects of Instructional Conditions
To date, learning analytics has focused on investigating the effects of the operations performed, using proxy measures of learning derived from trace data (i.e., counts of log-in activity, accesses to discrete resources, and time spent online). However, far less attention has been dedicated to the other elements of COPES, such
as how and to what extent the conditions affect the operations performed, the products created, and the standards used for metacognitive monitoring. A lack of consideration of these elements raises significant concerns as to the validity of learning analytics results and their interpretation. For example, Gašević, Dawson, Rogers, and Gasevic (2014) demonstrate that the association of trace data about students' activity in an LMS with academic performance is moderated by instructional conditions. The Gašević et al. (2014) analysis of a regression model created by combining data from nine undergraduate courses at an Australian university showed that only three variables – the number of logins and the numbers of operations performed on discussion forums and resources – were significant predictors of academic performance. The authors noted that these three variables explained approximately 21% of the variability in academic performance. However, in a practical sense, these predictors cannot be reasonably translated into actionable recommendations to facilitate student learning. Furthermore, there is a limited degree of feedback that can be provided to instructors without a detailed understanding of the pedagogical intent associated with their tool selection and associated learning activities. Thus, critical insights from such learning analytics could hardly be used to inform course learning designs, as previously suggested by Lockyer, Heathcote, and Dawson (2013). When regression models were created for each course separately, the variables indicative of the LMS tools relevant to the learning design intention of each course emerged. For example, in the communication course with an emphasis on writing, the use of Turnitin for plagiarism detection and assignment descriptions were significant predictors of the students' grades. This course-specific regression model explained more than 70% of the variability in the final grades of the communication students. In contrast, in the graphics course, no significant predictor of students' grades was identified within the available trace data. This finding reflects the course design and technology choices of the instructor: in this case, the course did not utilize the institutional LMS. Instead, the course learning activities were performed in public social media software. As such, any counts of log-ins, tools, and resources within the LMS course site were effectively redundant for this particular course.
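To illustrate the kind of analysis described above, the following Python sketch contrasts a pooled regression model with per-course models. It is not the study's actual analysis: the predictor names (logins, forum posts, resource views), the course labels, the data, and the use of scikit-learn are all assumptions made for illustration only.

# Illustrative sketch (not the study's analysis): predicting final grades
# from LMS trace counts, first pooled across courses, then per course.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical trace data: one row per student; real analyses need many more students.
df = pd.DataFrame({
    "course":         ["COMM101"] * 4 + ["GRAPH201"] * 4,
    "logins":         [30, 45, 12, 50, 5, 8, 6, 7],
    "forum_posts":    [10, 22, 3, 25, 0, 1, 0, 2],
    "resource_views": [80, 120, 20, 140, 10, 15, 12, 9],
    "final_grade":    [65, 82, 48, 88, 70, 55, 74, 61],
})
predictors = ["logins", "forum_posts", "resource_views"]

# Pooled model: one set of coefficients for all courses.
pooled = LinearRegression().fit(df[predictors], df["final_grade"])
print("pooled R^2:", round(r2_score(df["final_grade"], pooled.predict(df[predictors])), 2))

# Per-course models: coefficients and explained variance can differ sharply,
# reflecting different instructional conditions and tool use in each course.
for course, sub in df.groupby("course"):
    model = LinearRegression().fit(sub[predictors], sub["final_grade"])
    print(course, "R^2:", round(r2_score(sub["final_grade"], model.predict(sub[predictors])), 2))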
The reasons for the diversity observed in the findings of the Gašević et al. (2014) study may be attributed to the differing instructional models and technology choices across the courses. For instance, educational research has shown that instructors have a significant influence on a learner's choice of tools within an LMS (McGill & Klobas, 2009) and the learning approach they follow (Trigwell, Prosser, & Waterhouse, 1999). The difference in instructional conditions is likely to shed light on the inconsistent results of the trace data-based predictors of academic success that are often reported in the literature (Jayaprakash et al., 2014; Macfadyen & Dawson, 2012). This supports the earlier proposition stressing the importance of framing future analytics studies within the existing education research.
Effects of Internal Conditions
Learners are active agents in their learning process. This simple statement has many significant implications. Learner agency implies that even when learners receive the same instructional conditions, they may choose to adopt alternate study methods. As such, we need to give greater emphasis to the importance of internal conditions for facilitating student learning. Existing studies about student choice and use of learning tools have revealed significant differences in both the quantity of tool use and how specific tools are adopted to complete a learning task. Building on the work of Winne (2006), Lust, Elen, and Clarebout (2013) posit that the use of learning tools can be considered a self-regulated learning process whereby the choices a student makes about a tool are based on (internal) conditions and individual learning goals. In their study with undergraduate students of education in a blended course, Lust et al. (2013) identified four disparate groups of students based on their use of learning tools. The groups were classified as: i) no-users – low-level adoption of any tool in the LMS suggested to them in the course design (e.g., quizzes, web lectures, and discussion forums); ii) intensive active learners – used all tools suggested by the course design and used those tools actively; iii) selective users – used only a selected number of the tools offered to them; and iv) intensive superficial users – used all the tools and spent more time than the other groups, predominantly on cognitively passive activities such as reading discussion posts in lieu of contributing to the forum. A further multivariate analysis performed by Lust et al. (2013) revealed that the differences between user groups – where the groups were formed as a consequence of learners exercising their agency – were as great as the effects of instructional conditions (e.g., graded vs. non-graded tool use) on tool use reported in other studies (Gašević, Mirriahi, & Dawson, 2014).
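The following Python sketch illustrates, in general terms, how tool-use profiles might be grouped to surface patterns such as those reported by Lust et al. (2013). The features, the data, and the choice of k-means are illustrative assumptions only, not the method used in that study.

# Illustrative sketch: grouping students by their tool-use profiles.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Rows: students. Columns: quiz attempts, web lectures viewed,
# forum posts written, forum posts read, total minutes online (all hypothetical).
X = np.array([
    [ 0,  1,  0,   5,   60],   # barely uses any tool
    [12, 10, 15,  40,  600],   # uses everything, actively
    [10,  0,  0,   2,  200],   # selective: quizzes only
    [ 8,  9,  1, 120,  900],   # uses everything, but mostly passive reading
    [ 1,  0,  0,   3,   50],
    [11,  9, 12,  35,  550],
])

X_scaled = StandardScaler().fit_transform(X)   # put features on a common scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)  # cluster assignments; with richer data these could resemble the
               # no-user / intensive-active / selective / intensive-superficial profiles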
Effects of Learning Products and Strategy
Learning products and the standards used for learning are essential factors that need to be captured to describe learning processes comprehensively. Although the frequency of activity and time on task are sound indicators of the extent to which learners use a tool, a high volume on these measures cannot be directly interpreted as a high quality of learning. What is of importance is the specific learning strategies adopted by individual students. Learning strategy can be reflective of metacognitive monitoring and control operations, as these main metacognitive operations are based on learning standards. For example, in a study of the effects of teaching on the acceptance of a video annotation software tool for self-reflection, Gašević, Mirriahi, Dawson, and Joksimovic (2014) identified that students in the performing arts created a high level of annotations in a course where the annotation tool, used for self-reflection on video recordings of an individual's performance, was optional (i.e., not graded). The level of annotations created was as high as it was in a prior course where the tool was mandatory and contributed to the students' final course grades.
Simply counting the number of operations performed within the video annotation study (Gašević, Mirriahi, Dawson, et al., 2014) did not provide an effective measure of the quality of the learning products (i.e., the text of the annotations) nor of the adopted learning strategy. However, where counts fail, the Coh-Metrix measures succeed (McNamara, Graesser, McCarthy, & Cai, 2014). The Coh-Metrix analysis showed a significant decline in the cohesiveness and comprehensiveness of the text of the self-reflections (i.e., learning products) in the learners' video annotations. Moreover, after representing learning strategies as transition graphs1 of the activities learners performed and calculating the density of those graphs as a measure of metacognitive monitoring, as suggested by Hadwin, Nesbit, Jamieson-Noel, Code, and Winne (2007), a considerable decline in metacognitive monitoring was also observed. This is in part due to the private nature of the annotation when undertaken in the absence of a graded component. For instance, notes taken without any intention of sharing them with others typically do not have the same readability as notes prepared for sharing with peers or instructors. However, the decrease is a cause for concern, as metacognitive monitoring is the "key SRL process" (Greene & Azevedo, 2009, p. 18) that promotes understanding. This finding has much significance for learning analytics research. In essence, a continued focus on event activities ignores any examination of the quality of the learning products and strategy adopted.
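A minimal Python sketch of the transition-graph approach is given below, following the construction described in the end note. The event names are hypothetical, and the density measure shown (distinct observed transitions divided by all possible directed transitions) is only one simple variant of the graph-density measures discussed by Hadwin et al. (2007).

# Illustrative sketch: building a transition graph from a learner's event
# sequence and computing its density as a crude proxy for metacognitive monitoring.
from collections import defaultdict

events = ["view_video", "annotate", "view_video", "annotate",
          "review_annotation", "annotate", "view_video"]

# Contingency matrix: transitions[a][b] counts observed moves from event a to event b.
transitions = defaultdict(lambda: defaultdict(int))
for a, b in zip(events, events[1:]):
    transitions[a][b] += 1

# Density of the directed transition graph: distinct observed transitions
# divided by all possible transitions between the distinct event types.
event_types = sorted(set(events))
n = len(event_types)
observed_edges = sum(1 for a in transitions for b in transitions[a])
possible_edges = n * (n - 1)  # directed, ignoring self-loops
density = observed_edges / possible_edges if possible_edges else 0.0
print(f"{observed_edges} of {possible_edges} possible transitions observed; density = {density:.2f}")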
Summary and Future Considerations
The discussion offered in this paper reflects the impetus for building the field of learning analytics upon, and contributing to, the existing research on learning and education. Clearly, counts of certain types of activities that learners perform with online learning tools can be correlated with academic performance. However, the true test for learning analytics is demonstrating a longer term impact on student learning and teaching practice. In this context, the field of learning analytics can benefit from past lessons in information seeking. Reflecting on information seeking as a developing field, Wilson (1999, p. 250) noted that "many things were counted, from the number of visits to libraries, to the number of personal subscriptions to journals and the number of items cited in papers. Very little of this counting revealed insights of value for the development of theory or, indeed, of practice." Significant progress in research and practice only really commenced when information seeking was framed within "robust theoretical models of human behaviour" (Wilson, 1999, p. 250). The field of learning analytics must adopt a similar approach.
While it is often perceived that education is rife with data, very little of it relates to capturing the conditions for learning (internal and external). For example, external conditions such as instructional design, social context, previous learning history with the use of a particular tool, and revisions to the course content can radically change the results, the interpretation of findings, and the actionable value of learning analytics. Similarly, internal conditions such as achievement goal orientation, cognitive load, or epistemic beliefs are yet to be fully understood in relation to their collection and measurement with, or derivation from, trace data. The work of Zhou and Winne (2012) could provide a future research direction on how to integrate the collection of variables about internal conditions with the collection of trace data. The authors suggested that the use of a highlighting tool for reading text in an online learning environment could be framed within the achievement goal orientation framework. Essentially, each highlight can be associated with a different goal-orientation tag that is easy for learners to understand and use, such as "interesting" for a mastery-
approach goal orientation and "important to get a good grade" for a performance-approach goal orientation. Similar instrumentation and measurement approaches could be incorporated into existing learning tools so that more theoretically founded trace data about internal conditions, temporally proximal to the points at which learning activities are performed, can be collected. Not only can this type of instrumentation strengthen the theoretical foundation of measurement in learning analytics, but it also provides valuable contributions to educational research by helping to overcome the well-known limitations of self-reported measures (Zhou & Winne, 2012).
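As an illustration of such instrumentation, the following Python sketch logs highlight events together with a learner-chosen tag that maps onto a goal-orientation construct, along the lines suggested by Zhou and Winne (2012). The function, tag labels, and log format are hypothetical assumptions made for this sketch.

# Illustrative sketch: logging highlight events with a goal-orientation tag
# so that trace data carries a theoretically framed signal about internal conditions.
import json
import time

# Tags shown to learners map onto goal-orientation constructs behind the scenes.
TAG_TO_CONSTRUCT = {
    "interesting": "mastery_approach",
    "important to get a good grade": "performance_approach",
}

def log_highlight(user_id: str, text: str, tag: str, log_file: str = "trace.log") -> None:
    """Append one highlight event, with its goal-orientation tag, to a trace log."""
    event = {
        "timestamp": time.time(),
        "user_id": user_id,
        "event": "highlight",
        "text": text,
        "tag": tag,
        "construct": TAG_TO_CONSTRUCT.get(tag, "unknown"),
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(event) + "\n")

# Example: a learner highlights a passage and labels it as exam-relevant.
log_highlight("s123", "Feedback is most effective at the process level.",
              "important to get a good grade")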
The analysis of learning products and strategy has received limited attention in the existing learning analytics research, despite its demonstrated importance for educational research (Hadwin et al., 2007; McNamara et al., 2014). Although learning products can take different forms and thus require different measurement approaches, the primary emphasis in the learning analytics field has presently been on memory recall, through the use of either scores on online quizzes or crude proxies such as course grades, which do not accurately measure learning products but simply academic performance at a given point in time. However, many other important learning products are available in the trace data already collected by learning tools. The best example is unstructured text – e.g., text created in online discussions, tags, or blogs. In order to analyze these textual products of learning, there is a need to scale up qualitative research methods. The use of text mining and natural language processing methods to conduct content and discourse analysis is a critically important research direction (McNamara et al., 2014).
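The following Python sketch is a deliberately crude illustration of scaling up the analysis of textual learning products. Coh-Metrix computes far richer and validated indices; the measures below (word and sentence counts, type-token ratio, mean sentence length) are simple lexical proxies assumed only for demonstration, and the posts are hypothetical.

# Crude sketch only: simple lexical measures over textual learning products.
import re

def lexical_measures(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,  # crude lexical diversity
        "mean_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }

posts = [
    "I compared the two sources and they disagree about the cause.",
    "ok",
]
for post in posts:
    print(lexical_measures(post))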
Learning strategy, as discussed in this paper, can be indicative of the dynamic processes activated while learning. Analysis of learning strategy and its associated processes requires the modeling and analysis of latent variables that are often not possible to detect with simple counts of certain learning operations. For such dynamic processes to be understood, the process nature of learning needs to be accounted for, and learning modelled as a process, by building on existing knowledge from areas such as graph theory and process mining (Reimann, Markauskaite, & Bannert, 2014).
Although much work has been done on visualizing learning analytics results, typically in the form of dashboards (Verbert, Duval, Klerkx, Govaerts, & Santos, 2013), their design and use are far less understood. The design of dashboards can lead to the implementation of weak and perhaps detrimental instructional practices as a result of promoting ineffective feedback types and methods (Tanes et al., 2011). For example, a common approach is to offer visualizations that compare a student with the class average. Corrin and de Barba (2014) investigated the effects of such comparisons promoted by dashboards and observed that students with strong academic standing interpreted (i.e., misinterpreted) the comparisons as if they had done well in a class after seeing they were above the class average, even though they had actually under-performed relative to both their previous academic performance and the goals they had set before enrolling in the class. Likewise, a negative effect of such comparison dashboards on students with low levels of self-efficacy is a hypothesis commonly heard in discussions within the learning analytics community. In order to design effective learning analytics visualizations and dashboards, it is essential to consider their instructional, learning, and sensemaking benefits. In building on the existing educational research, foundations in distributed cognition and self-regulated learning appear to be very promising venues for future research (Liu, Nersessian, & Stasko, 2008; Zhou & Winne, 2012).
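The following Python sketch, with hypothetical data and student identifiers, illustrates why the reference point matters: the same score can sit above the class average while falling short of the learner's own prior performance and goals, which is precisely the misreading Corrin and de Barba (2014) observed.

# Illustrative sketch: the same score looks different against different reference points.
scores = {"s1": 78, "s2": 55, "s3": 91, "s4": 60}            # current course scores
prior_average = {"s1": 88, "s2": 50, "s3": 85, "s4": 70}     # each student's own history
goals = {"s1": 90, "s2": 60, "s3": 80, "s4": 75}             # goals set before the course

class_average = sum(scores.values()) / len(scores)
for sid, score in scores.items():
    vs_class = score - class_average       # the comparison many dashboards show
    vs_self = score - prior_average[sid]   # comparison against the learner's own history
    vs_goal = score - goals[sid]           # comparison against the learner's own goal
    print(f"{sid}: vs class {vs_class:+.1f}, vs own history {vs_self:+.1f}, vs goal {vs_goal:+.1f}")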
Finally, special attention needs to be paid to the development of a learning analytics culture and the policies around it. Although it may seem promising to automate many measurements and predictions about learning and teaching, a sole focus on outcomes as the primary target of learning analytics, without consideration of learning and teaching processes, can have detrimental consequences. In such cases, as suggested by Goodhart's law (Elton, 2004), certain measures – proxies of learning and of constructs associated with learning – can cease to be good measures. In an analogy to teaching to the test rather than teaching to improve understanding, learning analytics that do not promote effective learning and teaching are susceptible to the use of trivial measures, such as an increased number of log-ins into an LMS, as a way to evaluate learning progression. In order to avoid such undesirable practices, the involvement of the relevant stakeholders – e.g., learners, instructors, instructional designers, information technology support, and institutional administrators – is necessary in all stages of the development, implementation, and evaluation of learning analytics and of the culture that the extensive use of data in education carries.
Address correspondence regarding this article to Dragan
Gašević; Digital Education; University of Edinburgh; Old
Moray House; Holyrood Rd; Edinburgh EH8 8AQ; United
Kingdom; email: dgasevic@acm.org; phone: +44 131 651
6138
End notes
1. Transition graphs are constructed from a contingency matrix in which the rows and columns are all events logged by the video annotation tool. The rows denote the start nodes and the columns the end nodes of the transition edges. To create a transition edge from event A to event B, the number one is written in the matrix cell intersecting row A and column B. The number in that cell is then incremented by one for any further appearance of the edge from event A to event B.
References
Ali, L., Hatala, M., Gašević, D., & Jovanović, J. (2012).
A qualitative evaluation of evolution of a learning
analytics tool. Computers & Education, 58(1), 470–489.
doi:10.1016/j.compedu.2011.08.030
Arnold, K. E., & Pistilli, M. D. (2012). Course signals at
Purdue: using learning analytics to increase student
success. In Proceedings of the 2nd International Conference
on Learning Analytics and Knowledge (pp. 267–270). New
York, NY, USA: ACM. doi:10.1145/2330601.2330666
Bayne, S., & Ross, J. (2014). The pedagogy of the Massive Open Online Course: the UK view. The Higher Education Academy. Retrieved from https://www.heacademy.ac.uk/resources/detail/elt/the_pedagogy_of_the_MOOC_UK_view
Corrin, L., & de Barba, P. (2014). Exploring students’
interpretation of feedback delivered through learning
analytics dashboards. In Proceedings of the ascilite 2014
conference. Dunedin, NZ.
Dawson, S., Gašević, D., Siemens, G., & Joksimovic, S.
(2014). Current State and Future Trends: A Citation
Network Analysis of the Learning Analytics Field. In
Proceedings of the Fourth International Conference on
Learning Analytics And Knowledge (pp. 231–240). New
York, NY, USA: ACM. doi:10.1145/2567574.2567585
Elton, L. (2004). Goodhart’s Law and Performance Indicators
in Higher Education. Evaluation & Research in Education,
18(1-2), 120–128. doi:10.1080/09500790408668312
Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2014). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting learning success. Submitted to The Internet and Higher Education.
Gašević, D., Mirriahi, N., & Dawson, S. (2014). Analytics of the Effects of Video Use and Instruction to Support Reflective Learning. In Proceedings of the Fourth
International Conference on Learning Analytics And
Knowledge (pp. 123–132). New York, NY, USA: ACM.
doi:10.1145/2567574.2567590
Gašević, D., Mirriahi, N., Dawson, S., & Joksimovic, S. (2014).
What is the role of teaching in adoption of a learning
tool? A natural experiment of video annotation tool use.
Submitted for Publication to Computers & Education.
Greene, J. A., & Azevedo, R. (2009). A macro-level analysis
of SRL processes and their relations to the acquisition
of a sophisticated mental model of a complex system.
Contemporary Educational Psychology, 34(1), 18–29.
doi:10.1016/j.cedpsych.2008.05.006
Hadwin, A. F., Nesbit, J. C., Jamieson-Noel, D., Code, J.,
& Winne, P. H. (2007). Examining trace data to explore
self-regulated learning. Metacognition and Learning, 2(2-
3), 107–124. doi:10.1007/s11409-007-9016-7
Hattie, J., & Timperley, H. (2007). The Power of Feedback.
Review of Educational Research, 77(1), 81–112.
doi:10.3102/003465430298487
Jayaprakash, S. M., Moody, E. W., Lauría, E. J. M., Regan,
J. R., & Baron, J. D. (2014). Early Alert of Academically
At-Risk Students: An Open Source Analytics Initiative.
Journal of Learning Analytics, 1(1), 6–47.
Kovanović, V., Joksimović, S., Gašević, D., Siemens, G.,
& Hatala, M. (2014). What public media reveals about
MOOCs? Submitted for Publication to British Journal of
Educational Technology.
Liu, Z., Nersessian, N. J., & Stasko, J. T. (2008). Distributed
cognition as a theoretical framework for information
visualization. IEEE Transactions on Visualization and
Computer Graphics, 14(6), 1173–1180.
Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With
Learning Design. American Behavioral Scientist, 57(10),
1439–1459. doi:10.1177/0002764213479367
Lust, G., Elen, J., & Clarebout, G. (2013). Students’
tool-use within a web enhanced course: Explanatory
mechanisms of students’ tool-use pattern. Computers
in Human Behavior, 29(5), 2013–2021. doi:10.1016/j.
chb.2013.03.014
Macfadyen, L. P., & Dawson, S. (2012). Numbers Are Not
Enough. Why e-Learning Analytics Failed to Inform an
Institutional Strategic Plan. Educational Technology &
Society, 15(3).
McGill, T. J., & Klobas, J. E. (2009). A task–technology
fit view of learning management system impact.
Computers & Education, 52(2), 496–508. doi:10.1016/j.
compedu.2008.10.002
McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z.
(2014). Automated Evaluation of Text and Discourse with
Coh-Metrix. Cambridge, UK: Cambridge University Press.
OECD. (2013). Education at a Glance 2013: OECD
Indicators. Retrieved from http://dx.doi.org/10.1787/eag-
2013-en
Reimann, P., Markauskaite, L., & Bannert, M. (2014).
e-Research and learning theory: What do sequence and
process mining methods contribute? British Journal of
Educational Technology, 45(3), 528–540. doi:10.1111/
bjet.12146
Siemens, G., & Gašević, D. (2012). Special Issue on Learning
and Knowledge Analytics. Educational Technology &
Society, 15(3), 1–163.
Tanes, Z., Arnold, K. E., King, A. S., & Remnet, M. A. (2011).
Using Signals for appropriate feedback: Perceptions and
practices. Computers & Education, 57(4), 2414–2422.
doi:10.1016/j.compedu.2011.05.016
Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations
between teachers’ approaches to teaching and students’
approaches to learning. Higher Education, 37(1), 57–70.
doi:10.1023/A:1003548313194
Verbert, K., Duval, E., Klerkx, J., Govaerts, S., & Santos, J.
L. (2013). Learning Analytics Dashboard Applications.
American Behavioral Scientist, 57(10), 1500–1509.
doi:10.1177/0002764213479363
Wilson, T. D. (1999). Models in information behaviour
research. Journal of Documentation, 55(3), 249–270.
doi:10.1108/EUM0000000007145
Winne, P. H. (2006). How Software Technologies Can
Improve Research on Learning and Bolster School
Reform. Educational Psychologist, 41(1), 5–17.
doi:10.1207/s15326985ep4101_3
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-
regulated learning. In D. J. Hacker, J. Dunlosky, & A.
C. Graesser (Eds.), Metacognition in educational theory
and practice (pp. 277–304). Mahwah, NJ, US: Lawrence
Erlbaum Associates Publishers.
Zhou, M., & Winne, P. H. (2012). Modeling academic
achievement by self-reported versus traced goal
orientation. Learning and Instruction, 22(6), 413–419.
doi:10.1016/j.learninstruc.2012.03.004