The Role of Achievement Goal Orientations When
Studying Effect of Learning Analytics Visualizations
Sanam Shirazi Beheshitha, Marek Hatala
School of Interactive Arts and Technology
Simon Fraser University
Surrey, Canada
sshirazi,mhatala@sfu.ca
Dragan Gašević, Srećko Joksimović
Schools of Education and Informatics
University of Edinburgh
Edinburgh, UK
dragan.gasevic,s.joksimovic@ed.ac.uk
ABSTRACT
When designing learning analytics tools for use by learners we
have an opportunity to provide tools that consider a particular
learner’s situation and the learner herself. To afford actual impact
on learning, such tools have to be informed by theories of educa-
tion. Particularly, educational research shows that individual dif-
ferences play a significant role in explaining students’ learning
process. However, limited empirical research in learning analytics
has investigated the role of theoretical constructs, such as motiva-
tional factors, that are underlying the observed differences be-
tween individuals. In this work, we conducted a field experiment
to examine the effect of three designed learning analytics visuali-
zations on students’ participation in online discussions in authen-
tic course settings. Using hierarchical linear mixed models, our
results revealed that effects of visualizations on the quantity and
quality of messages posted by students with differences in
achievement goal orientations could either be positive or negative.
Our findings highlight the methodological importance of consid-
ering individual differences and pose important implications for
future design and research of learning analytics visualizations.
Categories and Subject Descriptors
K.3.1 [Computers and Education]: Distance Learning
General Terms
Human Factors, Measurement.
Keywords
Learning Analytics, Visualizations, Dashboards, Achievement
Goal Orientation, Online Discussions
1. INTRODUCTION
Recent advancements in technology-enhanced learning offer a powerful yet challenging opportunity to observe learning analytics from the students' perspective. Learning analytics tools, when put in the hands of students, can support their learning, particularly in higher education [23]. One way of presenting learning
analytics to students is through visualizations and dashboards
[35]. With the intent to offer opportunities for awareness, reflection, sense-making, and impact on students' learning [35], existing
learning analytics visualizations and dashboards present trace
data to students about their interaction with the learning environ-
ment such as use of resources [17, 24, 30], time spent on activities
[2, 22], generated artifacts [2], or social interactions with others
[7, 21, 27]. While some of the existing dashboards are targeted at
providing general information that can facilitate awareness and
monitoring of learning activities, others go further and directly guide students to take actions to control their learning [5, 8].
In terms of evaluation, one line of existing research has focused on usability and students' perceived usefulness [17, 30]. Other studies indicate a positive influence of learning analytics dashboards and visualizations on engagement [28], academic performance [2], test results and assessments [4, 22], and retention rates [2] for the overall population of students. A large number of studies that focus on assessing learning impact have been carried out in limited lab settings [4, 22, 28]. Only a few studies have been conducted in course settings at large scale, such as [2].
As research on learning analytics dashboards and visualizations expands, further empirical research is needed to understand the varying impact of the information presented through visualizations on different aspects of an individual student's learning process and outcomes. Research in educational psychology shows that individuals differ in their readiness to profit from a particular treatment in a particular context [34]. This indicates the possible varying effect of a treatment for individual students. In our study, we focus on theoretical constructs of so-called aptitudes that can shed light on the observed differences between individuals in a learning context (e.g., motivational constructs, epistemic beliefs, approaches to learning, and attitudes) [36]. Our aim is to investigate the effect of learning analytics visualizations on learning behavior by taking into account individual differences.
In so doing, we conducted a field experiment to examine the ef-
fects of different information presented through learning analytics
visualizations on students’ learning behavior while controlling for
their individual differences. In this work we focused on a motiva-
tional construct called achievement goal orientation [12].
Achievement goal orientation (AGO) is a well-established moti-
vational construct describing "the purpose of engagement in an
achievement behavior" [12]. In the early definitions, two main
goal orientations were identified: i) mastery goal, which was con-
ceptualized in terms of development of task competence; and ii)
performance goal, which was conceived as the demonstration of competence [26]. In terms of valence, these achievement goals were further distinguished by approaching success and avoiding failure, e.g., being able to accomplish a task or avoiding failing a test, respectively [11]. In recent AGO models, competence has been redefined as the standard used in evaluating how well one is doing [13]. Task-based goals use absolute standards and define competence based on doing well or poorly relative to the requirements of the task. Self-based and other-based goals adopt intrapersonal and interpersonal standards, respectively, and define competence in terms of doing well or poorly with respect to how one has done before or can potentially do in the future, or in comparison to others [13].
To discover possible associations between individual differences and the information presented, we designed three learning analytics visualizations, each showing particular information about an aspect of students' participation in online discussions in a university-level blended course. The visualizations were selected to potentially address students with particular goal orientations.
tions. We chose to focus on asynchronous online discussions, as
these are commonly exploited to support collaborative learning
[25] and can be seen as an environment in which students can
interact to build both collective and individual understanding
through conversation with their peers [20]. Critically, the level
and quality of students' participation is largely influenced by students' agency [37]. Learning analytics in the form of reports and visualizations has been suggested to support participation and productive engagement in online discussions for the population of students as a whole [41]. However, more attention to the impact of what is presented on students with differences in achievement goals is warranted [40]. Our results not only substantiate this assumption; they also have significant implications for broader learning analytics research.
2. METHOD
2.1 Study Design and Research Questions
To study the effects of different information presented through
visualization on the posting behavior of students with individual
differences, we conducted a field experiment in an authentic
blended course setting. Students participated in an online group
discussion activity on a topic related to the course content. Each
student was randomly assigned to an experimental condition in
which they had access to one of the three visualizations informing
them about how they were performing in the group discussion activity.
Students’ goal orientations were measured through a self-reported
instrument.
We defined our research questions as follows:
RQ1: Is there an association between visualization type and the
quantity of students’ posts when controlled for their self-reported
achievement goal orientations?
RQ2: Is there an association between visualization type and the
quality of students’ posts when controlled for their self-reported
achievement goal orientations?
2.2 Learning Analytics Visualizations
The choice of learning analytics visualizations was guided by the
main goal of this study, i.e., to establish the association between
type of information visualized and its effect on students’ behavior.
Second, we expected that the effect of the visualizations would vary with the goals students pursue. The three visualizations selected aimed to potentially align with different types of motivations underlying students' goals. Each visualization also considered which norm students would be evaluating themselves against, which varies for different goal orientations. When cumulative performance is shown, we used class average values (up to 200 students) rather than the group average of 4-11 students. The reason is that in our LMS, students always see all contributions in a single view and hence could judge other group members' performance directly.
The Class Average visualization allows students to compare their
posting performance with the average number of messages posted
by the rest of the class (Figure 1). Comparison of the students
with the class average has been the most widely used approach
when offering learning analytics dashboards and visualiza-
tions [6]. Students who have a stronger inclination towards per-
formance orientation may find this visualization beneficial, with a
caveat that its effect on students’ participation and learning was
not always positive [6, 41]. We included this visualization mainly
because of its prevalence in deployed systems.
The Top Contributors visualization shows the count of posted
messages by the student in comparison to the top contributors in
the class. Top contributors are the top 5+ individuals in the class
who have had the highest number of messages posted (Figure 2).
Not only are students able to see the performance of the top contributors, but the visualization also increases those contributors' individual recognition by showing their names and profile pictures. The norm shown by this visualization is that of the best performing students; we expected it to positively motivate students with an other-approach tendency and, to some extent, those with a self-approach tendency, while it may be disturbing to students with an avoidance valence for the same goals.
While both prior visualizations focused on the count of messages, the Quality visualization focuses on the content of posted messages. It represents how many of the key concepts a student has covered within their posted messages and how well they have integrated those concepts with logically related ideas. Students can compare the quality of their messages with that of the rest of the class; hence, we expected it to have a positive effect on students pursuing mastery, both those with a task-approach and those with a self-approach tendency. The key concepts for each discussion topic were previously identified by the course instructor. The visualization (Figure 3) showed the quality for each key concept as a color-coded square. The color was determined by computing Latent Semantic Analysis (LSA), a natural language processing technique for measuring the coherence of text1, at the sentence level [14].
Figure 1: The design of the Class Average visualization
Figure 2: The design of the Top Contributors visualization
Figure 3: The design of the Quality visualization
1 Coherence has been described as "the unifying element of good writing" and hence can be used as a way to measure the quality of text (http://www.elc.polyu.edu.hk/elsc/material/Writing/coherenc.htm).
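As a concrete illustration of the scoring idea, the following Python sketch computes a sentence-level LSA coherence score in the spirit of [14]. It is not the implementation used in the study; the vectorizer, the LSA dimensionality, and the example sentences are illustrative assumptions.

```python
# Illustrative sketch of sentence-level LSA coherence (not the study's code).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def lsa_coherence(sentences, n_components=2):
    """Mean cosine similarity between adjacent sentences in LSA space."""
    # Build a term-sentence matrix and project it into a low-rank LSA space.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    k = min(n_components, tfidf.shape[1] - 1, len(sentences) - 1)
    vectors = TruncatedSVD(n_components=max(k, 1)).fit_transform(tfidf)
    # Coherence: average similarity of each sentence to the one before it.
    sims = [cosine_similarity(vectors[i:i + 1], vectors[i + 1:i + 2])[0, 0]
            for i in range(len(sentences) - 1)]
    return float(np.mean(sims))

message = ["Latent semantic analysis builds a reduced concept space.",
           "In that space, related sentences have similar vectors.",
           "My dog likes to chase squirrels in the park."]
print(lsa_coherence(message))  # higher values suggest more coherent text
```

In such a scheme, a message whose consecutive sentences stay in the same semantic neighborhood scores higher; the off-topic third sentence in the example would pull the score down.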
More complex dashboards with several metrics may address dif-
ferent achievement goal orientations at the same time. For the
purpose of this study our selected visualizations included only one
metric of student performance and the same metric was shown at
the class level. We were explicitly not concerned with providing a
comprehensive overview of students' performance in a single cumulative view.
2.3 Online Group Discussion Activity
Design and facilitation of discussions in all participating courses
followed guidelines suggested in collaborative learning literature
[29, 43]. The students were split into several groups of 4-11
members and were asked to participate over a period of 7-14 days.
All of the groups in a particular discussion were given the same
open-ended question related to the course content and were ex-
pected to engage in the discussion by exploring different aspects
of the question itself, proposing different ideas to address them,
selecting some ideas and finally deciding on one answer as a
group and justifying it with a clear rationale. Engagement in the
discussion was mandatory and was considered a graded compo-
nent of the course (5% of the final grade per discussion task). A marking rubric was also provided that thoroughly explained the marking criteria in terms of the quantity and content of individual posts, as well as tone and mechanics, collaboration between group members, and the quality of arguments in the final response.
Each group had access to their private discussion space inside the
Canvas Learning Management System used in the course. This
space was composed of the discussion activity description, a link to the visualization, and the discussion thread itself (can be viewed at http://at.sfu.ca/gCXQNW (permalink)). Once students clicked on
the visualization link, a new tab would open up and display the
assigned visualization to the student.
2.4 Courses, Discussions and Participants
The study was run in the Spring and Summer 2015 terms across
four different blended course offerings at the second and third
levels in a multidisciplinary Design, Media Arts and Technology
program in a Canadian post-secondary institution. Table 1 shows
the number of students (i.e., study participants) assigned to each
visualization per discussion (D1–D6) across courses (C1SP,
C1SM, C2SP and C3SP).
2.5 Data Collection and Measurement
Time-stamped log data of students' interactions with the visualization were recorded. The messages posted by each student and the group structures were also captured within the Learning Management System. The count of posted messages and the count of visualization views were computed for each student per discussion across the different courses.
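For illustration, the counting step might look like the following minimal pandas sketch; the input file and column names (student, course, discussion, event) are hypothetical, not the actual log schema.

```python
# Sketch of per-student, per-discussion counts from hypothetical LMS logs.
import pandas as pd

logs = pd.read_csv("trace_logs.csv")  # hypothetical export of the log data
keys = ["course", "discussion", "student"]

post_counts = (logs[logs["event"] == "message_posted"]
               .groupby(keys).size().rename("posts"))
view_counts = (logs[logs["event"] == "visualization_viewed"]
               .groupby(keys).size().rename("viz_views"))

# One row per student per discussion; students with no views get zero.
counts = pd.concat([post_counts, view_counts], axis=1).fillna(0).reset_index()
print(counts.head())
```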
The 3×2 AGQ instrument was used to investigate students’
Achievement Goal Orientations [13]. The instrument consists of
18 items, grouped into 6 scales corresponding to achievement
goals (task-approach, task-avoidance, self-approach, self-
avoidance, other-approach, and other-avoidance, whereby self and
task represent mastery goals and other represents performance
goals). The responses were recorded on a Likert-type scale, from
1 (not at all true of me) to 7 (very true of me). The total score on the 3 items corresponding to each scale was used as the overall measure on that AGO scale.
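Scoring is straightforward to express in code. The sketch below sums each scale's three items; note that the item-to-scale mapping shown is illustrative, not the published scoring key of the 3×2 AGQ.

```python
# Sketch of 3x2 AGQ scoring: each scale = sum of its three 7-point items.
# The item assignment below is illustrative, not the instrument's actual key.
import pandas as pd

responses = pd.read_csv("agq_responses.csv")  # columns item_1 .. item_18

scales = {
    "task_ap": ["item_1", "item_7", "item_13"],
    "task_av": ["item_2", "item_8", "item_14"],
    "self_ap": ["item_3", "item_9", "item_15"],
    "self_av": ["item_4", "item_10", "item_16"],
    "other_ap": ["item_5", "item_11", "item_17"],
    "other_av": ["item_6", "item_12", "item_18"],
}
for scale, items in scales.items():
    responses[scale] = responses[items].sum(axis=1)  # range 3-21 per scale
```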
2.6 Data Analysis
2.6.1 Coh-Metrix Analyses
Discourse analysis can be used to help identify effectiveness of
discussions and quality of argumentation in collaborative learning
environments [33]. We used Coh-Metrix, a computational linguis-
tics facility that provides various measures of text characteristics
(e.g., text coherence, linguistic complexity, characteristics of
words and readability scores), to analyze content of the messages
posted by students [18]. We adopted the five latent components
that in a recent study on a corpus of around 37,520 texts explained
over 50% of the variability among texts [18]:
Narrativity: the degree to which the text is a narrative and con-
veys a story. On the opposite end of the spectrum are exposito-
ry texts.
Deep Cohesion: the degree to which the ideas in the text are
cohesively connected at a mental and conceptual level.
Referential Cohesion: reflects the degree to which explicit
words and ideas in the text overlap with each other.
Syntactic Simplicity: reflects the degree to which sentences have fewer words and use simpler, more familiar structures rather than dense sentences with many embedded phrases.
Word Concreteness: the degree to which the text includes
words that are concrete and induce mental images in contrast to
abstract words.
In this study, the above discourse features were analyzed at the level of each message using Coh-Metrix. Then, for each student, we computed the average measures across all their messages per discussion. Only the messages that included at least one of the key concepts related to the discussion topic (as identified by the instructor) were included in this analysis. These are the messages gauged to show traces of higher levels of knowledge construction [19].
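The aggregation just described can be sketched as follows; the column names, the toy key-concept list, and the simple word-overlap matcher are assumptions for illustration (the study used instructor-identified concepts per discussion topic).

```python
# Sketch: keep messages containing a key concept, then average the five
# Coh-Metrix components per student per discussion. Names are hypothetical.
import pandas as pd

msgs = pd.read_csv("cohmetrix_scores.csv")  # one row per posted message
key_concepts = {"ideation", "prototyping", "evaluation"}  # illustrative

def mentions_key_concept(text):
    # Crude word-overlap check standing in for the instructor's concept list.
    return bool(set(str(text).lower().split()) & key_concepts)

features = ["narrativity", "deep_cohesion", "referential_cohesion",
            "syntactic_simplicity", "word_concreteness"]
relevant = msgs[msgs["text"].apply(mentions_key_concept)]
per_student = (relevant.groupby(["course", "discussion", "student"])[features]
               .mean().reset_index())
```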
2.6.2 Statistical Analysis
Due to the nested structure of the data and the crossed variables in
our analysis we identified hierarchical linear mixed models to be a
suitable method [31]. The primary analyses for RQ1 focused on the association between visualization type and the quantity of posted messages, for those students who used the visualizations to monitor their participation in the discussions, after controlling for self-reported AGOs.
Table 1. Number of students assigned to each visualization

Condition (Visualization) | D1 | D2 | D3 | D4 | D5 | D6 | All
Class Average | 25 | 11 | 8 | N/A | 7 | 7 | 58
Top Contributors | 23 | 12 | 7 | 5 | 4 | 4 | 55
Quality | 13 | 17 | 5 | 11 | 5 | 5 | 56

Discussions D1–D6 span the four course offerings (C1SP, C1SM, C2SP, C3SP).
Hence, we identified students' count of posts as the dependent variable.
The subsequent analysis was centered around RQ2 to find the
association between the visualization type and the quality of post-
ed messages, measured through the discourse features, after con-
trolling for the self-reported AGOs. Therefore, we identified five
dependent variables: Narrativity, Deep Cohesion, Referential
Cohesion, Syntactic Simplicity and Word Concreteness. The in-
dependent variables in all models for both RQ1 and RQ2 were the
visualization type assigned to the student (i.e., Class Average, Top
Contributors, or Quality) and the covariates were the scores on six
AGO scales (i.e., task-approach, task-avoidance, self-approach,
self-avoidance, other-approach, and other-avoidance).
Six different linear mixed models were constructed, one for the
dependent variable in RQ1 (count of posts) and one for each of
the five dependent variables in RQ2 (Narrativity, Deep Cohesion,
Referential Cohesion, Syntactic Simplicity and Word Concrete-
ness). The choice of the best fitting model for each dependent variable was finalized after two steps of model construction: 1) a null model with student within a course as the only random effect2; and 2) a fixed model with the random effects introduced in the null model and the interaction between visualization type and the six AGO scale scores as the fixed effects.
A comparison between the null, random-effects-only model and the fixed-effects model allows us to determine whether the model that considers visualization type estimates the quantity and quality of posts, when controlled for the self-reported AGO scores, better than the random-effects-only model. The Akaike Information Criterion (AIC) and the likelihood ratio test were used to decide the best fitting model [15]. Primarily, the model with the lower AIC was suggested to have a better fit; we used the likelihood ratio test to confirm the AIC result. We also calculated an estimate of effect size (R2) for each model, which reveals the variance explained by the model [42].
2 In model construction, discussion groups were considered as an additional level in the nested structure of the random effects. Also, the total activity count of students was considered as another random effect. In all models, our findings showed that considering either or both of these variables did not yield a better model.
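As an illustration of this two-step procedure, the sketch below fits a null and a fixed model and compares them with AIC and a likelihood ratio test. The paper does not state which software was used; this uses Python's statsmodels as one possible implementation, with hypothetical column names, and approximates the nested random effect with a combined course:student grouping key.

```python
# Sketch of the null-vs-fixed mixed model comparison (one possible tool).
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_data.csv")  # hypothetical analysis dataset
# Random effect: student nested within course, via a combined grouping key.
df["course_student"] = df["course"].astype(str) + ":" + df["student"].astype(str)

# Both models fit with ML (reml=False) so their log-likelihoods are comparable.
null = smf.mixedlm("posts ~ 1", df, groups=df["course_student"]).fit(reml=False)
fixed = smf.mixedlm(
    "posts ~ viz * (task_ap + task_av + self_ap + self_av + other_ap + other_av)",
    df, groups=df["course_student"]).fit(reml=False)

# Lower AIC suggests a better fit; the likelihood ratio test confirms it.
print("AIC null:", null.aic, "AIC fixed:", fixed.aic)
lr = 2 * (fixed.llf - null.llf)
extra_params = len(fixed.params) - len(null.params)
print("LRT p =", stats.chi2.sf(lr, extra_params))
```

Fitting with maximum likelihood rather than REML keeps the two log-likelihoods on the same scale, which the likelihood ratio test requires.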
3. RESULTS
Since the students’ use of learning analytics visualizations was
voluntary, not all students chose to engage with them. The subset of students who engaged with the visualization more than once was considered the actual users of the visualization and the focus of our analysis in RQ1 and RQ2 (Table 2).
Table 2. Count of visualization views for students who used visualizations

Visualization | N | Median (25%, 75%)
Class Average | 38 | 7.00 (4.00, 9.00)
Top Contributors | 22 | 6.50 (3.25, 15.50)
Quality | 38 | 5.00 (3.00, 10.00)
3.1 RQ1
According to the AIC and the likelihood ratio test, the fixed model that included the interaction between learning analytics visualization and the AGO scales yielded a significantly better fit than the null model (Table 3). The linear mixed-effects analysis uncovered a significant interaction effect between the learning analytics visualization and other-approach scale scores, F(2,79.11)=4.12, p<0.05 (further details in Table 4).
Further investigation of the interaction effect between learning analytics visualization and other-approach shows a marginally significant difference in the count of posts between the users of the Class Average visualization and the users of the Top Contributors visualization (z=2.14, p<0.1) and a significant difference between the users of the Class Average visualization and the users of the Quality visualization (z=2.79, p<0.05). The other-approach scale is positively associated with counts of posts for the users of the Top Contributors and Quality visualizations, while it is negatively associated with counts of posted messages for the users of the Class Average visualization.
3.2 RQ2
For all five Coh-Metrix principal components, the fixed-effects models that included the interaction between learning analytics visualization and the six AGO scales resulted in better overall goodness-of-fit measures (AIC, likelihood ratio test, and R2) than the null models (Table 5). As an example of the analysis performed, Table 6 shows the analysis of the fixed model for Deep Cohesion; similar tables for the remaining components can be viewed at http://at.sfu.ca/shKRxa (permalink). In the context of online discussions, we believe Deep Cohesion should be given higher weight, given the importance of text cohesion for knowledge construction as emphasized by cognitive scientists [10].
Table 3. Inferential statistics for model fit assessment – RQ1

Model | χ2 | df | R2 | AIC
Null Model | | | 0.70 | 251.21
Fixed Model | 40.60** | 20 | 0.91 | 250.61

χ2 values show the differences between the model in the current row and the model in the previous row.
Significance codes: *** p<0.001, ** p<0.01, * p<0.05
Table 4. Analysis of the fixed effects for the model – RQ1

Variable | β | SE | 95% CI Lower | 95% CI Upper
Intercept (Class Average)** | 0.478 | 0.174 | 0.130 | 0.826
Viz (Top Contributors) | 0.156 | 0.274 | -0.705 | 0.392
Viz (Quality)* | -0.511 | 0.228 | -0.967 | -0.054
TaskAp | 0.002 | 0.185 | -0.369 | 0.373
TaskAv | -0.067 | 0.346 | -0.759 | 0.625
SelfAp | 0.023 | 0.251 | -0.480 | 0.525
SelfAv | 0.402 | 0.510 | -0.619 | 1.423
OtherAp*** | -0.986 | 0.357 | -1.700 | -0.274
OtherAv* | 0.707 | 0.480 | -0.254 | 1.668
Viz (Top Contributors)*TaskAp | -0.641 | 0.394 | -1.428 | 0.146
Viz (Top Contributors)*TaskAv | -0.151 | 0.565 | -1.281 | 0.980
Viz (Top Contributors)*SelfAp. | 1.076 | 0.628 | -0.181 | 2.333
Viz (Top Contributors)*SelfAv | -0.866 | 0.769 | -2.404 | 0.671
Viz (Top Contributors)*OtherAp* | 1.047 | 0.490 | 0.067 | 2.026
Viz (Top Contributors)*OtherAv | -0.724 | 0.694 | -2.112 | 0.665
Viz (Quality)*TaskAp | -0.180 | 0.222 | -0.623 | 0.263
Viz (Quality)*TaskAv | -0.016 | 0.391 | -0.799 | 0.767
Viz (Quality)*SelfAp | 0.024 | 0.370 | -0.716 | 0.765
Viz (Quality)*SelfAv | -0.206 | 0.589 | -1.384 | 0.972
Viz (Quality)*OtherAp** | 1.199 | 0.430 | 0.340 | 2.059
Viz (Quality)*OtherAv. | -1.076 | 0.547 | -2.169 | 0.018

Significance codes: *** p<0.001, ** p<0.01, * p<0.05, . p<0.1 (marginal)
All variables are scaled.
3.3 Narrativity
The linear mixed model for narrativity further revealed significant interaction effects between learning analytics visualization and
task-approach (F(2,81.52)=9.27, p<0.001), learning analytics
visualization and task-avoidance (F(2,81.02)=5.26, p<0.01),
learning analytics visualization and self-approach
(F(2,80.66)=3.64, p<0.05), and learning analytics visualization
and self-avoidance (F(2,81.36)=4.08, p<0.05). Also, the interac-
tion between learning analytics visualization and other-avoidance
is marginally significant, F(2,80.62)=2.99, p<0.1.
Further investigation of the interaction effect between learning analytics visualization and task-approach shows a significant difference in narrativity scores between the users of the Top Contributors visualization and the users of the Quality visualization (z=-3.22, p<0.01), and between the users of the Class Average and Top Contributors visualizations (z=4.31, p<0.001). The
positive association between the task-approach scale and narrativ-
ity scores was largest for Top Contributors, followed by the posi-
tive association for the users of the Quality visualization, while a
negative association was found for the users of the Class Average
visualization.
Probing the interaction effect between learning analytics visuali-
zation and task-avoidance shows a significant difference in narra-
tivity scores between the users of Top Contributors compared to
the users of Quality (z=-3.00, p<0.01). The effect of task-
avoidance was negative on narrativity for the users of the Quality
visualization, while this effect was positive on the narrativity
scores of the users of the other two visualizations.
Further exploration of the interaction effect between learning analytics visualization and self-approach exhibited a significant difference in narrativity scores between the users of the Class Average visualization and those of the Quality visualization (z=-2.32, p<0.05). Self-approach scale scores were positively associated with narrativity scores for the users of the Class Average visualization, whereas they were negatively associated with the narrativity scores of the messages posted by the users of the Top Contributors and Quality visualizations.
Finally, exploring the interaction effect between learning analytics visualization and self-avoidance exhibits a significant difference in narrativity scores between the users of the Top Contributors and Quality visualizations (z=2.61, p<0.05). Self-avoidance scale scores were positively associated with narrativity scores for the users of the Quality visualization, whereas they were negatively associated with the narrativity scores of the users of both the Top Contributors and Class Average visualizations.
3.4 Deep Cohesion
The deep cohesion model revealed significant interaction effects
between learning analytics visualization and task-approach
(F(2,82.38)=10.02, p<0.001), learning analytics visualization and
self-avoidance scales (F(2,82.28)=4.36, p<0.05), and learning
analytics visualization and other-avoidance (F(2,81.14)=3.65,
p<0.05). Also, the interaction between learning analytics visuali-
zation and task-avoidance was marginally significant,
F(2,81.62)=2.94, p<0.1 (Further details in Table 6).
Further investigation of the interaction effect between learning analytics visualization and task-approach shows a significant difference in deep cohesion scores between the users of the Class Average visualization and the users of the Top Contributors visualization (z=4.33, p<0.001), and between the users of the Top Contributors and Quality visualizations (z=-3.99, p<0.001).
Table 5. Inferential statistics for model fit assessment – RQ2

Component | Model | χ2 | df | R2 | AIC
Narrativity | Null | | | 0.51 | 251.70
Narrativity | Fixed | 74.42*** | 20 | 0.68 | 217.28
Deep Cohesion | Null | | | 0.36 | 246.40
Deep Cohesion | Fixed | 56.64*** | 20 | 0.44 | 229.76
Syntactic Simplicity | Null | | | 0.14 | 248.32
Syntactic Simplicity | Fixed | 44.10** | 20 | 0.28 | 244.22
Referential Cohesion | Null | | | 0.69 | 245.88
Referential Cohesion | Fixed | 57.47*** | 20 | 0.77 | 228.42
Word Concreteness | Null | | | 0.44 | 226.46
Word Concreteness | Fixed | 52.99*** | 20 | 0.68 | 213.47

χ2 values show the differences between the model in the current row and the model in the previous row.
Significance codes: *** p<0.001, ** p<0.01, * p<0.05
Table 6. Analysis of the fixed effects for the model – RQ2 (Deep Cohesion)

Variable | β | SE | 95% CI Lower | 95% CI Upper
Intercept (Class Average) | 0.252 | 0.179 | -0.105 | 0.609
Viz (Top Contributors) | 0.195 | 0.235 | -0.276 | 0.665
Viz (Quality) | -0.165 | 0.198 | -0.561 | 0.231
TaskAp | -0.019 | 0.160 | -0.339 | 0.301
TaskAv* | 0.681 | 0.308 | 0.065 | 1.296
SelfAp | 0.009 | 0.212 | -0.416 | 0.433
SelfAv. | -0.867 | 0.468 | -1.803 | 0.070
OtherAp* | -0.771 | 0.322 | -1.414 | -0.128
OtherAv* | 1.107 | 0.446 | 0.214 | 1.999
Viz (Top Contributors)*TaskAp*** | 1.523 | 0.351 | 0.820 | 2.225
Viz (Top Contributors)*TaskAv | -0.767 | 0.499 | -1.764 | 0.231
Viz (Top Contributors)*SelfAp | -0.740 | 0.690 | -2.119 | 0.640
Viz (Top Contributors)*SelfAv | -0.074 | 0.741 | -1.554 | 1.406
Viz (Top Contributors)*OtherAp | 0.162 | 0.433 | -0.705 | 1.028
Viz (Top Contributors)*OtherAv | -0.604 | 0.608 | -1.820 | 0.612
Viz (Quality)*TaskAp | 0.029 | 0.258 | -0.488 | 0.545
Viz (Quality)*TaskAv* | -0.886 | 0.370 | -1.626 | -0.146
Viz (Quality)*SelfAp | -0.167 | 0.349 | -0.864 | 0.530
Viz (Quality)*SelfAv* | 1.375 | 0.557 | 0.262 | 2.489
Viz (Quality)*OtherAp | 0.508 | 0.401 | -0.293 | 1.310
Viz (Quality)*OtherAv** | -1.333 | 0.516 | -2.365 | -0.300

Significance codes: *** p<0.001, ** p<0.01, * p<0.05, . p<0.1 (marginal)
All variables are scaled.
The positive association between the task-approach scale and deep cohesion was largest for the Top Contributors visualization; a much smaller positive association was found for the Quality visualization, and a negative association for the users of the Class Average visualization.
Further exploration of the interaction effect between learning analytics visualization and self-avoidance exhibited a significant difference in deep cohesion scores between the Class Average visualization users and the Quality visualization users (z=2.47, p<0.05), and a marginally significant difference between the Top Contributors visualization users and the Quality visualization users (z=2.21, p<0.1). Self-avoidance scale scores were positively associated with deep cohesion scores for the users of the Quality visualization, whereas they were negatively associated with the deep cohesion scores of the messages posted by the Top Contributors and Class Average visualization users.
Further investigation of the interaction effect between learning analytics visualization and other-avoidance showed a significant difference in deep cohesion scores between the users of the Class Average visualization and those of the Quality visualization (z=-2.58, p<0.05). The association between other-avoidance scale scores and deep cohesion scores was negative for the users of the Quality visualization, while the association was positive for the users of both the Top Contributors and Class Average visualizations.
3.5 Syntactic Simplicity
Analysis for the syntactic simplicity principal component revealed a significant interaction effect between learning analytics visualization and self-avoidance (F(2,80.99)=3.46, p<0.05).
Further exploration of the interaction effect between learning analytics visualization and self-avoidance exhibited a significant difference in syntactic simplicity scores between the Top Contributors visualization users and the Quality visualization users (z=2.56, p<0.05). Self-avoidance scale scores were positively associated with syntactic simplicity scores for the users of the Quality visualization, whereas they were negatively associated with the syntactic simplicity of the messages posted by the Top Contributors and Class Average visualization users.
3.6 Referential Cohesion
Analysis of mixed models for referential cohesion revealed a sig-
nificant interaction effect between learning analytics visualization
and task-approach scales (F(2,78.05)=7.44, p<0.01), learning
analytics visualization and self-avoidance (F(2,75.33)=3.93,
p<0.05), and learning analytics visualization and other-approach
(F(2,73.33)=3.61, p<0.05).
Further investigation of the interaction effect between learning analytics visualization and task-approach showed a significant difference in referential cohesion scores between the users of the Top Contributors visualization and the users of the Quality visualization (z=-3.066, p<0.01), and between the Class Average visualization users and the Top Contributors users (z=3.86, p<0.001). The positive association between the task-approach scale and referential cohesion was largest for the Top Contributors visualization; a much smaller positive association was found for the Quality visualization, and a negative association for the users of the Class Average visualization.
Probing the interaction effect between learning analytics visualization and self-avoidance shows a significant difference in referential cohesion scores between the users of Top Contributors and the users of Quality (z=2.77, p<0.05) and a marginally significant difference between the users of the Class Average visualization and the users of the Top Contributors visualization (z=-2.22, p<0.1). Self-avoidance scale scores were positively associated with referential cohesion scores for the users of the Quality and Class Average visualizations, whereas they were negatively associated with the referential cohesion of the messages posted by the users of the Top Contributors visualization.
Further exploration of the interaction effect between learning analytics visualization and other-approach exhibited a significant difference in referential cohesion scores between the users of the Top Contributors visualization and those of the Quality visualization (z=2.68, p<0.05). The other-approach scale scores were positively associated with referential cohesion scores for the users of the Quality visualization, whereas they were negatively associated with the referential cohesion scores of the messages posted by the Top Contributors and Class Average visualization users.
3.7 Word Concreteness
Further analysis of the models for word concreteness uncovered a
significant interaction between learning analytics visualization
and task-approach (F(2,80.24)=4.41, p<0.05), learning analytics
visualization and task-avoidance (F(2,80.17)=4.00, p<0.05),
learning analytics visualization and other-approach
(F(2,80.57)=3.68, p<0.05), and learning analytics visualization
and other-avoidance scales (F(2,80.06)=4.35, p<0.05).
Further investigation of the interaction effect between learning analytics visualization and task-approach showed a significant difference in word concreteness scores between users of the Top Contributors visualization and the Quality visualization (z=-2.59, p<0.05), as well as between users of the Top Contributors and Class Average visualizations (z=2.90, p<0.01). The positive association between the task-approach scale and word concreteness scores was largest for Top Contributors, followed by the users of the Quality visualization and the Class Average visualization.
Probing the interaction effect between learning analytics visuali-
zation and task-avoidance showed a significant difference in the
word concreteness scores between the users of the Top Contribu-
tors visualization and the Quality visualization (z=2.63, p<0.05).
Further analysis showed a positive effect of task-avoidance on word concreteness scores for the Quality visualization users, while this effect was negative on the word concreteness scores of the users of the other two visualizations.
Further investigation of the interaction effect between learning
analytics visualization and other-approach showed a significant
difference in the word concreteness scores between users of the
Class Average visualization and the users of the Top Contributors
visualization (z=-2.69, p<0.05). Further analysis showed a positive effect of other-approach on word concreteness scores for the users of the Class Average and Quality visualizations, while this effect was negative on the word concreteness scores of the users of the Top Contributors visualization.
Finally, the interaction effect between learning analytics visualiza-
tion and other-avoidance showed a significant difference in the
word concreteness scores between the users of the Class Average
visualization and the users of both Top Contributors (z=2.67,
p<0.05) and Quality visualizations (z=2.64, p<0.05). The associa-
tion between other-avoidance scale scores and word concreteness
scores was negative for the users of the Class Average visualiza-
tion, while the association was positive for the Top Contributors
and Quality visualizations.
4. DISCUSSION AND CONCLUSIONS
The overall goal of this study was to investigate the effect of dif-
ferent information presented through learning analytics visualiza-
tions on the posting behavior of students with different self-
reported achievement goal orientations in online group discussion
activities.
4.1 Interpretation of the Results
4.1.1 Different Visualizations and Students’ Quantity
of Posts Considering their AGOs
Our analysis showed that after controlling for achievement goals,
some learning analytics visualizations had positive and some had
negative effects on students’ quantity of posts.
For students who used the Top Contributors and Quality visualizations, higher scores on the other-approach scale were significantly associated with higher numbers of posts, whereas for those who used Class Average, the association with count of posts was negative.
The positive effect of the Top Contributors visualization on the quantity of posts is in alignment with prior research showing that students with other-approach goals assess their competence level in terms of normative standards and aim at outperforming their peers [13]. In this case, the students who used the Top Contributors visualization may have interpreted the norm based on the contribution level of those who had the highest number of postings in the class. Another interpretation is that they may have strived to gain visibility with the rest of the class, that is, to be listed as top contributors themselves. Hence, this positive association between the other-approach scale scores and numbers of posts for users of this visualization is not surprising. The Quality visualization may have motivated students oriented towards other-approach goals to outperform the rest of the class in terms of the depth and breadth of the key concepts covered in their messages. In order to reach that goal, this visualization may have indirectly encouraged them to contribute more.
For the Class Average visualization, the students' judgment of how their peers were doing may have been influenced by the displayed average performance of the entire class. Research shows that students who adopt normative standards, through other-approach, usually rely on the instructor's criteria, as they believe this can best lead to outperforming their peers if no other visible norm exists [32]. In light of this, the real-time updates presented in the visualization may push the instructor's clearly expressed criteria behind the analytics metrics. If the class average is below the teacher's expectation at any given time, students with an other-approach tendency may follow it as the normative standard for their goal.
Previous research shows that normative goal standards can range from modest to extreme [32]. It might be that a learning analytics visualization can be an influencing factor in determining the end points of this range. The Top Contributors and Quality visualizations encourage setting a higher standard to outperform than the class average does, which is more challenging to achieve and requires more effort. This is in accordance with the idea that if desirable participation behaviors are explicitly exposed to students oriented towards performance goals (i.e., other-based and self-based goals in the 3×2 AGO model), it can encourage them to engage more productively in the discussion activity [40].
4.1.2 Different Visualizations and Students’ Quality
of Posts Considering their AGOs
Our results showed that, after controlling for achievement goals, some learning analytics visualizations had positive and some had negative effects on students' quality of posts, observed through discourse features (i.e., Narrativity, Deep Cohesion, Referential Cohesion, and Word Concreteness). For each achievement goal, a summary of the significant associations is reported in Table 7.
In Table 7, positive associations show that higher scores on a
specific AGO scale are associated with higher scores on a specific
discourse component when using the visualization, whereas nega-
tive associations indicate that higher scores on an AGO scale are
associated with lower scores on discourse features for a particular
visualization. Table 7 uncovers non-homogeneous findings across different goal orientations and different visualizations.
Out of the five discourse features presented in Table 7, the most highlighted and frequent component is deep cohesion. The importance of cohesion in written and oral communication has long been emphasized by cognitive scientists aiming to understand how the human mind constructs meaning from discourse [10]. In fact, measuring cohesion was the main driver for the development of Coh-Metrix, which later expanded to other discourse features. Findings in the collaborative learning literature are fairly consistent in showing positive outcomes of deep cohesion. Higher levels of deep cohesion indicate deeper integration of ideas with background knowledge and fewer conceptual gaps, as well as better individual and group performance [9].
Our non-homogeneous results across different visualizations show that one visualization could be accompanied by a positive association between a certain goal and a discourse feature, while another visualization could be accompanied by a negative association for the same achievement goal and the same discourse component. For instance, those with a higher tendency towards self-avoidance goals
stance, those with higher tendency towards self-avoidance goals
constructed messages with higher deep cohesion when using the
Quality visualization but lower deep cohesion when using the
Class Average or Top Contributors visualizations. As discussed
previously, high deep cohesion is associated with positive out-
comes and thus, it is highly desirable [9]. Students with avoidance
goals often suffer from the lack of task focus and hence, are more
likely to experience low deep cohesion. It seems that the Quality
visualization may have played a positive role in directing the stu-
dents with high self-avoidance goals towards overcoming task
disrupting thoughts and integrating more cohesive messages,
while the other two visualizations may have played a negative
role. A possible explanation is that the presentation of information in the Quality visualization was more focused on improvement of self over time (key concepts covered), which can increase feelings of self-efficacy and self-confidence and, hence, improve task focus [32].
Similarly, our non-homogeneous findings across different achievement goals indicate that a particular visualization could be accompanied by a positive association between one achievement goal and a discourse feature, whereas the same visualization could be accompanied by a negative association for the same discourse feature and another achievement goal. For instance, despite the positive outcomes of the Quality visualization for students oriented towards self-avoidance goals, the role this visualization played in the construction of deeply cohesive messages appeared to be negative for individuals with a higher tendency towards task-avoidance. It seems that, for students with task-avoidance strivings, seeing the concepts they had not covered increased their stress about doing poorly in the discussions.
A notable aspect of the summary table is the presence of negative-valence goals. In the literature, avoidance goals, regardless of the competence definition, have mostly been associated with negative outcomes because of their tendency to avoid failure. Low cognitive engagement, low self-efficacy, high anxiety and feelings of shame, confusion, disorganized study habits, task-disrupting thoughts, help-avoidance, and poor performance and interest are among the destructive outcomes of mastery-avoidance (task-avoidance) and performance-avoidance goals (other-avoidance and self-avoidance) [32]. Therefore, providing feedback that helps reduce some of the negative aspects of these avoidance goals is desirable, in addition to the provision of the information shown in the learning analytics visualizations.
The most visible achievement goal with positive valence in Table 7 is task-approach. Prior research shows that students with a high task-approach tendency in a particular context, compared to others, find the topic interesting, have positive feelings about the task and perceive it as valuable, use deep learning strategies, and appreciate both cooperativeness and help seeking [32]. Therefore, it is not surprising that their deep approach to learning can help them mentally connect ideas and construct messages that show stronger signs of deep cohesion [1]. Our findings indicate that both the Quality and Top Contributors visualizations had a positive effect on deep cohesion when controlled for task-approach scores. This finding is not surprising for the Quality visualization, as it directly promotes coherent discussion of key concepts and their logical integration with related ideas. As for Top Contributors, quality may be promoted indirectly through the externalization of high standards on the contribution level; it may therefore encourage deeper investigation into the topic of discussion.
4.2 Implications for Theory and Practice
The findings present some methodological, theoretical, and prac-
tical implications. On a methodological side, the study shows the
importance of assessing learning analytics visualizations in au-
thentic course settings to evaluate the actual effect of the present-
ed information on students' behaviors and outcomes. It also shows the value of combining data traditionally collected through self-reported surveys, such as individual achievement goal orientation, with fine-grained data such as interaction logs and generated artifacts. In this study, the
effect of different learning analytics visualizations on students’
behavior was uncovered only when looking at the fine-grained
data and after controlling for students’ achievement goals, as mo-
tivational constructs. In addition, analysis of discourse patterns
provided in-depth insight into the quality of students’ contribu-
tions that complemented traditional metrics that rely on quantity
of contributions.
The study poses some important theoretical and practical implications for further research on, and use of, learning dashboards and tools, encouraging the adoption of effective instructional practices to support their use. From the instructional design point of view, our findings show the potential of learning analytics visualizations as a feedback mechanism for students in online learning
environments. In our study, the instructional design of the discus-
sion activity followed guidelines based on theories and practices
for effective and productive discussions. We are continuing to
investigate both the effect of pedagogical framing of learning
analytics visualizations and the effect of connecting the information presented to the learning activities on students' learning outcomes. Also, our results confirm the findings of the limited research in this area, which reveal that learning analytics in the form of dashboards or reports can lead to changes in online discussion activities that are sometimes intentional and goal-oriented and sometimes unconscious [41].
Table 7. Summary of Mixed Model Analysis for Interaction between Learning Analytics Visualization and AGO Scale on Quality of Posts

AGO Scale | Visualization | Dependent Variable | Assoc. Direction
Task-Approach | Class Average | Narrativity | -
Task-Approach | Class Average | Deep Cohesion | -
Task-Approach | Class Average | Referential Cohesion | -
Task-Approach | Class Average | Word Concreteness | +
Task-Approach | Top Contributors | Narrativity | +
Task-Approach | Top Contributors | Deep Cohesion | +
Task-Approach | Top Contributors | Referential Cohesion | +
Task-Approach | Top Contributors | Word Concreteness | +
Task-Approach | Quality | Narrativity | +
Task-Approach | Quality | Deep Cohesion | +
Task-Approach | Quality | Referential Cohesion | +
Task-Approach | Quality | Word Concreteness | +
Task-Avoidance | Class Average | Narrativity | +
Task-Avoidance | Class Average | Deep Cohesion | +
Task-Avoidance | Class Average | Word Concreteness | -
Task-Avoidance | Top Contributors | Narrativity | +
Task-Avoidance | Top Contributors | Deep Cohesion | +
Task-Avoidance | Top Contributors | Word Concreteness | -
Task-Avoidance | Quality | Narrativity | -
Task-Avoidance | Quality | Deep Cohesion | -
Task-Avoidance | Quality | Word Concreteness | +
Self-Approach | Class Average | Narrativity | +
Self-Approach | Top Contributors | Narrativity | -
Self-Approach | Quality | Narrativity | -
Self-Avoidance | Class Average | Narrativity | -
Self-Avoidance | Class Average | Deep Cohesion | -
Self-Avoidance | Class Average | Syntactic Simplicity | -
Self-Avoidance | Class Average | Referential Cohesion | +
Self-Avoidance | Top Contributors | Narrativity | -
Self-Avoidance | Top Contributors | Deep Cohesion | -
Self-Avoidance | Top Contributors | Syntactic Simplicity | -
Self-Avoidance | Top Contributors | Referential Cohesion | -
Self-Avoidance | Quality | Narrativity | +
Self-Avoidance | Quality | Deep Cohesion | +
Self-Avoidance | Quality | Syntactic Simplicity | +
Self-Avoidance | Quality | Referential Cohesion | +
Other-Approach | Class Average | Referential Cohesion | -
Other-Approach | Class Average | Word Concreteness | +
Other-Approach | Top Contributors | Referential Cohesion | -
Other-Approach | Top Contributors | Word Concreteness | -
Other-Approach | Quality | Referential Cohesion | +
Other-Approach | Quality | Word Concreteness | +
Other-Avoidance | Class Average | Deep Cohesion | +
Other-Avoidance | Class Average | Word Concreteness | -
Other-Avoidance | Top Contributors | Deep Cohesion | +
Other-Avoidance | Top Contributors | Word Concreteness | +
Other-Avoidance | Quality | Deep Cohesion | -
Other-Avoidance | Quality | Word Concreteness | +
Our research has implications for the direction of empirical studies around learning analytics visualizations and, subsequently, their designs. The findings of our field study reveal that the effect of a
particular learning analytics visualization on students’ behavior
differs when students are inclined to different achievement goals.
This can motivate further empirical studies to investigate the con-
nection between other theoretical constructs that underlie individ-
ual differences and effectiveness of learning analytics dashboards.
Such studies can help move towards developing a body of
knowledge that could guide design and application of learning
analytics tools that are theoretically informed.
For instance, our results showed that the use of a particular learning analytics visualization can be associated with positive changes in the learning behavior of students with a tendency towards a certain goal, even for avoidance goals. We know that avoidance goals have mostly been associated with negative outcomes. Hence, our findings encourage further examination of the role that personalized interventions can play in encouraging positive changes that may lead to improved learning processes and outcomes. If the
feedback provided through these visualizations alleviates negative
outcomes associated with pursuit of avoidance goals, such as anx-
iety and low self-efficacy, it may even have the potential to direct
students towards pursuit of approach goals which according to
research have been associated with more positive learning out-
comes [32].
Conversely, our results show that each of the three visualizations can be negatively associated with the learning behavior of students with certain individual differences. For example, showing the Top Contributors visualization to students with a tendency towards self-avoidance was negatively associated with four discourse features in their postings. This was discovered only after
carefully analyzing the interaction effect of visualizations with
goal orientations. Other examples can be extracted from Table 7.
Such insight can encourage adoption of more stringent require-
ments for empirical evaluation of learning analytics visualizations
before deploying them to wide-scale use.
The choice of visualizations in this research was guided by their
ability to engage different goals individual students may pursue.
We did not tap into knowledge from the information visualization field. As our results show, both what is being presented and how it is presented very likely have different effects on individual students, with some visualizations being more effective than others. A systematic study is needed to understand the effect of different learning analytics visualization designs by controlling for certain individual differences, eventually leading to clear guidelines on how to provide personalized learning analytics [4].
4.3 Limitations and Future Work
The current work has several limitations that require further re-
search to complement our results. First, our learning analytics
visualizations were integrated into the learning management sys-
tem by providing a link that required additional effort and motiva-
tion on students’ part to click and be directed to the visualization.
This may have affected how many students viewed the visualizations, and how often. Future work should explore other integration
options and their influence on the adoption and engagement with
the tool, while considering the platform and affordances it pro-
vides.
Secondly, in this study, we considered achievement goal orienta-
tion, a theoretical construct that could reveal individual differ-
ences with respect to motivational factors in an educational context.
However, other aptitude constructs that illuminate students' preferred approaches to learning [3] can also help us understand how particular students interact with learning analytics visualizations and how those visualizations affect their learning behaviors. Additionally, since we are dealing with visual information and writing, further linking motivational dispositions to other individual traits, such as attention and perception, processing and evaluation, and, in the case of discussions, argumentative writing, is needed to build a fuller understanding of how visualizations influence individuals. In our current set of studies we are also probing an individual's numeracy, graph literacy, and other related cognitive characteristics.
Our findings open several other directions for future research.
First, in learning analytics of discussion activities, listening behavior, i.e., reading other students' posts, is critical for effective discussion [39, 40]. As listening behavior constitutes over 75% of discussion activities, understanding the effect of learning analytics visualizations on listening behavior would complement our research. Next, instructional scaffolds can produce desirable effects on the development of critical thinking [16]. From this arises the question of how scaffolding, or the lack thereof, fosters the positive association between engaging with the visualizations and posting behaviors. Following a suggested framework [38], in a follow-up
study, we will investigate the effect of reflection and goal setting
by embedding a space around the visualization for students to set
their goals and write a reflection journal and have it appear every
time they view the visualizations.
This work focused on learning analytics for discussions. Investi-
gating the association between individual characteristics and dif-
ferent ways of visualizing other learning activities is needed to
generalize our findings.
5. REFERENCES
[1] Akyol, Z. and Garrison, D.R. 2011. Understanding cognitive presence in an online and blended community of inquiry: Assessing outcomes and processes for deep approaches to learning. British Journal of Educational Technology. 42, 2, 233-250.
[2] Arnold, K. and Pistilli, M. 2012. Course signals at Purdue: using learning analytics to increase student success. Proc. of the 2nd Int. Conf. on Learning Analytics and Knowledge, 267-270.
[3] Biggs, J.B. 1987. Student Approaches to Learning and Studying. Research Monograph. ERIC.
[4] Brusilovsky, P., Hsiao, I.-H. and Folajimi, Y. 2011. QuizMap: open social student modeling and adaptive navigation support with TreeMaps. Towards Ubiquitous Learning. 71-82.
[5] Bull, S. and Kay, J. 2008. Metacognition and open learner models. The 3rd Workshop on Meta-Cognition and Self-Regulated Learning in Educational Technologies, at ITS2008, 7-20.
[6] Corrin, L. and de Barba, P. 2014. Exploring students' interpretation of feedback delivered through learning analytics dashboards. Proc. of the ascilite 2014 conference, 201-205.
[7] Dawson, S., Bakharia, A. and Heathcote, E. 2010. SNAPP: Realising the affordances of real-time SNA within networked learning environments. Proc. of the 7th Int. Conf. on Networked Learning, 125-133.
[8] Dimitrova, V. 2003. STyLE-OLM: Interactive open learner modelling. Int. Journal of Artificial Intelligence in Education (IJAIED). 13, 35-78.
[9] Dowell, N.M., Cade, W.L., Tausczik, Y., Pennebaker, J. and Graesser, A.C. 2014. What works: Creating adaptive and intelligent systems for collaborative learning support. Intelligent Tutoring Systems, 124-133.
[10] Dowell, N.M., Graesser, A.C. and Cai, Z. Language and Discourse Analysis with Coh-Metrix: Applications from Educational Material to Learning Environments at Scale. Journal of Learning Analytics, in press.
[11] Elliot, A.J. 1999. Approach and avoidance motivation and achievement goals. Educational Psychologist. 34, 3, 169-189.
[12] Elliot, A.J. 2005. A conceptual history of the achievement goal construct. In Elliot, A.J. and Dweck, C.S. (eds.), Handbook of Competence and Motivation, 52-72.
[13] Elliot, A.J., Murayama, K. and Pekrun, R. 2011. A 3×2 achievement goal model. Journal of Educational Psychology. 103, 3, 632-648.
[14] Foltz, P.W., Kintsch, W. and Landauer, T.K. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Processes. 25, 2-3, 285-307.
[15] Friedman, J., Hastie, T. and Tibshirani, R. 2001. The Elements of Statistical Learning. Springer Series in Statistics. Springer, Berlin.
[16] Gašević, D., Adesope, O., Joksimović, S. and Kovanović, V. 2015. Externally-facilitated regulation scaffolding and role assignment to develop cognitive presence in asynchronous online discussions. The Internet and Higher Education. 24, 53-65.
[17] Govaerts, S., Verbert, K., Duval, E. and Pardo, A. 2012. The student activity meter for awareness and self-reflection. CHI'12 Extended Abstracts on Human Factors in Computing Systems, 869-884.
[18] Graesser, A.C., McNamara, D.S. and Kulikowich, J.M. 2011. Coh-Metrix: providing multilevel analyses of text characteristics. Educational Researcher. 40, 5, 223-234.
[19] Joksimovic, S., Gasevic, D., Kovanovic, V., Adesope, O. and Hatala, M. 2014. Psychological characteristics in cognitive presence of communities of inquiry: A linguistic analysis of online discussions. The Internet and Higher Education. 22, 1-10.
[20] Kanuka, H. and Anderson, T. 2007. Online social interchange, discord, and knowledge construction. International Journal of E-Learning & Distance Education. 13, 1, 57-74.
[21] Kay, J., Maisonneuve, N., Yacef, K. and Reimann, P. 2006. The big five and visualisations of team work activity. Proc. of the 8th Int. Conf. on Intelligent Tutoring Systems, 197-206.
[22] Kerly, A., Ellis, R. and Bull, S. 2008. CALMsystem: a conversational agent for learner modelling. Knowledge-Based Systems. 21, 3, 238-246.
[23] Kruse, A. and Pongsajapan, R. 2012. Student-centered learning analytics. CNDLS Thought Papers, 1-9.
[24] Leony, D., Pardo, A., de la Fuente Valentín, L., de Castro, D.S. and Kloos, C.D. 2012. GLASS: a learning analytics visualization tool. Proc. of the 2nd Int. Conf. on Learning Analytics and Knowledge, 162-163.
[25] Luppicini, R. 2007. Review of computer mediated communication research for education. Instructional Science. 35, 2, 141-185.
[26] Maehr, M.L. 1989. Thoughts about motivation. Research on Motivation in Education: Goals and Cognitions. 3, 1, 299-315.
[27] Mazza, R. and Milani, C. 2004. GISMO: a graphical interactive student monitoring tool for course management systems. Int. Conf. on Technology-Enhanced Learning, 1-8.
[28] Nakahara, J., Hisamatsu, S., Yaegashi, K. and Yamauchi, Y. 2005. iTree: Does the mobile phone encourage learners to be more involved in collaborative learning? Proc. of the Conf. on Computer Support for Collaborative Learning: Learning 2005: The next 10 years!, 470-478.
[29] Rovai, A.P. 2007. Facilitating online discussions effectively. The Internet and Higher Education. 10, 1, 77-88.
[30] Santos, J.L., Verbert, K., Govaerts, S. and Duval, E. 2013. Addressing learner issues with StepUp!: an evaluation. Proc. of the 3rd Int. Conf. on Learning Analytics and Knowledge, 14-22.
[31] Schielzeth, H. and Nakagawa, S. 2013. Nested by design: model fitting and interpretation in a mixed model era. Methods in Ecology and Evolution. 4, 1, 14-24.
[32] Senko, C., Hulleman, C.S. and Harackiewicz, J.M. 2011. Achievement goal theory at the crossroads: Old controversies, current challenges, and new directions. Educational Psychologist. 46, 1, 26-47.
[33] Shum, S.B. and Ferguson, R. 2012. Social learning analytics. Journal of Educational Technology & Society. 15, 3, 3-26.
[34] Snow, R.E. 1991. Aptitude-treatment interaction as a framework for research on individual differences in psychotherapy. Journal of Consulting and Clinical Psychology. 59, 2, 205-216.
[35] Verbert, K., Duval, E., Klerkx, J., Govaerts, S. and Santos, J.L. 2013. Learning analytics dashboard applications. American Behavioral Scientist. 57, 10, 1500-1509.
[36] Winne, P.H. 2010. Improving measurements of self-regulated learning. Educational Psychologist. 45, 4, 267-276.
[37] Winne, P.H. and Hadwin, A.F. 1998. Studying as self-regulated learning. Metacognition in Educational Theory and Practice. 93, 27-30.
[38] Wise, A.F. 2014. Designing pedagogical interventions to support student use of learning analytics. Proc. of the 4th Int. Conf. on Learning Analytics and Knowledge, 203-211.
[39] Wise, A.F., Hausknecht, S.N. and Zhao, Y. 2014. Attending to others' posts in asynchronous discussions: Learners' online "listening" and its relationship to speaking. Int. Journal of Computer-Supported Collaborative Learning. 9, 2, 185-209.
[40] Wise, A.F., Speer, J., Marbouti, F. and Hsiao, Y.-T. 2013. Broadening the notion of participation in online discussions: examining patterns in learners' online listening behaviors. Instructional Science. 41, 2, 323-343.
[41] Wise, A., Zhao, Y. and Hausknecht, S. 2014. Learning analytics for online discussions: Embedded and extracted approaches. Journal of Learning Analytics. 1, 2, 48-71.
[42] Xu, R. 2003. Measuring explained variation in linear mixed effects models. Statistics in Medicine. 22, 22, 3527-3541.
[43] Yuan, J. and Kim, C. 2014. Guidelines for facilitating the development of learning communities in online courses. Journal of Computer-Assisted Learning. 30, 3, 220-232.