The Role of Achievement Goal Orientations When
Studying Effect of Learning Analytics Visualizations
Sanam Shirazi Beheshitha, Marek Hatala
School of Interactive Arts and Technology
Simon Fraser University
Surrey, Canada
sshirazi,mhatala@sfu.ca
Dragan Gašević, Srećko Joksimović
Schools of Education and Informatics
University of Edinburgh
Edinburgh, UK
dragan.gasevic,s.joksimovic@ed.ac.uk
ABSTRACT
When designing learning analytics tools for use by learners we
have an opportunity to provide tools that consider a particular
learner’s situation and the learner herself. To afford actual impact
on learning, such tools have to be informed by theories of educa-
tion. Particularly, educational research shows that individual dif-
ferences play a significant role in explaining students’ learning
process. However, limited empirical research in learning analytics
has investigated the role of theoretical constructs, such as motiva-
tional factors, that are underlying the observed differences be-
tween individuals. In this work, we conducted a field experiment
to examine the effect of three designed learning analytics visuali-
zations on students’ participation in online discussions in authen-
tic course settings. Using hierarchical linear mixed models, our
results revealed that effects of visualizations on the quantity and
quality of messages posted by students with differences in
achievement goal orientations could either be positive or negative.
Our findings highlight the methodological importance of considering individual differences and have important implications for the future design and research of learning analytics visualizations.
Categories and Subject Descriptors
K.3.1 [Computers and Education]: Distance Learning
General Terms
Human Factors, Measurement.
Keywords
Learning Analytics, Visualizations, Dashboards, Achievement
Goal Orientation, Online Discussions
1. INTRODUCTION
Recent advancements in technology-enhanced learning offer a powerful and yet challenging opportunity to observe learning analytics from the students’ perspective. Learning analytics tools, when put in the hands of students, can support their learning, particularly in higher education [23]. One way of presenting learning
analytics to students is through visualizations and dashboards
[35]. With the intent to offer opportunities for awareness, reflec-
tion, sense-making and impact on students’ learning [35] existing
learning analytics visualizations and dashboards present trace
data to students about their interaction with the learning environ-
ment such as use of resources [17, 24, 30], time spent on activities
[2, 22], generated artifacts [2], or social interactions with others
[7, 21, 27]. While some of the existing dashboards are targeted at
providing general information that can facilitate awareness and
monitoring of learning activities, others go further and directly
guide students to take actions to control their learning [5][8].
In terms of evaluation, one line of existing research has focused on usability and students’ perceived usefulness [17, 30]. Other studies indicate a positive influence of learning analytics dashboards and visualizations on engagement [28], academic performance [2], test results and assessments [4, 22], and retention rates [2] for the overall population of students. A large number of studies that focus on assessing learning impact have been carried out in limited lab settings [4, 22, 28], and only a few have been conducted in course settings at large scale, such as [2].
As research on learning analytics dashboards and visualizations is
expanding, further empirical research is needed to understand the
varying impact of information selected to be presented through
visualizations on different aspects of an individual student’s learn-
ing process and outcome. Research on educational psychology
shows that individuals differ in their readiness to profit from a
particular treatment in a particular context [34]. This indicates the
possible varying effect of a treatment for individual students. In
our study we focus on theoretical constructs of so-called aptitudes that can shed light on the observed differences between individuals in a learning context (e.g., motivational constructs, epistemic beliefs, approaches to learning, and attitudes) [36]. Our aim is to investigate the effect of learning analytics visualizations on learning behavior by taking individual differences into account.
In so doing, we conducted a field experiment to examine the ef-
fects of different information presented through learning analytics
visualizations on students’ learning behavior while controlling for
their individual differences. In this work we focused on a motiva-
tional construct called achievement goal orientation [12].
Achievement goal orientation (AGO) is a well-established moti-
vational construct describing "the purpose of engagement in an
achievement behavior" [12]. In the early definitions, two main
goal orientations were identified: i) mastery goal, which was con-
ceptualized in terms of development of task competence; and ii)
performance goal, which was conceived as the demonstration of performance competence [26]. In terms of valence, these achieve-
ment goals were further distinguished by approaching success and
avoiding failure, e.g., being able to accomplish a task or avoiding
failing the test, respectively [11]. In recent AGO models, competence has been redefined as the standard used in evaluating how well one is doing [13]. Task-based goals use absolute stand-
ards and define competence based on doing well or poorly relative
to the requirements of the task. Self-based and other-based goals
adopt intrapersonal and interpersonal standards, respectively, and
define competence in terms of doing well or poorly with respect
to how one has done before or can potentially do in the future, or
in comparison to others [13].
To discover possible associations between individual differences and the information presented, we designed three learning analytics visualizations, each showing particular information about an aspect of students’ participation in online discussions in a university-level blended course. The visualizations were selected in a
way to potentially address students with particular goal orienta-
tions. We chose to focus on asynchronous online discussions, as
these are commonly exploited to support collaborative learning
[25] and can be seen as an environment in which students can
interact to build both collective and individual understanding
through conversation with their peers [20]. Critically, the level
and quality of students’ participation is largely influenced by stu-
dents’ agency [37]. Learning analytics in the form of reports and
visualizations have been suggested to be supportive of participa-
tion and productive engagement in online discussions for the pop-
ulation of students as a whole [41]. However, more attention to-
wards the impact of what is presented to students with differences
in achievement goals is warranted [40]. Our results not only substantiate this assumption, but also have significant implications for broader learning analytics research.
2. METHOD
2.1 Study Design and Research Questions
To study the effects of different information presented through
visualization on the posting behavior of students with individual
differences, we conducted a field experiment in an authentic
blended course setting. Students participated in an online group
discussion activity on a topic related to the course content. Each
student was randomly assigned to an experimental condition in
which they had access to one of the three visualizations informing
them on how they are performing in the group discussion activity.
Students’ goal orientations were measured through a self-reported
instrument.
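The assignment step described above can be sketched as follows. This is an illustrative sketch, not the actual study code: the paper only states that assignment was random, so the balanced round-robin scheme, the function name, and the seed handling are assumptions.

```python
import random

# Condition names match the paper; the balancing scheme is an assumption.
CONDITIONS = ["Class Average", "Top Contributors", "Quality"]

def assign_conditions(students, seed=0):
    """Shuffle students, then deal them round-robin across the three
    conditions so that group sizes stay balanced (within one student)."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    return {s: CONDITIONS[i % len(CONDITIONS)]
            for i, s in enumerate(shuffled)}
```

A fixed seed makes the assignment reproducible for auditing; in a live deployment the seed would be drawn at random.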
We defined our research questions as follows:
RQ1: Is there an association between visualization type and the
quantity of students’ posts when controlled for their self-reported
achievement goal orientations?
RQ2: Is there an association between visualization type and the
quality of students’ posts when controlled for their self-reported
achievement goal orientations?
2.2 Learning Analytics Visualizations
The choice of learning analytics visualizations was guided by the
main goal of this study, i.e., to establish the association between
type of information visualized and its effect on students’ behavior.
Secondly, we expected that the effect of the visualizations would vary with the goals students are pursuing. The three visualizations selected aimed to potentially align with different types of motivations underlying students’ goals. Each visualization also considered the norm against which students would evaluate themselves, which varies for different goal orientations. When cumulative performance is shown, we used class average values (up to 200 students) rather than the group average of 4-11 students, because in our LMS students always see all contributions in a single view and hence could judge other group members’ performance directly.
The Class Average visualization allows students to compare their
posting performance with the average number of messages posted
by the rest of the class (Figure 1). Comparison of the students
with the class average has been the most widely used approach
when offering learning analytics dashboards and visualiza-
tions [6]. Students who have a stronger inclination towards per-
formance orientation may find this visualization beneficial, with a
caveat that its effect on students’ participation and learning was
not always positive [6, 41]. We included this visualization mainly
because of its prevalence in deployed systems.
The Top Contributors visualization shows the count of posted
messages by the student in comparison to the top contributors in
the class. Top contributors are the top 5+ individuals in the class
who have had the highest number of messages posted (Figure 2).
Not only are students able to see the performance of the top contributors; the visualization also increases their individual recognition by showing their names and profile pictures. The norm shown by this visualization is that of the best performing students, and we expect that it will positively motivate students with an other-approach tendency and, to some extent, those with a self-approach tendency, while it may be discouraging to students with an avoidance valence for the same goals.
While both prior visualizations focus on the count of messages, the Quality visualization focuses on the content of posted messages. It represents how many of the key concepts a student has covered within his/her posted messages and how well he/she has integrated those with logically related ideas. Students can compare the quality of their messages with that of the rest of the class; hence, we expect that it will have a positive effect on students pursuing mastery, both those with a task-approach and those with a self-approach tendency. The key concepts for each discussion topic were identified in advance by the course instructor. The visualization (Figure 3) showed the quality for each key concept as a color-coded square. The color was determined by applying Latent Semantic Analysis (LSA), a natural language processing technique for measuring the coherence of the text1, at the sentence level [14].

Figure 1: The design of the Class Average visualization
Figure 2: The design of the Top Contributors visualization
Figure 3: The design of the Quality visualization
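As an illustration of the sentence-level approach just described, the sketch below computes a simple LSA-based coherence score as the mean cosine similarity of adjacent sentences in a truncated latent space. It is a minimal reconstruction under stated assumptions (bag-of-words counts, plain SVD), not the LSA pipeline actually used in the study [14].

```python
import numpy as np

def lsa_sentence_coherence(sentences, k=2):
    """Mean cosine similarity between adjacent sentences in a
    k-dimensional LSA space; higher values suggest more coherent text."""
    # Build a term-by-sentence count matrix.
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(sentences)))
    for j, s in enumerate(sentences):
        for w in s.lower().split():
            M[index[w], j] += 1
    # Truncated SVD keeps the top-k latent semantic dimensions.
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    k = min(k, len(S))
    sent_vecs = (np.diag(S[:k]) @ Vt[:k]).T  # one row per sentence
    # Coherence = average cosine similarity of adjacent sentence vectors.
    sims = []
    for a, b in zip(sent_vecs, sent_vecs[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(float(a @ b) / denom if denom > 1e-9 else 0.0)
    return float(np.mean(sims)) if sims else 0.0
```

Sentences sharing latent vocabulary score close to 1, while topically disjoint sentences score near 0, which is the intuition behind using coherence as a proxy for message quality.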
More complex dashboards with several metrics may address different achievement goal orientations at the same time. For the purpose of this study, our selected visualizations included only one metric of student performance, and the same metric was shown at the class level. We were explicitly not concerned with providing a comprehensive overview of students’ performance in a single cumulative view.
2.3 Online Group Discussion Activity
Design and facilitation of discussions in all participating courses
followed guidelines suggested in collaborative learning literature
[29, 43]. The students were split into several groups of 4-11
members and were asked to participate over a period of 7-14 days.
All of the groups in a particular discussion were given the same
open-ended question related to the course content and were ex-
pected to engage in the discussion by exploring different aspects
of the question itself, proposing different ideas to address them,
selecting some ideas and finally deciding on one answer as a
group and justifying it with a clear rationale. Engagement in the
discussion was mandatory and was considered a graded compo-
nent of the course (5% of final grade per discussion task). A marking rubric was also provided, which thoroughly explained the marking criteria in terms of the quantity and content of individual posts, as well as tone and mechanics, collaboration between group members, and quality of arguments in the final response.
Each group had access to their private discussion space inside the
Canvas Learning Management System used in the course. This
space was composed of the discussion activity description, link to
visualization, and of the discussion thread itself (can be viewed at
http://at.sfu.ca/gCXQNW (permalink)). Once students clicked on
the visualization link, a new tab would open up and display the
assigned visualization to the student.
2.4 Courses, Discussions and Participants
The study was run in the Spring and Summer 2015 terms across
four different blended course offerings at the second and third
levels in a multidisciplinary Design, Media Arts and Technology
program in a Canadian post-secondary institution. Table 1 shows
the number of students (i.e., study participants) assigned to each
visualization per discussion (D1- D6) across courses (C1SP,
C1SM, C2SP and C3SP).
2.5 Data Collection and Measurement
The time-stamped log data of students’ interactions with the visualization were recorded. The messages posted by each student and the group structures were also captured within the Learning Management System. The count of posted messages and the count of visualization views were computed for each student per discussion across the different courses.
1 Coherence has been described as “the unifying element of good writing”
and hence it can be used in a way to measure quality of text.
(http://www.elc.polyu.edu.hk/elsc/material/Writing/coherenc.htm)
The 3×2 AGQ instrument was used to investigate students’
Achievement Goal Orientations [13]. The instrument consists of
18 items, grouped into 6 scales corresponding to achievement
goals (task-approach, task-avoidance, self-approach, self-
avoidance, other-approach, and other-avoidance, whereby self and
task represent mastery goals and other represents performance
goals). The responses were recorded on a Likert-type scale, from
1 (not at all true of me) to 7 (very true of me). The sum of the three items corresponding to each scale was used as the overall measure on that AGO scale.
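The scoring rule just described (three Likert items summed per scale) can be sketched as follows. The item-to-scale index mapping below is a placeholder for illustration; it is not the published key of the 3×2 AGQ instrument.

```python
# Hypothetical item-to-scale mapping: indices 0-17 stand in for the 18
# AGQ items; the real instrument key is defined in Elliot et al. [13].
AGQ_SCALES = {
    "task-approach":   [0, 1, 2],
    "task-avoidance":  [3, 4, 5],
    "self-approach":   [6, 7, 8],
    "self-avoidance":  [9, 10, 11],
    "other-approach":  [12, 13, 14],
    "other-avoidance": [15, 16, 17],
}

def score_agq(responses):
    """Sum each scale's three item responses (1-7 Likert) into one
    overall AGO scale score (range 3-21)."""
    if len(responses) != 18:
        raise ValueError("expected 18 item responses")
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("responses must be on the 1-7 Likert scale")
    return {scale: sum(responses[i] for i in items)
            for scale, items in AGQ_SCALES.items()}
```

Each resulting scale score therefore ranges from 3 to 21, which is the form entered as a covariate in the models below.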
2.6 Data Analysis
2.6.1 Coh-Metrix Analyses
Discourse analysis can be used to help identify effectiveness of
discussions and quality of argumentation in collaborative learning
environments [33]. We used Coh-Metrix, a computational linguis-
tics facility that provides various measures of text characteristics
(e.g., text coherence, linguistic complexity, characteristics of
words and readability scores), to analyze content of the messages
posted by students [18]. We adopted the five latent components that, in a recent study on a corpus of 37,520 texts, explained over 50% of the variability among texts [18]:
• Narrativity: the degree to which the text is a narrative and con-
veys a story. On the opposite end of the spectrum are exposito-
ry texts.
• Deep Cohesion: the degree to which the ideas in the text are
cohesively connected at a mental and conceptual level.
• Referential Cohesion: reflects the degree to which explicit
words and ideas in the text overlap with each other.
• Syntactic Simplicity: reflects the degree to which sentences contain fewer words and use simpler, more familiar structures rather than dense sentences with a high frequency of embedded phrases.
• Word Concreteness: the degree to which the text includes
words that are concrete and induce mental images in contrast to
abstract words.
In this study, the above discourse features were analyzed at the level of each message using Coh-Metrix. Then, for each student, we computed the average measures of all the messages per
discussion. Only the messages that included at least one of the key
concepts related to discussion topic (as identified by the instruc-
tor) were included in this analysis. These are the messages gauged
to have traces of higher level of knowledge construction [19].
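The aggregation step above (keep only messages that mention at least one instructor-identified key concept, then average each discourse feature per student) might look like the following sketch. The record layout and field names are assumptions made for illustration.

```python
from collections import defaultdict

def average_features(messages, key_concepts, features):
    """Average each discourse feature per student over the messages that
    contain at least one key concept.

    messages: list of dicts with 'student', 'text', and one numeric entry
    per feature (e.g. 'narrativity'); field names are hypothetical.
    """
    concepts = [c.lower() for c in key_concepts]
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for m in messages:
        # Filter: the message must mention at least one key concept.
        if not any(c in m["text"].lower() for c in concepts):
            continue
        counts[m["student"]] += 1
        for f in features:
            sums[m["student"]][f] += m[f]
    return {s: {f: sums[s][f] / counts[s] for f in features}
            for s in counts}
```

Students whose messages never mention a key concept simply drop out of the result, mirroring the exclusion of such messages from the analysis.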
2.6.2 Statistical Analysis
Due to the nested structure of the data and the crossed variables in our analysis, we identified hierarchical linear mixed models as a suitable method [31]. The primary analyses for RQ1 focused on the
association between the visualization type for those students who
used them to monitor their participation in discussion and the
quantity of posted messages, after controlling for self-reported
Table 1. Number of students assigned to each visualization

Condition            C1SP        C1SM   C2SP   C3SP
(Visualization)      D1    D2    D3     D4     D5    D6    All
Class Average        25    11    8      N/A    7     7     58
Top Contributors     23    12    7      5      4     4     55
Quality              13    17    5      11     5     5     56
AGOs. Hence, we identified student’s counts of posts as the de-
pendent variable.
The subsequent analysis was centered around RQ2 to find the
association between the visualization type and the quality of post-
ed messages, measured through the discourse features, after con-
trolling for the self-reported AGOs. Therefore, we identified five
dependent variables: Narrativity, Deep Cohesion, Referential
Cohesion, Syntactic Simplicity and Word Concreteness. The in-
dependent variables in all models for both RQ1 and RQ2 were the
visualization type assigned to the student (i.e., Class Average, Top
Contributors, or Quality) and the covariates were the scores on six
AGO scales (i.e., task-approach, task-avoidance, self-approach,
self-avoidance, other-approach, and other-avoidance).
Six different linear mixed models were constructed, one for the
dependent variable in RQ1 (count of posts) and one for each of
the five dependent variables in RQ2 (Narrativity, Deep Cohesion,
Referential Cohesion, Syntactic Simplicity and Word Concrete-
ness). The choice of the best fitting model for each dependent variable was finalized after two steps of model construction: 1) a null model with student within course as the only random effect2; and 2) a fixed model with the random effects introduced in the null model plus the interaction between the visualization type and the six AGO scale scores as fixed effects.
A comparison between the null, random-effects-only model and the fixed-effects model allows us to determine whether the model that considers visualization types estimates the quantity and quality of posts, when controlled for the self-reported AGO scores, better than the random-effects model. The Akaike Information Criterion (AIC) and the likelihood ratio test were used to decide the best fitting model [15]. Primarily, the model with the lower AIC was taken to have a better fit; we used the likelihood ratio test to confirm the AIC result. We also calculated an estimate of effect size (R2) for each model, which reveals the variance explained by the model [42].
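The model-selection logic (lower AIC preferred, confirmed by a likelihood ratio test whose statistic is compared against a χ2 distribution with degrees of freedom equal to the number of added parameters) can be sketched as below. The inputs are hypothetical log-likelihoods and parameter counts, not values from the study.

```python
from scipy.stats import chi2

def compare_models(llf_null, k_null, llf_fixed, k_fixed):
    """Compare a null (random-effects-only) model against a fixed-effects
    model via AIC and the likelihood ratio test.

    llf_*: maximized log-likelihoods; k_*: numbers of estimated parameters.
    """
    aic_null = 2 * k_null - 2 * llf_null
    aic_fixed = 2 * k_fixed - 2 * llf_fixed
    lr = 2 * (llf_fixed - llf_null)   # likelihood ratio statistic
    df = k_fixed - k_null             # parameters added by fixed effects
    p = chi2.sf(lr, df)               # upper-tail p-value
    return {"aic_null": aic_null, "aic_fixed": aic_fixed,
            "lr": lr, "df": df, "p": p}
```

In practice the log-likelihoods would come from mixed models fitted with maximum likelihood (not REML) so that models with different fixed effects are comparable, e.g. from lme4 in R or statsmodels’ MixedLM.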
3. RESULTS
Since the students’ use of the learning analytics visualizations was voluntary, not all chose to engage with them. The subset of students who engaged with their visualization more than once is considered the actual users of the visualization and is the focus of our analysis in RQ1 and RQ2 (Table 2).
Table 2. Count of visualization views for students who used visualizations

Visualization       N    Median (25%, 75%)
Class Average       38   7.00 (4.00, 9.00)
Top Contributors    22   6.50 (3.25, 15.50)
Quality             38   5.00 (3.00, 10.00)
3.1 RQ1
According to the AIC and the likelihood ratio test, the fixed model that included the interaction between the learning analytics visualiza-
tion and AGO scales yielded a significantly better fit than the null
model (Table 3). The linear mixed-effect analysis uncovered a
significant interaction effect between the learning analytics visual-
2 In model construction, discussion groups were considered an additional level in the nested structure of the random effects. Also, the total activity count of students was accounted for as another random effect. In all models, our findings showed that considering either or both of these variables did not yield a better model.
ization and other-approach scale scores, (F(2,79.11)=4.12,
p<0.05) (Further details in Table 4).
Further investigation of the interaction effect between the learning analytics visualization and other-approach shows a marginally significant difference in the count of posts between the users of the Class Average visualization and the users of the Top Contributors visualization (z=2.14, p<0.1) and a significant difference between the users of the Class Average visualization and the users of the Quality visualization (z=2.79, p<0.05). The other-approach scale is positively associated with the counts of posts for the users of the Top Contributors and Quality visualizations, while it is negatively associated with the counts of posted messages for the users of the Class Average visualization.
3.2 RQ2
For all five Coh-Metrix principal components, fixed-effect models that included the interaction between the learning analytics visualization and the six AGO scales resulted in better overall goodness-of-fit measures (AIC, likelihood ratio test and R2) than the null models (Table 5). As an example of the analyses performed, Table 6 shows the analysis of the fixed model for Deep Cohesion; similar tables for the remaining components can be viewed at http://at.sfu.ca/shKRxa (permalink). In the context of online discussions we believe Deep Cohesion should be given higher
Table 3. Inferential Statistic for Model fit assessment - RQ1

               χ2        Df   R2     AIC
Null Model     —         —    0.70   251.21
Fixed Model    40.60**   20   0.91   250.61

χ2 values show the differences between the model in the current row and the model in the previous row.
Significance codes: *** p<0.001, ** p<0.01, * p<0.05
Table 4. Analysis of the fixed effects for the model - RQ1

Variable                             β        SE      95% CI Lower   95% CI Upper
Intercept (Class Average)**           0.478    0.174    0.130          0.826
Viz (Top Contributors)                0.156    0.274   -0.705          0.392
Viz (Quality)*                       -0.511    0.228   -0.967         -0.054
TaskAp                                0.002    0.185   -0.369          0.373
TaskAv                               -0.067    0.346   -0.759          0.625
SelfAp                                0.023    0.251   -0.480          0.525
SelfAv                                0.402    0.510   -0.619          1.423
OtherAp***                           -0.986    0.357   -1.700         -0.274
OtherAv*                              0.707    0.480   -0.254          1.668
Viz (Top Contributors)*TaskAp        -0.641    0.394   -1.428          0.146
Viz (Top Contributors)*TaskAv        -0.151    0.565   -1.281          0.980
Viz (Top Contributors)*SelfAp.        1.076    0.628   -0.181          2.333
Viz (Top Contributors)*SelfAv        -0.866    0.769   -2.404          0.671
Viz (Top Contributors)*OtherAp*       1.047    0.490    0.067          2.026
Viz (Top Contributors)*OtherAv       -0.724    0.694   -2.112          0.665
Viz (Quality)*TaskAp                 -0.180    0.222   -0.623          0.263
Viz (Quality)*TaskAv                 -0.016    0.391   -0.799          0.767
Viz (Quality)*SelfAp                  0.024    0.370   -0.716          0.765
Viz (Quality)*SelfAv                 -0.206    0.589   -1.384          0.972
Viz (Quality)*OtherAp**               1.199    0.430    0.340          2.059
Viz (Quality)*OtherAv.               -1.076    0.547   -2.169          0.018

Significance codes: *** p<0.001, ** p<0.01, * p<0.05, . p<0.1 (marginal)
All variables are scaled.
weight given the importance of text cohesion for knowledge con-
struction as emphasized by cognitive scientists [10].
3.3 Narrativity
The linear mixed model for narrativity revealed significant interaction effects between the learning analytics visualization and task-approach (F(2,81.52)=9.27, p<0.001), learning analytics
visualization and task-avoidance (F(2,81.02)=5.26, p<0.01),
learning analytics visualization and self-approach
(F(2,80.66)=3.64, p<0.05), and learning analytics visualization
and self-avoidance (F(2,81.36)=4.08, p<0.05). Also, the interac-
tion between learning analytics visualization and other-avoidance
is marginally significant, F(2,80.62)=2.99, p<0.1.
Further investigation of the interaction effect between the learning analytics visualization and task-approach shows a significant difference in narrativity scores between the users of the Top Contributors visualization and the users of the Quality visualization (z=-3.22, p<0.01) and between the users of the Class Average visualization and the users of the Top Contributors visualization (z=4.31, p<0.001). The positive association between the task-approach scale and narrativity scores was largest for the Top Contributors visualization, followed by the positive association for the users of the Quality visualization, while a negative association was found for the users of the Class Average visualization.
Probing the interaction effect between learning analytics visuali-
zation and task-avoidance shows a significant difference in narra-
tivity scores between the users of Top Contributors compared to
the users of Quality (z=-3.00, p<0.01). The effect of task-avoidance on narrativity was negative for the users of the Quality visualization, while it was positive on the narrativity scores of the users of the other two visualizations.
Further exploration on the interaction effect between learning
analytics visualization and self-approach exhibited a significant
difference in the scores of narrativity between the users of Class
Average visualization and those of the Quality visualization (z=-
2.32, p<0.05). Self-approach scale scores were positively associated with narrativity scores for the users of the Class Average visualization, whereas they were negatively associated with the narrativity scores of the messages posted by the users of the Top Contributors and Quality visualizations.
Finally, exploring the interaction effect between learning analytics
visualization and self-avoidance goal orientation exhibited a significant difference in the scores of narrativity between the users of the Top Contributors and Quality visualizations (z=2.61, p<0.05). Self-avoidance scale scores were positively associated with narrativity scores for the users of the Quality visualization, whereas they were negatively associated with the narrativity scores of the users of both the Top Contributors and Class Average visualizations.
3.4 Deep Cohesion
The deep cohesion model revealed significant interaction effects
between learning analytics visualization and task-approach
(F(2,82.38)=10.02, p<0.001), learning analytics visualization and
self-avoidance scales (F(2,82.28)=4.36, p<0.05), and learning
analytics visualization and other-avoidance (F(2,81.14)=3.65,
p<0.05). Also, the interaction between learning analytics visuali-
zation and task-avoidance was marginally significant,
F(2,81.62)=2.94, p<0.1 (Further details in Table 6).
Further investigation of the interaction effect between the learning analytics visualization and task-approach shows a significant difference in the deep cohesion scores between the users of the Class Average visualization and the users of the Top Contributors visualization (z=4.33, p<0.001), and between the deep cohesion scores of the users of the Top Contributors and Quality visualizations (z=-3.99, p<0.001).
Table 5. Inferential Statistic for Model fit assessment - RQ2

                        χ2         df   R2     AIC
Narrativity
  Null Model            —          —    0.51   251.70
  Fixed Model           74.42***   20   0.68   217.28
Deep Cohesion
  Null Model            —          —    0.36   246.40
  Fixed Model           56.64***   20   0.44   229.76
Syntactic Simplicity
  Null Model            —          —    0.14   248.32
  Fixed Model           44.10**    20   0.28   244.22
Referential Cohesion
  Null Model            —          —    0.69   245.88
  Fixed Model           57.47***   20   0.77   228.42
Word Concreteness
  Null Model            —          —    0.44   226.46
  Fixed Model           52.99***   20   0.68   213.47

χ2 values show the differences between the model in the current row and the model in the previous row.
Significance codes: *** p<0.001, ** p<0.01, * p<0.05
Table 6. Analysis of the fixed effects for the model - RQ2

Deep Cohesion
Variable                             β        SE      95% CI Lower   95% CI Upper
Intercept (Class Average)             0.252    0.179   -0.105          0.609
Viz (Top Contributors)                0.195    0.235   -0.276          0.665
Viz (Quality)                        -0.165    0.198   -0.561          0.231
TaskAp                               -0.019    0.160   -0.339          0.301
TaskAv*                               0.681    0.308    0.065          1.296
SelfAp                                0.009    0.212   -0.416          0.433
SelfAv.                              -0.867    0.468   -1.803          0.070
OtherAp*                             -0.771    0.322   -1.414         -0.128
OtherAv*                              1.107    0.446    0.214          1.999
Viz (Top Contributors)*TaskAp***      1.523    0.351    0.820          2.225
Viz (Top Contributors)*TaskAv        -0.767    0.499   -1.764          0.231
Viz (Top Contributors)*SelfAp        -0.740    0.690   -2.119          0.640
Viz (Top Contributors)*SelfAv        -0.074    0.741   -1.554          1.406
Viz (Top Contributors)*OtherAp        0.162    0.433   -0.705          1.028
Viz (Top Contributors)*OtherAv       -0.604    0.608   -1.820          0.612
Viz (Quality)*TaskAp                  0.029    0.258   -0.488          0.545
Viz (Quality)*TaskAv*                -0.886    0.370   -1.626         -0.146
Viz (Quality)*SelfAp                 -0.167    0.349   -0.864          0.530
Viz (Quality)*SelfAv*                 1.375    0.557    0.262          2.489
Viz (Quality)*OtherAp                 0.508    0.401   -0.293          1.310
Viz (Quality)*OtherAv**              -1.333    0.516   -2.365         -0.300

Significance codes: *** p<0.001, ** p<0.01, * p<0.05, . p<0.1 (marginal)
All variables are scaled.
The positive association between task-approach scale scores and deep cohesion was largest for the Top Contributors visualization, while a much smaller positive association was found for the Quality visualization, followed by a negative association for the users of the Class Average visualization.
Further exploration of the interaction effect between the learning analytics visualization and self-avoidance exhibited a significant difference in the scores of deep cohesion between the Class Average visualization users and the Quality visualization users (z=2.47, p<0.05), and a marginally significant difference between the Top Contributors visualization users and the Quality visualization users (z=2.21, p<0.1). Self-avoidance scale scores were positively associated with deep cohesion scores for the users of the Quality visualization, whereas they were negatively associated with the deep cohesion scores of the messages posted by the Top Contributors and Class Average visualization users.
Further investigation of the interaction effect between the learning analytics visualization and other-avoidance showed a significant difference in the scores of deep cohesion between the users of the Class Average visualization and those of the Quality visualization (z=-2.58, p<0.05). The association between other-avoidance scale scores and deep cohesion scores was negative for the users of the Quality visualization, while it was positive for the users of both the Top Contributors and Class Average visualizations.
3.5 Syntactic Simplicity
Analysis for the syntactic simplicity principal component revealed a significant interaction effect between the learning analytics visualization and self-avoidance (F(2,80.99)=3.46, p<0.05).
Further exploration of the interaction effect between the learning analytics visualization and self-avoidance exhibited a significant difference in the scores of syntactic simplicity between the Top Contributors visualization users and the Quality visualization users (z=2.56, p<0.05). Self-avoidance scale scores were positively associated with syntactic simplicity scores for the users of the Quality visualization, whereas they were negatively associated with the syntactic simplicity of the messages posted by the Top Contributors and Class Average visualization users.
3.6 Referential Cohesion
Analysis of mixed models for referential cohesion revealed a significant interaction effect between learning analytics visualization and task-approach (F(2,78.05)=7.44, p<0.01), learning analytics visualization and self-avoidance (F(2,75.33)=3.93, p<0.05), and learning analytics visualization and other-approach (F(2,73.33)=3.61, p<0.05).
Further investigation of the interaction effect between learning analytics visualization and task-approach showed a significant difference in the scores of referential cohesion between the users of the Top Contributors visualization and the users of the Quality visualization (z=-3.066, p<0.01), and between the Class Average visualization users and Top Contributors users (z=3.86, p<0.001). The positive association between task-approach scale scores and referential cohesion was largest for the Top Contributors visualization, while a much smaller positive association was found for the Quality visualization, followed by a negative association for the users of the Class Average visualization.
Probing the interaction effect between learning analytics visualization and self-avoidance showed a significant difference in referential cohesion scores between the users of the Top Contributors visualization and the users of the Quality visualization (z=2.77, p<0.05), and a marginally significant difference between the users of the Class Average visualization and the users of the Top Contributors visualization (z=-2.22, p<0.1). Self-avoidance scale scores were positively associated with referential cohesion scores for the users of the Quality and Class Average visualizations, whereas they were negatively associated with referential cohesion scores for the messages posted by the users of the Top Contributors visualization.
Further exploration of the interaction effect between learning analytics visualization and other-approach exhibited a significant difference in the scores of referential cohesion between the users of the Top Contributors visualization and those of the Quality visualization (z=2.68, p<0.05). The other-approach scale scores were positively associated with referential cohesion scores for the users of the Quality visualization, whereas they were negatively associated with referential cohesion scores for the messages posted by the Top Contributors and Class Average visualization users.
3.7 Word Concreteness
Further analysis of the models for word concreteness uncovered a
significant interaction between learning analytics visualization
and task-approach (F(2,80.24)=4.41, p<0.05), learning analytics
visualization and task-avoidance (F(2,80.17)=4.00, p<0.05),
learning analytics visualization and other-approach
(F(2,80.57)=3.68, p<0.05), and learning analytics visualization
and other-avoidance scales (F(2,80.06)=4.35, p<0.05).
Further investigation of the interaction effect between learning analytics visualization and task-approach showed a significant difference in the word concreteness scores between users of the Top Contributors visualization and the Quality visualization (z=-2.59, p<0.05), as well as between users of the Top Contributors and Class Average visualizations (z=2.90, p<0.01). The positive association between the task-approach scale and word concreteness scores was largest for the Top Contributors visualization, followed by the users of the Quality visualization and the Class Average visualization.
Probing the interaction effect between learning analytics visuali-
zation and task-avoidance showed a significant difference in the
word concreteness scores between the users of the Top Contribu-
tors visualization and the Quality visualization (z=2.63, p<0.05).
The association between task-avoidance scale scores and word concreteness was positive for the Quality visualization users, while it was negative for the users of the other two visualizations.
Further investigation of the interaction effect between learning
analytics visualization and other-approach showed a significant
difference in the word concreteness scores between users of the
Class Average visualization and the users of the Top Contributors
visualization (z=-2.69, p<0.05). The association between other-approach scale scores and word concreteness was positive for the users of the Class Average and Quality visualizations, while it was negative for the users of the Top Contributors visualization.
Finally, the interaction effect between learning analytics visualiza-
tion and other-avoidance showed a significant difference in the
word concreteness scores between the users of the Class Average
visualization and the users of both Top Contributors (z=2.67,
p<0.05) and Quality visualizations (z=2.64, p<0.05). The associa-
tion between other-avoidance scale scores and word concreteness
scores was negative for the users of the Class Average visualiza-
tion, while the association was positive for the Top Contributors
and Quality visualizations.
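The pairwise comparisons reported throughout this section contrast the simple slopes of an AGO scale across visualization groups. A simplified illustration of such a contrast follows; it treats the two slope estimates as independent, which ignores the covariance a full model-based contrast would use, and the numbers are made up rather than taken from the study.

```python
# Simplified z comparison of two simple slopes. Assumes independent
# estimates; a real post-hoc test would use the model's covariance
# matrix. The slope and SE values are illustrative, not the study's.
import math

def slope_difference_z(b1, se1, b2, se2):
    """z statistic for the difference between two slope estimates,
    treating them as independent (a simplification)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Made-up slope estimates for two visualization groups.
z = slope_difference_z(0.42, 0.11, 0.05, 0.10)
print(round(z, 2))  # 2.49
```

A z value beyond roughly ±1.96 would correspond to p<0.05 for a single two-tailed comparison, before any multiple-comparison adjustment.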
4. DISCUSSION AND CONCLUSIONS
The overall goal of this study was to investigate the effect of dif-
ferent information presented through learning analytics visualiza-
tions on the posting behavior of students with different self-
reported achievement goal orientations in online group discussion
activities.
4.1 Interpretation of the results
4.1.1 Different Visualizations and Students’ Quantity
of Posts Considering their AGOs
Our analysis showed that after controlling for achievement goals,
some learning analytics visualizations had positive and some had
negative effects on students’ quantity of posts.
For students who used the Top Contributors and Quality visualizations, higher scores on the other-approach scale were significantly associated with higher numbers of posts, whereas for those who used the Class Average visualization, the association with the count of posts was negative.
The positive effect of Top Contributors visualization on the quan-
tity of posts is in alignment with prior research showing that stu-
dents with other-approach goals assess their competence level in
terms of normative standards and aim at outperforming their peers
[13]. In this case, the students who used the Top Contributors
visualization may have interpreted the norm based on the contri-
bution level of those who had the highest number of postings in
the class. Another interpretation is that they may have strived to
gain visibility by the rest of the class, which means being listed as
top contributors themselves. Hence, this positive association be-
tween the other-approach scale scores and numbers of posts for
users of this visualization is not surprising. The Quality visualiza-
tion may have motivated students with orientation towards other-
approach goal to outperform the rest of the class in terms of the
depth and breadth of the key concepts covered in their messages.
In order to reach that goal, this visualization may have indirectly
encouraged them to contribute more.
For the Class Average visualization, the students' judgment of how their peers were doing may have been influenced by the displayed average performance of the entire class. Research shows
that students who adopt normative standards, through other-
approach, usually rely on the instructor’s criteria, as they believe
this can best lead to outperforming their peers if no other visible
norm exists [32]. In light of this, the real-time updates presented in the visualization may push the instructor's clearly expressed criteria into the background behind the analytics metrics. If the class average is below the teacher's expectation at any given time, students with an other-approach tendency may adopt it as the normative standard for their goal.
Previous research shows that normative goal-standards can range
from modest to extreme [32]. It might be that learning analytics
visualization can be an influencing factor in determining the end
points of this range. The Top Contributors and Quality visualizations encourage setting a higher standard than the class average, one that is more challenging to achieve and requires more effort. This is in accordance with the
idea that if desirable participation behaviors are explicitly exposed
to students oriented towards performance goals (i.e. other-based
and self-based goals in 3×2 AGO model), it can encourage them
to engage more productively in the discussion activity [40].
4.1.2 Different Visualizations and Students’ Quality
of Posts Considering their AGOs
Our results showed that after controlling for achievement goals,
some learning analytics visualizations had positive and some had
negative effects on students’ quality of posts observed through
discourse features (i.e., Narrativity, Deep Cohesion, Referential
Cohesion, and Word Concreteness). For each achievement goal, a summary of the significant associations is reported in Table 7.
In Table 7, positive associations show that higher scores on a
specific AGO scale are associated with higher scores on a specific
discourse component when using the visualization, whereas nega-
tive associations indicate that higher scores on an AGO scale are
associated with lower scores on discourse features for a particular
visualization. Table 7 uncovers non-homogeneous findings across different goal orientations and different visualizations.
Of the five discourse features presented in Table 7, the most frequently highlighted component is deep cohesion. The importance of cohesion in written and oral communication has long been emphasized by cognitive scientists aiming to understand how the human mind constructs meaning from discourse [10]. In fact, measuring cohesion was the main driver for the development of Coh-Metrix, which later expanded to other discourse features. Findings in the collaborative learning literature fairly consistently show positive outcomes of deep cohesion. Higher levels of deep cohesion reflect deeper integration of ideas with background knowledge and fewer conceptual gaps, as well as better individual and group performance [9].
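As a rough intuition for what a cohesion metric captures, the sketch below computes word overlap between adjacent sentences. This is a naive proxy written for illustration only; it is not how Coh-Metrix actually computes deep or referential cohesion.

```python
# Naive cohesion proxy: content overlap between adjacent sentences.
# Illustrative only -- not the Coh-Metrix algorithm.
def adjacent_overlap(sentences):
    """Mean proportion of shared words (Jaccard) between adjacent
    sentences; higher values suggest more lexically cohesive text."""
    scores = []
    for a, b in zip(sentences, sentences[1:]):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        scores.append(len(wa & wb) / max(len(wa | wb), 1))
    return sum(scores) / max(len(scores), 1)

cohesive = ["the model links ideas", "the ideas build on prior knowledge"]
disjoint = ["the model links ideas", "lunch was served at noon"]
print(adjacent_overlap(cohesive) > adjacent_overlap(disjoint))  # True
```

Coh-Metrix goes well beyond this, using latent semantic analysis and causal connectives to capture deep (conceptual) rather than purely lexical cohesion [10, 14].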
Our non-homogeneous results across different visualizations show that one visualization could be accompanied by a positive association between a certain goal and a discourse feature, while another visualization could be accompanied by a negative association for the same achievement goal and the same discourse component. For instance, those with a higher tendency towards self-avoidance goals
constructed messages with higher deep cohesion when using the
Quality visualization but lower deep cohesion when using the
Class Average or Top Contributors visualizations. As discussed
previously, high deep cohesion is associated with positive out-
comes and thus, it is highly desirable [9]. Students with avoidance goals often suffer from a lack of task focus and hence are more likely to produce messages with low deep cohesion. It seems that the Quality
visualization may have played a positive role in directing the stu-
dents with high self-avoidance goals towards overcoming task
disrupting thoughts and integrating more cohesive messages,
while the other two visualizations may have played a negative
role. A possible explanation is that the presentation of information in the Quality visualization was more focused on improvements of the self over time (key concepts covered), which can increase feelings of self-efficacy and self-confidence, and hence improve task focus [32].
Similarly, our non-homogeneous findings across different achievement goals indicate that a particular visualization could be accompanied by a positive association between one achievement goal and a discourse feature, whereas the same visualization could be accompanied by a negative association for the same discourse feature and another achievement goal. For instance, despite the positive outcomes of the Quality visualization for students oriented towards self-avoidance goals, the role this visualization played in the construction of deeply cohesive messages appeared to be negative for individuals with a higher tendency towards task-avoidance. It seems
that for students with task-avoidance strivings, seeing the concepts they had not covered increased their stress about doing poorly in the discussions.
A highlighted aspect of the summary table is the presence of
negative valence goals. In the literature, avoidance goals – regard-
less of the competence definition – have mostly been associated
with negative outcomes because of their tendency to avoid failure.
Low cognitive engagement, low self-efficacy, high anxiety and
feeling of shame, confusion, disorganized study habits, task-
disrupting thoughts, help-avoidance, poor performance and inter-
est are among destructive outcomes of mastery-avoidance (task-
avoidance) and performance-avoidance goals (other-avoidance
and self-avoidance) [32]. Therefore, providing feedback that helps reduce some of the negative aspects of these avoidance goals is desirable, in addition to the provision of the information shown in the learning analytics visualizations.
The most visible achievement goal with positive valence in Table 7 is task-approach. Prior research shows that students with a high task-approach tendency in a particular context, compared to others, find the topic interesting, have positive feelings about the
task and perceive it as valuable, use deep learning strategies and
appreciate both cooperativeness and help seeking [32]. Therefore,
it is not surprising that their deep approach to learning can help
them mentally connect ideas and construct messages that show
stronger signs of deep cohesion [1]. Our findings indicate that both the Quality and Top Contributors visualizations had a positive association with deep cohesion when controlling for task-approach scores. This
finding is not surprising for the Quality visualization, as it directly
promotes coherent discussion of some key concepts and logical
integration with related ideas. As for Top Contributors, quality
may indirectly be promoted through externalization of high stand-
ards on the contribution level. Therefore, it may encourage deeper
investigation into the topic of discussion.
4.2 Implications for Theory and Practice
The findings present some methodological, theoretical, and prac-
tical implications. On a methodological side, the study shows the
importance of assessing learning analytics visualizations in au-
thentic course settings to evaluate the actual effect of the present-
ed information on students’ behaviors and outcomes. Combining
traditionally collected data through self-reported surveys, such as
individual achievement goal orientation, with fine grained data
such as interaction logs and generated artifacts. In this study, the
effect of different learning analytics visualizations on students’
behavior was uncovered only when looking at the fine-grained
data and after controlling for students’ achievement goals, as mo-
tivational constructs. In addition, analysis of discourse patterns
provided in-depth insight into the quality of students’ contribu-
tions that complemented traditional metrics that rely on quantity
of contributions.
The study poses some important theoretical and practical implications for further research on and use of learning dashboards and tools, encouraging the adoption of effective instructional practices to support their use. From the instructional design point of view,
our findings show the potential of learning analytics visualizations as a feedback mechanism for students in online learning
environments. In our study, the instructional design of the discus-
sion activity followed guidelines based on theories and practices
for effective and productive discussions. We are continuing to
investigate both the effect of pedagogical framing of learning
analytics visualization and the effect of connection of information
presented to the learning activities on students’ learning out-
comes. Also, our results confirm the findings of the limited re-
search in this area that reveals learning analytics in the form of
dashboards or reports can lead to changes in activities in online discussions that are sometimes intentional and goal-oriented and sometimes unconscious [41].

Table 7. Summary of Mixed Model Analysis for Interaction between Learning Analytics Visualization and AGO Scale on Quality of Posts

AGO Scale         Visualization     Dependent Variable     Assoc. Direction
Task-Approach     Class Average     Narrativity            -
                                    Deep Cohesion          -
                                    Referential Cohesion   -
                                    Word Concreteness      +
                  Top Contributors  Narrativity            +
                                    Deep Cohesion          +
                                    Referential Cohesion   +
                                    Word Concreteness      +
                  Quality           Narrativity            +
                                    Deep Cohesion          +
                                    Referential Cohesion   +
                                    Word Concreteness      +
Task-Avoidance    Class Average     Narrativity            +
                                    Deep Cohesion          +
                                    Word Concreteness      -
                  Top Contributors  Narrativity            +
                                    Deep Cohesion          +
                                    Word Concreteness      -
                  Quality           Narrativity            -
                                    Deep Cohesion          -
                                    Word Concreteness      +
Self-Approach     Class Average     Narrativity            +
                  Top Contributors  Narrativity            -
                  Quality           Narrativity            -
Self-Avoidance    Class Average     Narrativity            -
                                    Deep Cohesion          -
                                    Syntactic Simplicity   -
                                    Referential Cohesion   +
                  Top Contributors  Narrativity            -
                                    Deep Cohesion          -
                                    Syntactic Simplicity   -
                                    Referential Cohesion   -
                  Quality           Narrativity            +
                                    Deep Cohesion          +
                                    Syntactic Simplicity   +
                                    Referential Cohesion   +
Other-Approach    Class Average     Referential Cohesion   -
                                    Word Concreteness      +
                  Top Contributors  Referential Cohesion   -
                                    Word Concreteness      -
                  Quality           Referential Cohesion   +
                                    Word Concreteness      +
Other-Avoidance   Class Average     Deep Cohesion          +
                                    Word Concreteness      -
                  Top Contributors  Deep Cohesion          +
                                    Word Concreteness      +
                  Quality           Deep Cohesion          -
                                    Word Concreteness      +
Our research has implications for direction of empirical studies
around learning analytics visualizations and subsequently their
designs. The findings of our field study reveal that the effect of a
particular learning analytics visualization on students’ behavior
differs when students are inclined to different achievement goals.
This can motivate further empirical studies to investigate the con-
nection between other theoretical constructs that underlie individ-
ual differences and effectiveness of learning analytics dashboards.
Such studies can help move towards developing a body of
knowledge that could guide design and application of learning
analytics tools that are theoretically informed.
For instance, our results showed that the use of a particular learning analytics visualization can be associated with positive changes in the learning behavior of students with a tendency towards a certain goal, even an avoidance goal. We know that avoidance goals have mostly been associated with negative outcomes. Hence, our findings encourage further examination of the role that personalized interventions can play in encouraging positive changes that
may lead to improved learning processes and outcomes. If the
feedback provided through these visualizations alleviates negative
outcomes associated with pursuit of avoidance goals, such as anx-
iety and low self-efficacy, it may even have the potential to direct
students towards pursuit of approach goals which according to
research have been associated with more positive learning out-
comes [32].
Conversely, our results show that each of the three visualizations can be negatively associated with the learning behavior of students with certain individual differences. For example, showing the Top Contributors visualization to students with a tendency towards self-avoidance was negatively associated with four discourse features in their postings.
carefully analyzing the interaction effect of visualizations with
goal orientations. Other examples can be extracted from Table 7.
Such insight can encourage adoption of more stringent require-
ments for empirical evaluation of learning analytics visualizations
before deploying them to wide-scale use.
The choice of visualizations in this research was guided by their
ability to engage different goals individual students may pursue.
We did not tap into knowledge from the information visualization field. As our results show, both what is presented and how it is presented very likely have different effects on individual students, with some visualizations being more effective than others. A systematic study is needed to understand the effect of different learning analytics visualization designs by controlling for certain individual differences, eventually leading to clear guidelines on how to provide personalized learning analytics [4].
4.3 Limitations and Future Work
The current work has several limitations that require further re-
search to complement our results. First, our learning analytics
visualizations were integrated into the learning management sys-
tem by providing a link that required additional effort and motiva-
tion on students’ part to click and be directed to the visualization.
This may have affected how many students viewed the visualizations, and how often. Future work should explore other integration
options and their influence on the adoption and engagement with
the tool, while considering the platform and affordances it pro-
vides.
Secondly, in this study, we considered achievement goal orientation, a theoretical construct that could reveal individual differences with respect to motivational factors in an educational context. However, other aptitude constructs that illuminate students' preferred approaches to learning [3] can also help us understand how particular students interact with learning analytics visualizations and how those visualizations affect their learning behaviors. Additionally, since we are dealing with visual information and writing, further linking motivational dispositions to other individual traits, such as attention and perception, processing and evaluation, and, in the case of discussions, argumentative writing, is needed to build a fuller understanding of how visualizations influence individuals. In
our current set of studies we are also probing for an individual’s
numeracy, graph literacy and other related cognitive characteris-
tics.
Our findings open several other directions for future research.
First, in learning analytics of discussion activities, listening behavior, i.e., reading other students' posts, is critical for effective discussion [39, 40]. As listening behavior constitutes over 75% of discussion activities, understanding the effect of learning analytics visualizations on listening behavior would complement our
research. Next, instructional scaffolds can produce desirable effects on the development of critical thinking [16]. From this arises the question of how scaffolding, or the lack thereof, fosters the positive association between engaging with the visualizations and posting behaviors. Following a suggested framework [38], in a follow-up
study, we will investigate the effect of reflection and goal setting
by embedding a space around the visualization for students to set
their goals and write a reflection journal and have it appear every
time they view the visualizations.
This work focused on learning analytics for discussions. Investi-
gating the association between individual characteristics and dif-
ferent ways of visualizing other learning activities is needed to
generalize our findings.
5. REFERENCES
[1] Akyol, Z. and Garrison, D.R. 2011. Understanding cognitive
presence in an online and blended community of inquiry:
Assessing outcomes and processes for deep approaches to
learning. British Journal of Educational Technology. 42, 2,
233–250.
[2] Arnold, K. and Pistilli, M. 2012. Course signals at Purdue:
using learning analytics to increase student success. Proc. of
the 2nd Int. Conf. on Learning Analytics and Knowledge,
267–270.
[3] Biggs, J.B. 1987. Student Approaches to Learning and Stud-
ying. Research Monograph. ERIC.
[4] Brusilovsky, P., Hsiao, I.-H. and Folajimi, Y. 2011. Quiz-
Map: open social student modeling and adaptive navigation
support with TreeMaps. Towards Ubiquitous Learning. 71–
82.
[5] Bull, S. and Kay, J. 2008. Metacognition and open learner
models. The 3rd Workshop on Meta-Cognition and Self-
Regulated Learning in Educational Technologies, at
ITS2008, 7–20.
[6] Corrin, L. and de Barba, P. 2014. Exploring students’ inter-
pretation of feedback delivered through learning analytics
dashboards. Proc. of the ascilite 2014 conference, 201-205.
[7] Dawson, S., Bakharia, A. and Heathcote, E. 2010. SNAPP:
Realising the affordances of real-time SNA within net-
worked learning environments. Proc. of the 7th Int. Conf. on
Networked Learning, 125–133.
[8] Dimitrova, V. 2003. STyLE-OLM: Interactive open learner
modelling. Int. Journal of Artificial Intelligence in Education
(IJAIED). 13, 35–78.
[9] Dowell, N.M., Cade, W.L., Tausczik, Y., Pennebaker, J. and
Graesser, A.C. 2014. What works: Creating adaptive and in-
telligent systems for collaborative learning support. Intelli-
gent Tutoring Systems, 124–133.
[10] Dowell, N.M., Graesser, A.C. and Cai, Z. Language and
Discourse Analysis with Coh-Metrix: Applications from Ed-
ucational Material to Learning Environments at Scale, Jour-
nal of Learning Analytics, in press.
[11] Elliot, A.J. 1999. Approach and avoidance motivation and
achievement goals. Educational Psychologist. 34, 3, 169-
189.
[12] Elliot, A.J. 2005. A conceptual
history of the achievement goal construct. Handbook of
Competence and Motivation. 16, 52–72.
[13] Elliot, A.J., Murayama, K. and Pekrun, R. 2011. A 3×2
achievement goal model. Journal of Educational Psycholo-
gy. 103, 3, 632.
[14] Foltz, P.W., Kintsch, W. and Landauer, T.K. 1998. The
measurement of textual coherence with latent semantic anal-
ysis. Discourse Processes. 25, 2-3, 285–307.
[15] Friedman, J., Hastie, T. and Tibshirani, R. 2001. The ele-
ments of statistical learning. Springer series in statistics
Springer, Berlin.
[16] Gašević, D., Adesope, O., Joksimović, S. and Kovanović, V.
2015. Externally-facilitated regulation scaffolding and role
assignment to develop cognitive presence in asynchronous
online discussions. The Internet and Higher Education. 24,
53–65.
[17] Govaerts, S., Verbert, K., Duval, E. and Pardo, A. 2012. The
student activity meter for awareness and self-reflection.
CHI’12 Extended Abstracts on Human Factors in Compu-
ting Systems, 869–884.
[18] Graesser, A.C., McNamara, D.S. and Kulikowich, J.M.
2011. Coh-Metrix providing multilevel analyses of text
characteristics. Educational Researcher. 40, 5, 223–234.
[19] Joksimovic, S., Gasevic, D., Kovanovic, V., Adesope, O.
and Hatala, M. 2014. Psychological characteristics in cogni-
tive presence of communities of inquiry: A linguistic analy-
sis of online discussions. The Internet and Higher Educa-
tion. 22, 1–10.
[20] Kanuka, H. and Anderson, T. 2007. Online social inter-
change, discord, and knowledge construction. International
Journal of E-Learning & Distance Education. 13, 1, 57–74.
[21] Kay, J., Maisonneuve, N., Yacef, K. and Reimann, P. 2006.
The big five and visualisations of team work activity. Proc.
of the 8th Int. Conf. on Intelligent Tutoring Systems, 197–
206.
[22] Kerly, A., Ellis, R. and Bull, S. 2008. CALMsystem: a con-
versational agent for learner modelling. Knowledge-Based
Systems. 21, 3, 238–246.
[23] Kruse, A. and Pongsajapan, R. 2012. Student-centered learn-
ing analytics. CNDLS Thought Papers, 1–9.
[24] Leony, D., Pardo, A., de la Fuente Valentín, L., de Castro,
D.S. and Kloos, C.D. 2012. GLASS: a learning analytics
visualization tool. Proc. of the 2nd Int. Conf. on Learning
Analytics and Knowledge, 162–163.
[25] Luppicini, R. 2007. Review of computer mediated commu-
nication research for education. Instructional Science. 35, 2,
141–185.
[26] Maehr, M.L. 1989. Thoughts about motivation. Research on
motivation in education: Goals and cognitions. 3, 1, 299–
315.
[27] Mazza, R. and Milani, C. 2004. Gismo: a graphical interac-
tive student monitoring tool for course management systems.
Int. Conf. on Technology-Enhanced Learning, 1–8.
[28] Nakahara, J., Hisamatsu, S., Yaegashi, K. and Yamauchi, Y.
2005. iTree: Does the mobile phone encourage learners to be
more involved in collaborative learning? Proc. of the Conf.
on Computer Support for Collaborative Learning: Learning
2005: the next 10 years!, 470–478.
[29] Rovai, A.P. 2007. Facilitating online discussions effectively.
The Internet and Higher Education. 10, 1, 77–88.
[30] Santos, J.L., Verbert, K., Govaerts, S. and Duval, E. 2013.
Addressing learner issues with StepUp!: an Evaluation.
Proc. of the 3rd Int. Conf. on Learning Analytics and
Knowledge, 14–22.
[31] Schielzeth, H. and Nakagawa, S. 2013. Nested by design:
model fitting and interpretation in a mixed model era. Meth-
ods in Ecology and Evolution. 4, 1, 14–24.
[32] Senko, C., Hulleman, C.S. and Harackiewicz, J.M. 2011.
Achievement goal theory at the crossroads: Old controver-
sies, current challenges, and new directions. Educational
Psychologist. 46, 1, 26–47.
[33] Shum, S.B. and Ferguson, R. 2012. Social learning analytics.
Journal of Educational Technology & Society. 15, 3, 3–26.
[34] Snow, R.E. 1991. Aptitude-treatment interaction as a frame-
work for research on individual differences in psychothera-
py. Journal of Consulting and Clinical Psychology. 59, 2,
205-216.
[35] Verbert, K., Duval, E., Klerkx, J., Govaerts, S. and Santos,
J.L. 2013. Learning analytics dashboard applications. Ameri-
can Behavioral Scientist. 57, 10, 1500-1509.
[36] Winne, P.H. 2010. Improving measurements of self-
regulated learning. Educational Psychologist. 45, 4, 267–
276.
[37] Winne, P.H. and Hadwin, A.F. 1998. Studying as self-
regulated learning. Metacognition in Educational Theory
and Practice. 93, 27–30.
[38] Wise, A.F. 2014. Designing pedagogical interventions to
support student use of learning analytics. Proc. of the 4th Int.
Conf. on Learning Analytics and Knowledge, 203–211.
[39] Wise, A.F., Hausknecht, S.N. and Zhao, Y. 2014. Attending
to others’ posts in asynchronous discussions: Learners’
online “listening” and its relationship to speaking. Int. Jour-
nal of Computer-Supported Collaborative Learning. 9, 2,
185–209.
[40] Wise, A.F., Speer, J., Marbouti, F. and Hsiao, Y.-T. 2013.
Broadening the notion of participation in online discussions:
examining patterns in learners’ online listening behaviors.
Instructional Science. 41, 2, 323–343.
[41] Wise, A., Zhao, Y. and Hausknecht, S. 2014. Learning ana-
lytics for online discussions: Embedded and extracted ap-
proaches. Journal of Learning Analytics. 1, 2, 48–71.
[42] Xu, R. 2003. Measuring explained variation in linear mixed
effects models. Statistics in Medicine. 22, 22, 3527–3541.
[43] Yuan, J. and Kim, C. 2014. Guidelines for facilitating the
development of learning communities in online courses.
Journal of Computer-Assisted Learning. 30, 3, 220–232.