S. Trausan-Matu et al. (Eds.): ITS 2014, LNCS 8474, pp. 124–133, 2014.
© Springer International Publishing Switzerland 2014
What Works: Creating Adaptive and Intelligent Systems
for Collaborative Learning Support
Nia M. Dowell1, Whitney L. Cade1, Yla Tausczik2, James Pennebaker3,
and Arthur C. Graesser1
1 Institute for Intelligent Systems, The University of Memphis, Memphis TN 38152 USA
{ndowell,wlcade,graesser}@memphis.edu
2 Department of Social Computing, Carnegie Mellon University, Pittsburgh PA 15213 USA
ylataus@cs.cmu.edu
3 Department of Psychology, University of Texas at Austin, Austin TX 78705 USA
pennebaker@mail.utexas.edu
Abstract. An emerging trend in classrooms is the use of collaborative learning environments that promote lively exchanges between learners in order to facilitate learning. This paper explored the possibility of using discourse features to predict student and group performance during collaborative learning interactions. We investigated the linguistic patterns of group chats, within an online collaborative learning exercise, on five discourse dimensions using an automated linguistic facility, Coh-Metrix. The results indicated that students who engaged in deeper cohesive integration and generated more complicated syntactic structures performed significantly better. The overall group-level results indicated that collaborative groups who engaged in deeper cohesive and expository style interactions performed significantly better on posttests. Although students do not directly express knowledge construction and cognitive processes, our results indicate that these states can be monitored by analyzing language and discourse. Implications are discussed regarding computer-supported collaborative learning and ITSs to facilitate productive communication in collaborative learning environments.
Keywords: collaborative interactions, learning, computational linguistics,
Coh-Metrix.
1 Introduction
Current educational practices suggest an emerging trend toward collaborative problem solving or group learning [1,2]. This is reflected in the more recent upsurge of computer-mediated collaborative learning or groupware tools, such as email, chat, threaded discussion, massive open online courses (MOOCs), and trialog-based intelligent tutoring systems (ITSs). The growing adoption of collaborative learning environments is supported by research that shows that, in general, collaboration can increase group performance and individual learning outcomes (see [3] for a review). The interest of educational researchers in this topic has motivated a substantial area of research aimed at identifying and improving collaborative knowledge building processes using both ITSs and computer-supported collaborative learning (CSCL) systems [4]. Previous research in the area of collaborative learning has shown that information in the interaction itself can be useful in predicting the cognitive benefits that students take away [5,6]. For instance, cognitive elaboration, quality argumentation, common ground, task difficulty, and cognitive load have been shown to influence knowledge acquisition of the individual learner and performance of the overall group [7,8,9,10]. One factor that sets collaborative learning apart from individual learning is the use of collaborative language [11,12,13]. Being the root of all computer-mediated collaboration, language, discourse, and communication are critical for organizing a team, establishing a common ground and vision, assigning tasks, tracking progress, building consensus, managing conflict, and a host of other activities [1].
However, previous research in this area has predominantly focused on asynchronous communication, such as email or discussion boards, which requires no real-time interaction between the users. In contrast, synchronous communication, such as text-based IM tools and videoconferencing, involves interactions that are dynamic and constantly updated [14]. Additionally, scholars typically rely on human coding, and have only recently applied automatic or semi-automatic natural language evaluation methods [2], [5], [15,16]. Consequently, we know little about the actual process of knowledge construction in synchronous collaborative learning interactions.
There are several advantages to utilizing textual features as an independent channel for assessing collaborative communication processes. First, in the past, it has been an arduous task to assess communication during collaborative learning due to the complex nature of transcribing spoken conversations. However, advances in technology have increased the use of computer-mediated collaborative learning (CMCL), which allows researchers to track and analyze the language and discourse characteristics in group learning environments. Second, linguistic features derived from CMCL are contextually constrained in a fashion that provides cues regarding the social dynamics and an in-depth understanding of different qualities of interaction [2], [5], [17,18]. Third, recent advances in computational linguistics have convincingly demonstrated that language and discourse features can predict complex phenomena such as personality, deception, emotions, successful group interaction, and even physical and mental health outcomes [19,20,21,22,23,24]. Thus, it is plausible to expect a textual analysis of symmetrical collaborative learning interactions to provide valuable insights into collaborative learning processes and performance.
A number of psychological models of discourse comprehension and learning, such as the construction-integration, constructionist, and indexical-embodiment models, lend themselves nicely to the exploration of how knowledge is constructed in collaborative learning interactions. These psychological frameworks of comprehension have identified the representations, structures, strategies, and processes at multiple levels of discourse [7], [25,26]. Computational linguistic tools that analyze discourse patterns at these multiple levels, such as Coh-Metrix (described later), can be applied to collaborative learning interactions to gain a deeper understanding of the discourse patterns useful for individual and group performance [7], [27,28]. This endeavor also holds the potential for enabling substantially improved collaborative learning environments, both by providing real-time detection of student and group performance and by using this information to develop the student model and trigger collaborative learning support as needed.
In the current study, we employ computational linguistic techniques to systematically explore chat communication during collaborative learning interactions in a large undergraduate psychology course. Specifically, we identify the discourse levels and linguistic properties of collaborative learning interactions that are predictive of learning. Further, we examine how these relations may differ for individual students and overall group-level discourse. A more general overarching goal of this paper is to illustrate some of the advantages of automated linguistic tools for identifying pedagogically valuable discourse features that can be applied in collaborative learning ITS and CSCL environments.
1.1 Brief Overview of Coh-Metrix
Coh-Metrix is a computer program that provides over 100 measures of various types of cohesion, including co-reference, referential, causal, spatial, temporal, and structural cohesion [27,28,29]. Coh-Metrix also has measures of linguistic complexity, characteristics of words, and readability scores. Currently, Coh-Metrix is being used to analyze texts in K-12 for the Common Core standards and states throughout the U.S. More than 50 published studies have demonstrated that Coh-Metrix indices can be used to detect subtle differences in text and discourse [28], [30].
There is a need to reduce the large number of measures provided by Coh-Metrix into a more manageable number of measures. This was achieved in a study that examined 53 Coh-Metrix measures for 37,520 texts in the TASA (Touchstone Applied Science Associates) corpus, which represents what typical high school students have read throughout their lifetime [29]. A principal components analysis was conducted on the corpus, yielding eight components that explained an impressive 67.3% of the variability among texts; the top five components explained over 50% of the variance. Importantly, the components aligned with the language-discourse levels previously proposed in multilevel theoretical frameworks of cognition and comprehension [7], [25,26]. These theoretical frameworks identify the representations, structures, strategies, and processes at different levels of language and discourse, and thus are ideal for investigating trends in learning-oriented conversations. Below are the five major dimensions, or latent components:
Narrativity. The extent to which the text is in the narrative genre, which conveys a story, a procedure, or a sequence of episodes of actions and events with animate beings. Informational texts on unfamiliar topics are at the opposite end of the continuum.
Deep Cohesion. The extent to which the ideas in the text are cohesively connected at a deeper conceptual level that signifies causality or intentionality.
Referential Cohesion. The extent to which explicit words and ideas in the text are connected with each other as the text unfolds.
Syntactic Simplicity. The extent to which sentences have few words and simple, familiar syntactic structures. At the opposite pole are structurally embedded sentences that require the reader to hold many words and ideas in working memory.
Word Concreteness. The extent to which content words are concrete, meaningful, and evoke mental images, as opposed to abstract words.
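The dimension-reduction step described above, a principal components analysis over many correlated text measures, can be sketched with plain NumPy. Everything here is illustrative: the 200 x 6 matrix of "texts" and the two underlying latent dimensions are simulated, not TASA data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical matrix: 200 "texts" x 6 correlated Coh-Metrix-style measures,
# generated from 2 latent dimensions plus a little measurement noise.
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 6))

Xc = X - X.mean(axis=0)                       # center each measure
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / np.sum(s**2)           # proportion of variance per component
scores = Xc @ Vt.T                            # component scores for each "text"
print(var_explained[:2].sum())                # most variance falls in 2 components
```

Because the simulated measures come from two latent dimensions, the first two components dominate; in the TASA analysis the analogous top components became interpretable dimensions such as Narrativity and Deep Cohesion.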
2 Methods
2.1 Participants, Materials, and Procedure
The participants were 851 undergraduates (62.4% female) in two introductory-level psychology courses at a large Midwestern university. Caucasians accounted for 49.6% of participants, while Hispanic/Latino accounted for 22.4%, Asian American for 16.1%, African American for 4.2%, and less than 1% identified as either Native American or Pacific Islander. Twelve participants were discarded as outliers or due to computer failure, resulting in N = 839.
Students logged into an education platform managed by the university at specified times to complete the group interaction task. The education platform was an online course center where students filled out surveys, took quizzes, completed writing assignments, and participated in group chat. Prior to logging into the system, students were instructed that, in order to complete the assignment, they would need to read supplementary material on a few psychological theories (e.g., 10 pages of the textbook).
Once students logged into the educational platform, they were directed to the first quiz. The quiz consisted of 10 multiple-choice questions and tested students' knowledge of the reading material. After completing the quiz, they were randomly matched with other students currently waiting to engage in the chatroom portion of the task. When there were at least 2 and no more than 5 students (M = 4.59), individuals were directed to an instant messaging platform that was built into the educational platform. The group chat began as soon as someone typed the first message and lasted for 20 minutes. The chat window closed automatically after 20 minutes, at which time students took a second quiz of 10 multiple-choice questions. Each student contributed 154 words on average (SD = 104.94) in 19.49 sentences (SD = 12.46). As a group, discussions were about 714.8 words long (SD = 235.68) and 90.62 sentences long (SD = 33.47).
2.2 Performance
On average, students scored better on the posttest after the group discussion than on the pretest. Pretest and posttest scores, for both the individual and the group, were converted to proportions based on the number of correct answers. Group performance was then operationalized as the average of group members' scores on the pretest and posttest.
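This scoring scheme can be sketched in a few lines; the four-member group below is hypothetical, and the 10-item quiz length matches the one described above.

```python
def proportion_scores(raw_correct, n_items=10):
    """Convert counts of correct answers into proportion-correct scores."""
    return [c / n_items for c in raw_correct]

def group_performance(member_scores):
    """Operationalize group performance as the mean of members' proportion scores."""
    return sum(member_scores) / len(member_scores)

# Hypothetical 4-member group: pretest and posttest counts correct out of 10.
pre = proportion_scores([5, 6, 4, 7])
post = proportion_scores([7, 8, 6, 9])
print(group_performance(pre), group_performance(post))
```

The same two functions cover both levels of analysis: individual proportions feed the learner-level models, and their group means feed the group-level models.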
2.3 Data Treatment and Computational Evaluation
The educational platform logged all of the students' contributions. Prior to analysis, the logs were cleaned and parsed to facilitate two levels of evaluation. First, for the individual-level analyses, text files were created that included all contributions from a single student, resulting in 839 text files. Second, we combined all group members' contributions into a single text file per group for the group-level analyses. All files were then analyzed using Coh-Metrix. Following the Coh-Metrix analysis, the scores were screened for outliers. Specifically, the data were Winsorized, with extreme values capped at each variable's upper and lower percentile bounds.
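A percentile-based Winsorizing step of this kind can be sketched as follows. The 5th/95th percentile cutoffs are an assumption for illustration, since the exact bounds used are not reported here, and the score vector is made up.

```python
import numpy as np

def winsorize(x, lower_pct=5, upper_pct=95):
    """Cap values outside the given percentiles at the percentile bounds."""
    lo, hi = np.percentile(x, [lower_pct, upper_pct])
    return np.clip(x, lo, hi)

# Hypothetical Coh-Metrix scores for one variable; 9.0 is an extreme value.
scores = np.array([0.2, 0.5, 0.4, 0.6, 0.5, 9.0])
print(winsorize(scores))
```

Unlike trimming, Winsorizing keeps every observation in the sample; only the magnitude of the extremes is pulled in, which preserves sample size for the mixed-effects models that follow.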
3 Results and Discussion
A mixed-effects modeling approach was adopted for all analyses due to the nested structure of the data (e.g., learners embedded within groups). Mixed-effects modeling is the recommended analysis method for this type of data [31]. Mixed-effects models include a combination of fixed and random effects and can be used to assess the influence of the fixed effects on dependent variables after accounting for any extraneous random effects. The lme4 package in R [32] was used to perform the requisite computation.
The primary analyses focused on identifying discourse features (namely, the five dimensions used to generally describe texts in Coh-Metrix: Narrativity, Deep Cohesion, Referential Cohesion, Syntax Simplicity, and Word Concreteness) of the chat data that are predictive of learning. We also tested whether prior knowledge moderated the effect of discourse on learning performance. Separate models were constructed to analyze discourse at the individual learner and group levels in order to isolate their independent contributions to learning performance. Therefore, there were two sets of dependent measures in the present analyses: (1) individual learners' performance on the multiple-choice posttest and (2) overall groups' performance on the multiple-choice posttest. The independent variables in all models were the 5 discourse features of interest, as well as proportional pretest performance scores, which were included to control for the effect of prior knowledge. The random effect for the individual learner models was participant (839 levels), while the group model used participant (839 levels) nested within group (183 levels) as the random effect.
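A model of this shape can be sketched in Python with statsmodels' MixedLM (the analyses here used lme4 in R); the simulated data, variable names, and coefficients below are all hypothetical. The sketch fits random intercepts for the grouping factor, with pretest and discourse features as fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_groups, size = 40, 4  # hypothetical: 40 chat groups of 4 learners

df = pd.DataFrame({
    "group": np.repeat(np.arange(n_groups), size),
    "pretest": rng.uniform(0.2, 0.8, n_groups * size),
    "deep_cohesion": rng.normal(0, 1, n_groups * size),
    "syntax_simplicity": rng.normal(0, 1, n_groups * size),
})
# Simulated posttest: prior knowledge plus small discourse effects plus noise.
df["posttest"] = (0.3 + 0.5 * df["pretest"] + 0.02 * df["deep_cohesion"]
                  - 0.01 * df["syntax_simplicity"]
                  + rng.normal(0, 0.05, len(df)))

# Random intercept per group; fixed effects for pretest and discourse features.
model = smf.mixedlm("posttest ~ pretest + deep_cohesion + syntax_simplicity",
                    df, groups=df["group"])
result = model.fit()
print(result.params["deep_cohesion"])
```

The moderation test mentioned above corresponds to adding an interaction term (e.g., `pretest:deep_cohesion`) to the same formula and checking its significance.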
Table 1 shows the discourse features that were predictive of learning performance for both the individual and group level models. As can be seen from this table, learners' deep cohesion and syntax are predictive of individual learning performance. Specifically, we see that learners who engaged in deeper cohesive integration and generated more complicated syntactic structures were significantly more likely to score higher on the posttest than learners who used simpler syntax and less deep cohesion. Discourse cohesion, defined as the extent to which the ideas in the text are cohesively connected at a deeper conceptual level that signifies causality or intentionality, is a central component in a number of processes that facilitate individual learning and comprehension [7]. With regard to the findings for deep cohesion, this suggests that students who are learning are engaging in deeper integration of topics with their background knowledge, generating more inferences to address any conceptual and structural gaps, and consequently increasing the probability of knowledge retention. The finding for syntactic structure might provide evidence for the cognitive explanation hypothesis [17]. In general, this suggests that students who produce denser sentence compositions are highly verbal and/or are engaging in increased effort, inference, and elaboration.
The analysis of collaborative group interaction discourse revealed that narrativity and deep cohesion were predictive of learning performance. In particular, the group-level results indicated that collaborative groups who engaged in more expository, or informational, style interactions significantly outperformed those with more narrative discourse. Initially, these findings seem counterintuitive based on previous research which found that narrative text is substantially easier to read, comprehend, and recall than informational text [7], even when the familiarity of the topics and vocabulary is controlled. However, students were instructed to talk about what they read in their textbook, which could suggest that groups that learned more were mirroring their textbook's more expository nature. Additionally, [29] noted that informational texts tend to have higher cohesion, as compared with narratives, and thus cohesion plays an important role in compensating for the greater difficulty of expository style discourse. Deep cohesion was also predictive of learning performance in the group-level interaction analysis.
In addition to the previously mentioned benefits of deep cohesion for learning, cohesion also aids processes important for collaboration, including establishing and maintaining common ground [33], negotiating references [7], and coordinating group members' mental models [34]. High cohesion dialogue may indicate more thorough collaboration and learning in building a shared mental model. This is similar to the way high cohesion text can aid learners in building a solid mental model (relative to low cohesion text). In the context of group interactions, our findings support research showing that collaborative learners may create and preserve shared conceptions of a topic, and this social co-construction facilitates optimal collaboration for knowledge building [35]. We also tested whether prior knowledge moderated the effect of discourse on learning by assessing whether the prior knowledge x discourse feature interaction term significantly predicted posttest scores. However, the interaction term was not significant (p > .05) for any of the models.
Table 1. Descriptive Statistics and Mixed-Effects Model Coefficients

                           Learner Model                 Group Model
Measure                    M      SD     B      SE       M      SD     B      SE
Narrativity                .15    .79    .01    .01      .53    .34   -.04*   .02
Deep Cohesion              .87   1.68    .01**  .003    1.29    .75    .03**  .01
Referential Cohesion      -.52   1.52   -.003   .005   -1.64    .42    .01    .02
Syntax Simplicity          .69    .81   -.01*   .01     1.30    .37   -.001   .02
Word Concreteness        -2.07   1.07   -.01    .001   -2.67    .41   -.03    .01

Note: * p < .05; ** p < .001. Standard error (SE).
4 General Discussion
This paper explored the possibility of using discourse features to predict student and group performance during collaborative learning interactions. Although students do not directly express knowledge construction and cognitive processes, our results indicate that these states can be monitored by analyzing language and discourse. This suggests that it takes a more systematic and deeper analysis of dialogues to uncover diagnostic cues of knowledge construction. Overall, the findings suggest that automated analyses of linguistic characteristics can provide valid representations of individual and group processes that are beneficial for knowledge construction during collaborative learning. In particular, students and collaborative groups can achieve new levels of understanding during collaborative learning interactions where more complex cognitive activities occur, such as analytical thinking, elaboration, and integration of ideas and reasoning.
It is also interesting to note that it takes an analysis of both the student and the collaborative group interaction to obtain a comprehensive understanding of the linguistic properties that influence knowledge acquisition during collaborative group interactions. These findings stimulate an interesting discussion because, until recently, most research on groups has concentrated on the individual people in the group as the cognitive agents [36]. This traditional granularity uses the individual as the unit of analysis both to understand behavioral characteristics of individuals working within groups and to measure performance or knowledge-building outcomes of the individuals in group contexts. However, the present findings support the claims of many in the CSCL community to also consider group levels of granularity in discourse tracking.
The present research has important implications for CSCL and collaborative learning-focused ITSs. In order to tailor interaction feedback to student needs, a system has to be able to automatically evaluate student interactions and to provide adaptive support. The support should be sensitive to these evaluations and also follow models of ideal collaboration. While the field has started to recognize the benefits of automated language evaluation, thus far, this technology has only been used effectively in limited ways (e.g., classifying the topic of conversation or speech acts) [37]. Some research has attempted to address the issue of evaluating dialogue by relying on more shallow measures, like participation, to trigger feedback. Unfortunately, these approaches make it difficult to give students feedback on how to contribute, which may ultimately be more valuable. Computational linguistics facilities, like Coh-Metrix and the Linguistic Inquiry and Word Count (LIWC) tool, could be used to alleviate some of the burdens of capturing these important processes. Additionally, systems that are based on underlying cognitive frameworks of knowledge construction have the advantage of being applicable in diverse contexts.
The present findings suggest that these systems have the capability of identifying linguistic features beneficial for knowledge construction on multiple levels, including individual learners and overall collaborative group interaction. Information gleaned from such analyses could be useful for those pursuing CSCL and collaborative learning-focused ITSs. For instance, a system could provide accurate real-time support for learners using an interface that delivers suggestions via a simple pop-up window or a more sophisticated intelligent agent. However, the value of such enhancements awaits future work and empirical testing.
Acknowledgments. This research was supported by the National Science Foundation (BCS 0904909, DRK-12-0918409), the Institute of Education Sciences (R305G020018, R305A080589), The Gates Foundation, U.S. Department of Homeland Security (Z934002/UTAA08-063), and the Army Research Institute (W5J9CQ12C0043). Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of NSF.
References
1. Graesser, A.C., Foltz, P., Rosen, Y., Forsyth, C., Germany, M.: Challenges of Assessing Collaborative Problem-Solving. In: Csapo, B., Funke, J., Schleicher, A. (eds.) The Nature of Problem Solving. OECD Series (in press)
2. De Wever, B., Schellens, T., Valcke, M., Van Keer, H.: Content Analysis Schemes to Analyze Transcripts of Online Asynchronous Discussion Groups: A Review. Comput. Educ. 46, 6–28 (2006)
3. Lou, Y., Abrami, P.C., d'Apollonia, S.: Small Group and Individual Learning with Technology: A Meta-Analysis. Rev. Educ. Res. 71, 449–521 (2001)
4. Gress, C.L.Z., Fior, M., Hadwin, A.F., Winne, P.H.: Measurement and Assessment in Computer-Supported Collaborative Learning. Comput. Hum. Behav. 26, 806–814 (2010)
5. Rosé, C., Wang, Y.-C., Cui, Y., Arguello, J., Stegmann, K., Weinberger, A., Fischer, F.: Analyzing Collaborative Learning Processes Automatically: Exploiting the Advances of Computational Linguistics in Computer-Supported Collaborative Learning. Int. J. Comput.-Support. Collab. Learn. 3, 237–271 (2008)
6. King, A.: Scripting Collaborative Learning Processes: A Cognitive Perspective. In: Fischer, F., Kollar, I., Mandl, H., Haake, J.M. (eds.) Scripting Computer-Supported Collaborative Learning, pp. 13–37. Springer, Heidelberg (2007)
7. Graesser, A.C., McNamara, D.S.: Computational Analyses of Multilevel Discourse Comprehension. Top. Cogn. Sci. 3, 371–398 (2011)
8. Kirschner, P.A., Ayres, P., Chandler, P.: Contemporary Cognitive Load Theory Research: The Good, the Bad and the Ugly. Comput. Hum. Behav. 27, 99–105 (2011)
9. Noroozi, O., Weinberger, A., Biemans, H.J.A., Mulder, M., Chizari, M.: Argumentation-Based Computer Supported Collaborative Learning (ABCSCL): A Synthesis of 15 Years of Research. Educ. Res. Rev. 7, 79–106 (2012)
10. Baker, M., Hansen, T., Joiner, R., Traum, D.: The Role of Grounding in Collaborative Learning Tasks. In: Dillenbourg, P. (ed.) Collaborative Learning: Cognitive and Computational Approaches, pp. 31–36. Emerald Group Publishing Limited, Bingley (1999)
11. Fiore, S., Schooler, J.: Process Mapping and Shared Cognition: Teamwork and the Development of Shared Problem Models. In: Salas, E., Fiore, S. (eds.) Team Cognition: Understanding the Factors that Drive Process and Performance, pp. 133–152. American Psychological Association, Washington, DC (2004)
12. Fiore, S.M., Rosen, M.A., Smith-Jentsch, K.A., Salas, E., Letsky, M., Warner, N.: Toward an Understanding of Macrocognition in Teams: Predicting Processes in Complex Collaborative Contexts. Hum. Factors 52, 203–224 (2010)
13. Dillenbourg, P., Traum, D.: Sharing Solutions: Persistence and Grounding in Multimodal Collaborative Problem Solving. J. Learn. Sci. 15, 121–151 (2006)
14. Hou, H.-T., Wu, S.-Y.: Analyzing the Social Knowledge Construction Behavioral Patterns of an Online Synchronous Collaborative Discussion Instructional Activity Using an Instant Messaging Tool: A Case Study. Comput. Educ. 57, 1459–1468 (2011)
15. Yoo, J., Kim, J.: Can Online Discussion Participation Predict Group Project Performance? Investigating the Roles of Linguistic Features and Participation Patterns. Int. J. Artif. Intell. Educ., 1–25 (2014)
16. Murray, T., Woolf, B.P., Xu, X., Shipe, S., Howard, S., Wing, L.: Supporting Social Deliberative Skills in Online Classroom Dialogues: Preliminary Results Using Automated Text Analysis. In: Cerri, S.A., Clancey, W.J., Papadourakis, G., Panourgia, K. (eds.) ITS 2012. LNCS, vol. 7315, pp. 666–668. Springer, Heidelberg (2012)
17. Webb, N.M.: Peer Interaction and Learning in Small Groups. Int. J. Educ. Res. 13, 21–39 (1989)
18. Van der Pol, J., Admiraal, W., Simons, P.R.J.: The Affordance of Anchored Discussion for the Collaborative Processing of Academic Texts. Int. J. Comput.-Support. Collab. Learn. 1, 339–357 (2006)
19. Scholand, A.J., Tausczik, Y.R., Pennebaker, J.W.: Assessing Group Interaction with Social Language Network Analysis. In: Chai, S.-K., Salerno, J.J., Mabry, P.L. (eds.) SBP 2010. LNCS, vol. 6007, pp. 248–255. Springer, Heidelberg (2010)
20. D'Mello, S., Dowell, N., Graesser, A.C.: Cohesion Relationships in Tutorial Dialogue as Predictors of Affective States. In: Dimitrova, V., Mizoguchi, R., du Boulay, B., Graesser, A. (eds.) AIED 2009, pp. 9–16. IOS Press, Amsterdam (2009)
21. Mairesse, F., Walker, M.A.: Towards Personality-Based User Adaptation: Psychologically Informed Stylistic Language Generation. User Model. User-Adapt. Interact. 20, 227–278 (2010)
22. Newman, M.L., Pennebaker, J.W., Berry, D.S., Richards, J.M.: Lying Words: Predicting Deception from Linguistic Styles. Pers. Soc. Psychol. Bull. 29, 665–675 (2003)
23. Tausczik, Y.R., Pennebaker, J.W.: The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods. J. Lang. Soc. Psychol. 29, 24–54 (2010)
24. Hancock, J.T., Woodworth, M.T., Porter, S.: Hungry Like the Wolf: A Word-Pattern Analysis of the Language of Psychopaths. Leg. Criminol. Psychol. 18, 102–114 (2013)
25. Kintsch, W.: Comprehension: A Paradigm for Cognition. Cambridge University Press, Cambridge (1998)
26. Snow, C.E.: Reading for Understanding: Toward a Research and Development Program in Reading Comprehension. RAND Corporation, Santa Monica (2002)
27. Graesser, A.C., McNamara, D.S., Louwerse, M.M., Cai, Z.: Coh-Metrix: Analysis of Text on Cohesion and Language. Behav. Res. Methods Instrum. Comput. 36, 193–202 (2004)
28. McNamara, D.S., Graesser, A.C., McCarthy, P.M., Cai, Z.: Automated Evaluation of Text and Discourse with Coh-Metrix. Cambridge University Press, Cambridge (2014)
29. Graesser, A.C., McNamara, D.S., Kulikowich, J.M.: Coh-Metrix: Providing Multilevel Analyses of Text Characteristics. Educ. Res. 40, 223–234 (2011)
30. McNamara, D.S., Crossley, S.A., McCarthy, P.M.: Linguistic Features of Writing Quality. Writ. Commun. 27, 57–86 (2010)
31. Pinheiro, J.C., Bates, D.M.: Mixed-Effects Models in S and S-PLUS. Springer, Heidelberg (2000)
32. Bates, D., Maechler, M., Bolker, B., Walker, S.: lme4: Linear Mixed-Effects Models Using Eigen and S4 (2013)
33. Clark, H.H., Brennan, S.E.: Grounding in Communication. In: Resnick, L.B., Levine, J.M., Teasley, S.D. (eds.) Perspectives on Socially Shared Cognition, pp. 127–149. American Psychological Association, Washington, DC (1991)
34. Pickering, M.J., Garrod, S.: Toward a Mechanistic Psychology of Dialogue. Behav. Brain Sci. 27, 169–190; discussion 190–226 (2004)
35. Fischer, F., Bruhn, J., Gräsel, C., Mandl, H.: Fostering Collaborative Knowledge Construction with Visualization Tools. Learn. Instr. 12, 213–232 (2002)
36. Stahl, G.: From Individual Representations to Group Cognition. In: Stahl, G. (ed.) Studying Virtual Math Teams, pp. 57–73. Springer, US (2009)
37. Diziol, D., Walker, E., Rummel, N., Koedinger, K.R.: Using Intelligent Tutor Technology to Implement Adaptive Support for Student Collaboration. Educ. Psychol. Rev. 22, 89–102 (2010)
... By combining student collaboration with the cognitive support provided in the ITS, students may be able to more effectively construct knowledge to both avoid and overcome errors when they occur and effectively use the support provided through hints. Although ITSs have a strong history of modelling student learning to provide individualized cognitive support, much of the collaborative support is still fixed within these systems (Harsley et al., 2016;Rummel et al., 2012) with a limited number of ITSs exploiting the data collected to provide adaptive social support (Dowell, Cade, Tausczik, Pennebaker, & Graesser, 2014;Walker et al., 2014). In this section, we briefly review how dialogues and gaze data have been used to aid our understanding of collaborative learning processes and outcomes. ...
... When analysing students' speech and chats, assessment can range from surface-level features to in-depth analysis of the dialogue content. Many previous ACLS systems have used shallow indicators from dialogue to support student collaborations, such as the number of student utterances (Dowell et al., 2014; Rosatelli & Self, 2004), the use of sentence openers (Baker & Lund, 1997; McManus & Aiken, 2016), or particular sequences of dialogue actions (e.g. the use of a question mark or dialogue talk moves; Adamson & Rosé, 2012). ...
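Shallow indicators of this kind are cheap to compute directly from chat logs. A minimal stdlib-only sketch, with a hypothetical opener list and toy turns (none of these names or lists come from the systems cited above):

```python
from collections import Counter

# Hypothetical sentence openers of the kind used in structured chat
# interfaces; the exact list here is an illustrative assumption.
OPENERS = ("I think", "I agree", "I disagree", "Why", "Because")

def shallow_indicators(chat):
    """Count surface-level features from a list of (speaker, utterance) turns."""
    utterances_per_student = Counter(speaker for speaker, _ in chat)
    opener_counts = Counter(
        opener for _, text in chat for opener in OPENERS
        if text.startswith(opener)
    )
    questions = sum(1 for _, text in chat if "?" in text)
    return utterances_per_student, opener_counts, questions

chat = [
    ("A", "I think the answer is 42."),
    ("B", "Why do you think that?"),
    ("A", "Because the text says so."),
]
counts, openers, questions = shallow_indicators(chat)
```

Counts like these can feed simple participation dashboards, but they carry no information about the content of what was said, which motivates the deeper discourse analyses discussed in this paper.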
Article
Full-text available
When students are working collaboratively and communicating verbally in a technology‐enhanced environment, the system cannot track what collaboration is happening outside of the technology, making it difficult to fully assess the students' collaboration and adapt accordingly. In this article, we propose using gaze measures as a proxy for cognitive processes to achieve collaboration awareness. Specifically, we use Granger causality to analyse the causal relationships between collaborative and individual gaze measures from students working on a fractions intelligent tutoring system, and the influence that the students' dialogue, prior knowledge, or success has on these relationships. We found that collaborative gaze patterns drive individual focus in pairs with high posttest scores and when they are engaged in problem-solving dialogues, but the opposite holds for low-performing students. Our work adds to the literature by extending the correlational relationships between individual and collaborative gaze measures to causal relationships, and suggests indicators that can be used within an adaptive system.
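A full Granger analysis fits nested autoregressive models and compares them with an F-test; as a rough stdlib-only illustration of the directional intuition only (not the actual test used in the article above), one can compare lagged cross-correlations in the two directions on toy series:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_influence(x, y, lag=1):
    """Correlation between x at time t-lag and y at time t: a crude,
    directional proxy for 'x leads y' (not a full Granger test)."""
    return pearson(x[:-lag], y[lag:])

# Toy series: y echoes x one step later, so x -> y should dominate y -> x.
x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
y = [0] + x[:-1]                   # y[t] = x[t-1]
forward = lagged_influence(x, y)   # x leading y
backward = lagged_influence(y, x)  # y leading x
```

With the constructed series, the forward lagged correlation is perfect while the backward one is not, which is the asymmetry a real Granger test would quantify with model comparison.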
... Discussion boards have become a relatively common pedagogical learning tool, often associated with desirable learning outcomes, including collaborative meaning-making and knowledge construction [1][2][3][4][5][6]. The structure of discussion boards provides several unique communication features to online learning environments, including asynchronous communication, storage of exchanged knowledge, and tree-like structures of topics, also called threads [7]. ...
... Ströbel et al. [178] applied linear mixed-effects models to study significant relationships between L1 complexity and L2 complexity for lexical and syntactic measures. Dowell et al. [179] investigated the linguistic patterns of students' group chats, within an online collaborative learning exercise, on five discourse dimensions (narrativity, deep cohesion, referential cohesion, syntactic simplicity, word concreteness) extracted using Coh-Metrix. The results of linear mixed-effects models indicated that students who engaged in deeper cohesive integration and generated more complicated syntactic structures performed significantly better. ...
Preprint
Full-text available
Reading comprehension, which has been defined as gaining an understanding of written text through a process of translating graphemes into meaning, is an important academic skill. Other language learning skills (writing, speaking, and listening) are all connected to reading comprehension. Several measures have been proposed by researchers to automate the assessment of comprehension skills for second language (L2) learners, especially English as a Second Language (ESL) and English as a Foreign Language (EFL) learners. However, current methods measure particular skills without analysing the impact of reading frequency on comprehension skills. In this dissertation, we show how different skills can be measured and scored automatically. We also demonstrate, using example experiments on multiple forms of learners' responses, how frequent reading practice can affect the variables of multimodal skills (reading pattern, writing, and oral fluency). This thesis comprises five studies. The first and second studies are based on eye-tracking data collected from EFL readers in repeated reading (RR) sessions. The third and fourth studies evaluate free-text summaries written by EFL readers in repeated reading sessions. The fifth and last study, described in the sixth chapter of the thesis, evaluates recorded oral summaries produced by EFL readers in repeated reading sessions. In a nutshell, through this dissertation we show that learners' multimodal skills can be assessed to measure their comprehension skills, as well as to measure the effect of repeated readings on these skills over time, by finding significant features and by applying machine learning techniques in combination with statistical models such as LMER.
... Coh-Metrix incorporates automated computational methods of NLP, such as syntactic parsing and cohesion computation, to capture language characteristics at the word-level, sentence-level, and deeper levels of discourse. Coh-Metrix provides useful insights into learners' affective, social, and cognitive processes in a variety of digital learning environments (Choi et al., 2018;D'Mello & Graesser, 2012;Dowell et al., 2014;Graesser et al., 2011;Graesser et al., 2018;McNamara & Graesser, 2012). Coh-Metrix has been extensively validated through more than 150 published studies, which have demonstrated that Coh-Metrix indices can be used to detect subtle differences in text and discourse (Graesser, 2011;Graesser et al., 2011;McNamara et al., 2006;McNamara et al., 2014). ...
Article
Full-text available
Over the last decade, psychological interventions, such as the values affirmation intervention, have been shown to alleviate the male-female performance difference when delivered in the classroom; however, attempts to scale the intervention have been less successful. This study provides unique evidence on this issue by reporting the observed differences between two randomized controlled implementations of the values affirmation intervention: (a) a successful in-class implementation and (b) an unsuccessful online implementation at scale. Specifically, we use natural language processing to explore the discourse features that characterize successful female students' values affirmation essays, to gain insight into the underlying mechanisms that contribute to the beneficial effects of the intervention. Our results revealed that linguistic dimensions related to aspects of cohesion and affective, cognitive, temporal, and social orientation independently distinguished between males and females, as well as between more and less effective essays. We discuss implications for the pipeline from theory to practice and for psychological interventions.
... Current research on adaptive systems has long been driven by pre-defined characteristics that represent individuals' mental models for undertaking certain learning needs (Dowell, Cade, Tausczik, Pennebaker, & Graesser, 2014; Hung, Chang, & Lin, 2016). For instance, learners' states of emotion and cognition have been extensively utilized as criteria in the design of adaptive systems (Santos, 2016), where differences in the learning characteristics and preferences of individuals can be attributed to differences in the formation of their mental model capacity to undertake a certain behavior (Sarsam & Al-Samarraie, 2018a, 2018b). ...
Article
Full-text available
Individual preferences for learning environments can be linked to a specific behavior. The tendency of such behavior can somehow be associated with an individual’s ability to cognitively engage in the learning process without being distracted by other stimuli. An online continuous adaptation mechanism (OCAM) of learning contents was developed in order to regulate the presentation of learning contents based on changes in the learner’s aptitude level. This was claimed to stimulate a better cognitive and emotional response among learners, thus stimulating their engagement. A total of 41 students (36 male and 5 female; age 20–25 years) participated in this study. The results revealed that learners’ levels of concentration and cognitive load were positively influenced by the OCAM, which significantly increased their engagement. Our findings can be used to inform designers and developers of online learning systems about the importance of regulating the presentation of learning contents according to the aptitude level of individual learners. The proposed OCAM can improve learners’ ability to process specific information meaningfully and make the inferences necessary for understanding the learning content.
... We therefore computed a composite score of formality that integrated the five major dimensions of Coh-Metrix, wherein five dimensions were weighted equally [15]. This formality measure has been used to explore learners' social and cognitive processes in MOOCs [10,11,19], and has proven useful in building an understanding of individuals behavior in digital learning environments more broadly [9]. ...
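A composite of this kind can be sketched with the standard library. The equal weighting follows the excerpt above, but the exact sign convention (reversing narrativity, syntactic simplicity, and word concreteness so that "formal" means cohesive, syntactically complex, and abstract) is an assumption for illustration, as are the toy scores:

```python
from statistics import mean, pstdev

def zscore(values):
    """Standardize a list of values (population standard deviation)."""
    mu, sd = mean(values), pstdev(values)
    return [(v - mu) / sd for v in values]

def formality(texts):
    """Equally weighted composite of five Coh-Metrix-style dimensions.
    `texts` maps text id -> dict of the five dimension scores."""
    ids = list(texts)
    signs = {
        "deep_cohesion": 1, "referential_cohesion": 1,
        "narrativity": -1, "syntactic_simplicity": -1,
        "word_concreteness": -1,
    }
    z = {d: dict(zip(ids, zscore([texts[i][d] for i in ids]))) for d in signs}
    return {i: mean(s * z[d][i] for d, s in signs.items()) for i in ids}

texts = {  # hypothetical dimension scores for three texts
    "a": {"deep_cohesion": 2.0, "referential_cohesion": 2.0,
          "narrativity": 0.0, "syntactic_simplicity": 0.0, "word_concreteness": 0.0},
    "b": {"deep_cohesion": 1.0, "referential_cohesion": 1.0,
          "narrativity": 1.0, "syntactic_simplicity": 1.0, "word_concreteness": 1.0},
    "c": {"deep_cohesion": 0.0, "referential_cohesion": 0.0,
          "narrativity": 2.0, "syntactic_simplicity": 2.0, "word_concreteness": 2.0},
}
f = formality(texts)
```

Under this convention the cohesive, syntactically complex text scores highest and the narrative, concrete text lowest, with the average text at zero.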
Conference Paper
There has been limited research on how perceptions of socioeconomic status (SES) and opinion difference could influence peer feedback in Massive Open Online Courses (MOOCs). Using social comparison theory [12], we investigated the influence of ability- and opinion-related factors on peer feedback text in a data science MOOC. Perceived SES of peers and the formality of written responses were used as the ability-related factors, while agreement between learners represented the opinion-related factor. We focused on understanding the behaviors of those learners who are most prevalent in MOOCs: those from high-SES countries. Through two studies, we found a strong and repeated influence of agreement on affect and formality in feedback to peers. While a mediation effect of perceived SES was found, a significant effect of formality was not. This work contributes to an understanding of how social comparison theory can be operationalized in online peer writing environments.
... Current educational research suggests collaborative learning or group-based learning increases the learning performance of a group as well as individual learning outcomes [147] [148]. In a collaborative learning environment, students learn in groups via interactions with each other by asking questions, explaining and justifying their opinions, explaining their reasoning, and presenting their knowledge [149]. ...
Preprint
This paper provides interested beginners with an updated and detailed introduction to the field of Intelligent Tutoring Systems (ITS). ITSs are computer programs that use artificial intelligence techniques to enhance and personalize automation in teaching. This paper is a literature review that provides the following: First, a review of the history of ITS along with a discussion on the interface between human learning and computer tutors and how effective ITSs are in contemporary education. Second, the traditional architectural components of an ITS and their functions are discussed along with approaches taken by various ITSs. Finally, recent innovative ideas in ITS systems are presented. This paper concludes with some of the author's views regarding future work in the field of intelligent tutoring systems.
Article
Entrepreneurship programming has become a popular choice among higher education students. Entrepreneurial intent is regarded as a strong predictor of entrepreneurial behavior and of the success of entrepreneurial education programs, while ideation is viewed as a key skill needed for successful entrepreneurial behavior. Despite the widespread discussion of entrepreneurial intent in the literature, few studies have reported the actual impact of entrepreneurship education, and more specifically of ideation exercises, on intent. The authors contend that ideation is a key skill, and thus a barrier to entrepreneurial intentions when students lack efficacy surrounding the ideation process. This study examined the impact of a 150-minute divergent activity training session and new venture ideation exercise on entrepreneurial intent in students enrolled in undergraduate entrepreneurship courses. These measures come together in this study to help further explain how entrepreneurship educators can drive more impactful entrepreneurial behavior in students. In this study, entrepreneurial intent significantly increased in students after the brief 150-minute intervention. This study infers that entrepreneurial self-efficacy in ideation skills is critical to increased entrepreneurial intent in college students, and that exercises such as the ones conducted in this study can positively impact entrepreneurial intentions among students. Recommendations for future research and practice are provided.
Chapter
Full-text available
The availability of naturally occurring educational discourse data within educational platforms presents a golden opportunity to make advances in understanding online learner ecologies and enabling new kinds of personalized interventions focused on increasing inclusivity and equity. However, to gain a more substantive view of how peer interaction is influenced by group composition and gender, the learning and computational sciences require new automated methodological approaches that will provide a deeper understanding of learners' communication patterns and interaction dynamics across digitally-mediated group learning platforms. In the current research, we explore learners' discourse by employing Group Communication Analysis (GCA), a computational linguistics methodology for quantifying and characterizing the sociocognitive discourse processes between learners in online interactions. The aim of this study is to use GCA to investigate the influence of gender and gender pairing on students' intra- and interpersonal discourse processes in online environments. Students were randomly assigned to one of three groups of varying gender composition: 75% women, 50% women, or 25% women. Our results suggest that the sociocognitive discourse patterns, as captured by the GCA, reveal deeper-level patterns in the way individuals interact within online environments along gender and group composition lines. The scalability of the methodology opens the door for future research efforts directed towards understanding and creating more equitable and inclusive online peer-interactions.
Technical Report
Full-text available
Description: Fit linear and generalized linear mixed-effects models. The models and their components are represented using S4 classes and methods. The core computational algorithms are implemented using the 'Eigen' C++ library for numerical linear algebra and 'RcppEigen' "glue".
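The core idea behind the random-intercept models that lme4 fits is partial pooling: each group's intercept is shrunk toward the grand mean in proportion to how much data the group contributes. A stdlib-only sketch, assuming the variance components are already known (lme4 estimates them by ML/REML instead), with hypothetical data:

```python
from statistics import mean

def shrunken_intercepts(groups, sigma2_b, sigma2_e):
    """Predicted group intercepts in a random-intercept model, given
    known between-group variance sigma2_b and residual variance
    sigma2_e. `groups` maps group id -> list of observations."""
    grand = mean(y for ys in groups.values() for y in ys)
    preds = {}
    for g, ys in groups.items():
        n = len(ys)
        # Shrinkage weight: approaches 1 as the group contributes more data.
        w = n * sigma2_b / (n * sigma2_b + sigma2_e)
        preds[g] = grand + w * (mean(ys) - grand)
    return preds

# Hypothetical data: a two-observation group and a one-observation group.
groups = {"g1": [4.0, 6.0], "g2": [10.0]}
preds = shrunken_intercepts(groups, sigma2_b=1.0, sigma2_e=1.0)
```

Each predicted intercept lands between the group's own mean and the grand mean, with smaller groups pulled further toward the grand mean; this is the behavior that distinguishes mixed-effects models from fitting each group separately.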
Article
Full-text available
Computer analyses of text characteristics are often used by reading teachers, researchers, and policy makers when selecting texts for students. The authors of this article identify components of language, discourse, and cognition that underlie traditional automated metrics of text difficulty and their new Coh-Metrix system. Coh-Metrix analyzes texts on multiple measures of language and discourse that are aligned with multilevel theoretical frameworks of comprehension. The authors discuss five major factors that account for most of the variance in texts across grade levels and text categories: word concreteness, syntactic simplicity, referential cohesion, causal cohesion, and narrativity. They consider the importance of both quantitative and qualitative characteristics of texts for assigning the right text to the right student at the right time.
Chapter
An assessment of Collaborative Problem Solving (CPS) proficiency was developed by an expert group for the PISA 2015 international evaluation of student skills and knowledge. The assessment framework defined CPS skills by crossing three major CPS competencies with four problem solving processes that were adopted from PISA 2012 Complex Problem Solving to form a matrix of 12 specific skills. The three CPS competencies are (1) establishing and maintaining shared understanding, (2) taking appropriate action, and (3) establishing and maintaining team organization. For the assessment, computer-based agents provide the means to assess students by varying group composition and discourse across multiple collaborative situations within a short period of time. Student proficiency is then measured by the extent to which students respond to requests and initiate actions or communications to advance the group goals. This chapter identifies considerations and challenges in the design of a collaborative problem solving assessment for large-scale testing.
Article
Coh-Metrix is among the broadest and most sophisticated automated textual assessment tools available today. Automated Evaluation of Text and Discourse with Coh-Metrix describes this computational tool, as well as the wide range of language and discourse measures it provides. Section I of the book focuses on the theoretical perspectives that led to the development of Coh-Metrix, its measures, and empirical work that has been conducted using this approach. Section II shifts to the practical arena, describing how to use Coh-Metrix and how to analyze, interpret, and describe results. Coh-Metrix opens the door to a new paradigm of research that coordinates studies of language, corpus analysis, computational linguistics, education, and cognitive science. This tool empowers anyone with an interest in text to pursue a wide array of previously unanswerable research questions.
Conference Paper
We describe a study in which we tested features of online dialogue software meant to scaffold "social deliberative skills." In addition to hand coding of the dialogue text we are exploring the use of automated text analysis tools (LIWC and Coh-Metrix) to identify relevant features, and to be used in a Facilitator Dashboard tool in development.
Article
Although many college courses adopt online tools such as Q&A online discussion boards, there is no easy way to measure or evaluate their effect on learning. As a part of supporting instructional assessment of online discussions, we investigate a predictive relation between characteristics of discussion contributions and student performance. Inspired by existing work on dialogue acts, project-based learning, and instructional analysis of student-generated text in generating predictive models, we make use of dialogue roles, linguistic features, and work patterns. In particular, we model the Q&A dialogue roles that participants play, emotional features covered by LIWC (Linguistic Inquiry and Word Count), cohesiveness of the dialogue, the coherence captured by Coh-Metrix, and temporal patterns of participation. We use a discussion corpus from eight semesters of a computer science course, covering conversations of 173 student groups (370 students). We first remove various kinds of noise from the student discussion data and normalize it. We then apply machine learning techniques and text analysis tools for classifying dialogue features efficiently. The extracted dialogue and participation features are used as predictive variables for project grades. The correlation and regression analyses indicate that the number of answers provided to others, the number of positive emotion expressions, and how early students communicate their problems before the deadline correlate with project grades. This finding confirms the argument that in assessing student online activities, we need to capture how they interact, not just how often they participate.
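The regression step in such analyses reduces, in its simplest single-predictor form, to an ordinary least-squares fit of grades on a dialogue feature. A stdlib-only sketch with hypothetical data (the feature name and values are illustrative, not taken from the corpus above):

```python
def ols_fit(xs, ys):
    """Least-squares slope and intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data: answers provided to others vs. project grade.
answers = [1, 3, 5, 7]
grades = [70.0, 75.0, 80.0, 85.0]
slope, intercept = ols_fit(answers, grades)
```

A positive slope here would correspond to the reported finding that answering peers' questions is associated with higher project grades, though the study itself uses richer multi-feature models.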
Article
This study quantitatively synthesized the empirical research on the effects of social context (i.e., small group versus individual learning) when students learn using computer technology. In total, 486 independent findings were extracted from 122 studies involving 11,317 learners. The results indicate that, on average, small group learning had significantly more positive effects than individual learning on student individual achievement (mean ES = +0.15), group task performance (mean ES = +0.31), and several process and affective outcomes. However, findings on both individual achievement and group task performance were significantly heterogeneous. Through weighted least squares univariate and multiple regression analyses, we found that variability in each of the two cognitive outcomes could be accounted for by a few technology, task, grouping, and learner characteristics in the studies.
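The mean effect sizes reported in such a synthesis are typically computed by weighting each study's effect by the inverse of its sampling variance. A minimal fixed-effect sketch, assuming hypothetical study-level inputs (the numbers below are illustrative, not the synthesized data):

```python
def weighted_mean_es(effects):
    """Fixed-effect weighted mean effect size. Each study contributes
    an (effect_size, sampling_variance) pair and is weighted by
    inverse variance, so more precise studies count more."""
    weights = [1.0 / var for _, var in effects]
    total = sum(weights)
    return sum(w * es for w, (es, _) in zip(weights, effects)) / total

# Hypothetical study-level effects: (effect size, sampling variance).
studies = [(0.15, 0.01), (0.31, 0.04), (0.10, 0.02)]
mean_es = weighted_mean_es(studies)
```

The weighted mean always falls within the range of the individual effects and is pulled toward the most precise studies; heterogeneity tests and the moderator regressions mentioned in the abstract build on the same weights.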