Building Arguments Together or Alone?
Using Learning Analytics to Study the Collaborative Construction
of Argument Diagrams
Irene-Angelica Chounta, Bruce M. McLaren, Maralee Harrell, Carnegie Mellon University
Email: ichounta@cs.cmu.edu, bmclaren@cs.cmu.edu, mharrell@cmu.edu
Abstract: Research has shown that the construction of visual representations may have a
positive effect on cognitive skills, including argumentation. In this paper we present a study
on learning argumentation through computer-supported argument diagramming. We
specifically focus on whether students, when provided with an argument-diagramming tool,
create better diagrams, are more motivated, and learn more when working with other students
or on their own. We use learning analytics to evaluate a variety of student activities: pre- and post-questionnaires to explore motivational changes; the argument diagrams created by students, to evaluate richness, complexity, and completeness; and pre- and post-knowledge tests to evaluate learning gains.
Introduction
Having students learn argumentation and critical reasoning through supported argument diagramming holds
great promise, but it is not clear whether working alone or with others is better for learning. In this paper we aim
to assess whether students produce better argument diagrams, are more motivated, and learn more when
working in small collaborative groups versus working individually. Related research has shown that the
construction of visual representations, such as diagrams, may have a positive effect on understanding, deeper
learning and other important cognitive skills, including critical thinking and argumentation (Harrell & Wetzel,
2013). In addition, collaboration has been shown to be beneficial, in particular, for learning to argue and co-
construct knowledge (Scheuer, McLaren, Harrell, & Weinberger, 2011). Thus, providing students with a tool
that can support both argument diagramming and collaboration might result in deeper learning and, potentially,
in helping students become better arguers.
In our research, we use learning analytics to study various aspects of the learning activity: a) pre- and post-questionnaires to explore motivation; b) the richness, complexity, and completeness of the created argument diagrams; and c) pre- and post-knowledge tests to evaluate learning gains. Learning analytics is defined
as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes
of understanding and optimizing learning and the environments in which it occurs” (LAK 2011, Call for
Papers). To explore the advantages that a collaborative software tool [LASAD (Scheuer, Niebuhr, Dragon,
McLaren, & Pinkwart, 2012)] can provide to college-level students, we conducted a small classroom study with
undergraduate students. Prior studies of collaborative argumentation have almost exclusively been short-duration lab studies; in contrast, our aim was to study the benefits of a full semester's use of the argumentation tool.
The study compared the practice of groups of 2-4 students who collaborated with LASAD for learning
argumentation to students who worked alone. Our overall goal was to answer the research question: Does
collaborative, computer-supported argument diagramming lead to more motivation, better understanding of
arguments, and better argumentation skills than individual, computer-supported argument diagramming?
Related work
Students need to learn to argue and debate in a well-founded, rational way in order to succeed in a variety of
academic subjects, including science, philosophy, and writing. Argumentation skills are vital to everyday life in
our complex, democratic society. Yet, these skills are often lacking in students and, hence, need to be explicitly
trained and exercised. Philosophy in particular – the topic we study in this paper – is an academic discipline that
emphasizes argumentation skills. A key task for learning about argumentation is argument diagramming in
which students take arguments, read them carefully, and reconstruct the arguments in a graphical form.
Argument diagramming, supported by computer-based tools, plays an important role in introductory philosophy
courses (Harrell, 2016). There are a variety of benefits to argument diagramming, including that the diagrams
make arguments explicit and inspectable. Students typically work individually, not collaboratively, on argument
diagramming exercises. However, we believe that students can benefit from discussing arguments and working
collaboratively as they diagram. Literature in the Learning Sciences has shown the potential benefits of
collaboration, such as explanation (Fonseca & Chi, 2011) and the co-construction of knowledge (Webb, 2013), and, in particular, how these benefits arise in collaborative argumentation and learning to argue when the argumentation process is structured (Weinberger, Stegmann, & Fischer, 2010).
Thus, providing students with a tool that can support both argument diagramming and collaboration might result
in deeper learning about argumentation and, potentially, in helping students become better arguers.
Methodology and study setup
The study took place as part of an “Introduction to Philosophy” course over a four-month semester, with three
intervention sessions throughout the semester. The participants were university students (17 - 21 years old) from
various departments (computer science, engineering, social sciences, etc.). The goal of the intervention was to
introduce students to argument diagramming. We studied the practice of 19 students (8 females, 11 males) who
completed the course. The students were assigned to one of two conditions: the experimental (Collaborative)
condition, where 11 students worked in groups of 2-4 members, and the control (Individual) condition, where 8
students worked individually. Both conditions had to construct three argument diagrams for three different
theses (e.g., “The Impossibility of Moral Responsibility” by G. Strawson). The participants had to read the arguments, identify the premises, and formulate a conclusion, producing a diagrammatic representation that reflects the underlying relations between them. Overall, we studied 32 argument diagrams: 21 diagrams from the
individual condition and 11 diagrams from the collaborative condition. The 8 individuals created 3 diagrams
each, resulting in 24 diagrams overall. However, 3 of 24 diagrams were not completed (the participants were
absent). Similarly, the 4 groups had to create 3 diagrams each, resulting in 12 diagrams overall. In one case, only
one group member was present for the activity; thus, this diagram was left out of the analysis. The creation of
the diagrams was supported by a web-based argumentation system (LASAD) that allows users to argue in a
structured fashion using graphical representations (Scheuer et al., 2012). LASAD supports both individual and
collaborative use (two or more users working synchronously on the same diagram) and it was designed to
specifically support argument diagramming.
For our study, we used questionnaires to assess motivational aspects, adapting 13 questions from the
MSLQ – Motivated Strategies for Learning Questionnaire – instrument (Pintrich & De Groot, 1990) to capture
disposition towards classwork. The questions were rated on a 7-point Likert scale. We also studied potential
learning gains on the basis of pre/post knowledge tests and by evaluating the resulting argument diagrams in
terms of correctness and completeness. Finally, we used activity metrics based on the logged actions (LASAD records detailed user actions), such as the number of actions a user performed during the activity and time on task. To assess the size and complexity of the diagrams,
we used metrics such as (1) the number of objects in a diagram (#objects), (2) the number of relations in a
diagram (#relations), (3) the sum of objects and relations, (4) the ratio of relations per object, and (5) the
cyclomatic complexity of the diagram, which is widely used to indicate complexity of software programs. The
cyclomatic complexity is defined as M = E − N + P (McCabe, 1976), where E is the number of edges of the graph, N is the number of nodes, and P is the number of connected components. We made the assumption that diagrams can be perceived as algorithmic flowcharts and that cyclomatic complexity can therefore provide a measure of a diagram's complexity. The number of objects and relations of a diagram, and their sum, have been used in other studies as indicators of the size of a diagrammatic representation (Slotte & Lonka, 1999), while the ratio of relations per object serves as an indicator of the level of detail (Chounta, Hecking, Hoppe, & Avouris, 2014). We refer to this process of measurement, collection, analysis, and reporting of data as “learning analytics,” since its purpose is to provide insight into, and suggestions about, how learning occurs within the context of collaborative argument diagramming (LAK 2011, Call for Papers).
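To make these metrics concrete, the following is a minimal sketch, in Python, of how the five diagram metrics could be computed from an exported diagram, treating objects as graph nodes and relations as undirected edges. It is not the authors' analysis code; the function name, the node/edge-list input format, and the use of the networkx library are illustrative assumptions.

import networkx as nx

def diagram_metrics(objects, relations):
    # objects: iterable of object ids; relations: iterable of (source, target) pairs.
    graph = nx.Graph()
    graph.add_nodes_from(objects)
    graph.add_edges_from(relations)
    n = graph.number_of_nodes()                # (1) number of objects
    e = graph.number_of_edges()                # (2) number of relations
    p = nx.number_connected_components(graph)  # P: connected components
    return {
        "objects": n,
        "relations": e,
        "size": n + e,                                # (3) sum of objects and relations
        "relations_per_object": e / n if n else 0.0,  # (4) level of detail
        "cyclomatic_complexity": e - n + p,           # (5) M = E - N + P (McCabe, 1976)
    }

# Hypothetical diagram: five objects (four premises, one conclusion), six relations.
print(diagram_metrics(
    ["P1", "P2", "P3", "P4", "C"],
    [("P1", "C"), ("P2", "C"), ("P3", "P2"), ("P4", "P2"), ("P1", "P2"), ("P3", "C")],
))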
Results
From the analysis of the argument diagrams, shown in Table 1, it was evident that the groups constructed larger
(25% more objects) and more elaborate diagrams (38% more relations) than the individual participants. On
average, the argument diagrams created collaboratively were more detailed (higher ratio of relations/objects)
and more complex (higher cyclomatic complexity) than the diagrams of individuals. Group participants
performed fewer actions in total than the individuals, but they spent more time on the task. (Given the small number of participants, we acknowledge that inferential statistical analysis would not be meaningful.)
We expected the groups to construct the argument diagrams faster since more people contribute to the
common goal, but this was not confirmed. This might indicate that group participants spent time discussing and
reflecting on the work of their peers. However, these metrics do not necessarily indicate diagrams of better
quality. To that end, the course instructor and a student helper rated the diagrams for correctness and
completeness on a [0, 5] range. The comparison between conditions showed that the diagrams created by groups
were rated higher than those of individuals (mean grade: 4.45 for the collaborative condition vs. 3.875 for the individual condition).
Table 1: Diagram-related metrics (#objects, #relations, #relations/#objects, cyclomatic complexity) and user-specific metrics (#actions, time on task, actions per minute), on average, for the diagrams constructed collaboratively and by individuals (N=19 students overall)

Condition            | #objects | #relations | #relations/#objects | cyclomatic complexity | #actions | time on task (min) | actions/min
Collaborative (N=11) |    11.27 |      14.00 |                1.22 |                  3.73 |    60.30 |              20.96 |        2.91
Individual (N=8)     |     8.40 |       8.00 |                0.95 |                  2.45 |    68.33 |              17.89 |        3.88
With respect to motivation, as assessed by the pre-questionnaires, participants were positively motivated towards class work, including the usefulness and importance of the course. Individuals scored higher on average (M = 5.35, SD = 0.22) than group participants (M = 5.18, SD = 0.44). With respect to the post-questionnaires, participants' motivation decreased; the picture was similar for both groups (M = 4.90, SD = 0.27) and individuals (M = 4.98, SD = 0.39). The difference in motivation between conditions was maintained, but the standard deviation increased for the individual participants (5 out of 8 participants rated motivation lower in the post- than in the pre-questionnaire). Both group and individual participants gave lower ratings to items referring to the curriculum (e.g., “I liked what I learned in the class”), indicating that their expectations might not have been met. Participants who worked in groups maintained the same attitude with respect to giving up, and they gave higher ratings to items referring to perceived personal performance (e.g., “I believe I did very well in this class”). This might indicate that working in groups made participants feel more confident about their performance.
To evaluate learning gains, participants took knowledge tests before and after the completion of the
study. These tests examined performance on five dimensions (Diagram Quality, Conclusion, Premises,
Connections, Argument Evaluation) and were rated on a [0, 3] range. The knowledge gain was computed as the difference between the post- and pre-knowledge test scores (a worked example follows Table 2). Table 2 shows the comparison between the performance of the
groups’ participants and the performance of individuals. The participants who worked collaboratively performed
better in the post than in the pre-knowledge tests, attaining a knowledge gain of 0.33 on average. The
participants who worked individually scored similarly in the pre- and post-knowledge tests. The participants in the collaborative condition attained their highest knowledge gain in Diagram Quality (M = 0.84, SD = 0.29). The individual-condition participants also attained their highest knowledge gain in Diagram Quality, but it was only about half that of the collaborative condition (M = 0.428, SD = 0.76). Furthermore, the individual participants showed their lowest knowledge gain, in fact a loss, in Argument Evaluation (M = −0.714, SD = 0.699). In the same category, the collaborative-condition participants scored similarly in the pre- and post-knowledge tests.
This might be an indication that the collaborative construction of arguments has a deeper effect on students’
understanding of arguments. The difference in scores between the two conditions was not statistically
significant; however, as already noted, this was a study with a small N. As such, we focus mainly on identifying suggestive evidence of the effect of collaborative argumentation on learning gains and on how this could be further studied.
Table 2. Results of the pre- and post-knowledge tests, as well as the knowledge gain (post minus pre) per grading category

Category            | Pre: Collab. | Pre: Indiv. | Post: Collab. | Post: Indiv. | Gain: Collab. | Gain: Indiv.
Diagram quality     |         1.50 |       1.886 |          2.34 |        2.314 |          0.84 |        0.428
Conclusion          |         2.60 |       2.143 |          2.50 |        2.428 |         -0.10 |        0.286
Premises            |         2.40 |       2.857 |          2.80 |        2.714 |          0.40 |       -0.143
Connections         |         1.10 |       1.571 |          1.50 |        1.857 |          0.40 |        0.286
Argument Evaluation |         1.40 |       2.000 |          1.50 |        1.286 |          0.10 |       -0.714
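As a worked example (illustrative only, not the authors' analysis script), the knowledge-gain column in Table 2 is simply the post-test mean minus the pre-test mean; the following Python snippet reproduces the collaborative-condition gains from the table values.

# Illustrative only: per-category knowledge gain as post-test mean minus
# pre-test mean, using the collaborative-condition values from Table 2.
pre = {"Diagram quality": 1.5, "Conclusion": 2.6, "Premises": 2.4,
       "Connections": 1.1, "Argument Evaluation": 1.4}
post = {"Diagram quality": 2.34, "Conclusion": 2.5, "Premises": 2.8,
        "Connections": 1.5, "Argument Evaluation": 1.5}
gains = {category: round(post[category] - pre[category], 2) for category in pre}
print(gains)  # {'Diagram quality': 0.84, 'Conclusion': -0.1, 'Premises': 0.4, ...}

Averaging these per-category gains, (0.84 − 0.10 + 0.40 + 0.40 + 0.10) / 5 ≈ 0.33, matches the average knowledge gain reported above for the collaborative condition.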
Discussion
In this paper we presented a study of the use of computers for learning argumentation through argument
diagramming. Previous research has shown the importance of argument diagramming in argumentation learning
(Harrell & Wetzel, 2013). Prior research has focused on computer-supported argumentation and the benefits of
computer-mediated collaborative argumentation (Scheuer, Loll, Pinkwart, & McLaren, 2010). We specifically
focused on whether students, when provided with an argument-diagramming tool, create better diagrams, are
more motivated, and learn more when working with other students or on their own. Our basic research question
was: Does collaborative, computer-supported argument diagramming lead to more motivation, better
understanding of arguments, and better argumentation skills than individual, computer-supported argument
diagramming? To that end, we carried out a preliminary study where 19 undergraduate students used a software
tool to construct diagrams based on given (written) arguments. The students were divided into two conditions:
those who worked collaboratively in small groups of 2-4 people and those who worked individually. To analyze the activity, we used questionnaires to explore motivational aspects, the argument diagrams created by the students to evaluate their quality, and knowledge tests to evaluate learning gains.
The analysis revealed that participants were positively motivated towards the class before the study but
their motivation dropped after its completion. Both group and individual participants indicated a loss of motivation from pre- to post-questionnaire on items that referred to the curriculum. The drop in motivation might
reflect a drop in interest about the overall course and not the argument diagramming, per se. Participants who
collaborated in groups indicated higher motivation on perceived personal performance (e.g. “I believe I did very
well in this class”) in contrast to individuals, and they maintained the same attitude with respect to giving up
when work was uninteresting (“Even when study materials are dull and uninteresting, I keep working until I
finish”). This may be an indication that collaborative work made participants feel confident about their
performance. The collaboratively created argument diagrams tended to be larger and more complex, and they were graded higher than those created by individuals. The participants in the collaborative condition also attained higher learning gains from pre- to post-knowledge test.
Although this study was relatively small, we believe it provides insight into how to support argumentation learning through the collaborative construction of diagrammatic representations. The study suggests that collaboration gave participants confidence and a feeling of goal achievement. However, as
mentioned, these results are only suggestive, due to the small number of participants. Furthermore, we focused
only on the activity that took place within the shared workspace but did not analyze the communication (i.e.,
chat messages) between group members. Additionally, since this was only preliminary research aimed at
studying the effect of the tool’s use, we focused on the activity of students and did not study the role of the
teacher. In future work, we plan to carry out studies with more participants and to investigate the use of the collaborative tool in various learning designs, for example, teaching argumentation through confrontation or supported by game features.
References
Chounta, I.-A., Hecking, T., Hoppe, H. U., & Avouris, N. (2014). Two make a network: using graphs to assess
the quality of collaboration of dyads. In CYTED-RITOS International Workshop on Groupware (pp.
53–66). Springer.
Fonseca, B., & Chi, M. T. H. (2011). The self-explanation effect: A constructive learning activity. The
Handbook of Research on Learning and Instruction, 270–321.
Harrell, M. (2016). What Is the Argument?: An Introduction to Philosophical Argument and Analysis. MIT Press.
Harrell, M., & Wetzel, D. (2013). Improving first-year writing using argument diagramming. In Proc. of the
35th Annual Conf. of the Cognitive Science Society (pp. 2488–2493).
LAK 2011, Call for Papers. 1st International Conference on Learning Analytics and Knowledge 2011 |
Connecting the technical, pedagogical, and social dimensions of learning analytics. Retrieved from
https://tekri.athabascau.ca/analytics/
McCabe, T. J. (1976). A complexity measure. IEEE Transactions on Software Engineering, SE-2(4), 308–320.
Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom
academic performance. Journal of Educational Psychology, 82(1), 33.
Scheuer, O., Loll, F., Pinkwart, N., & McLaren, B. M. (2010). Computer-supported argumentation: A review of
the state of the art. International Journal of Computer-Supported Collaborative Learning, 5(1), 43–
102.
Scheuer, O., McLaren, B. M., Harrell, M., & Weinberger, A. (2011). Will structuring the collaboration of
students improve their argumentation? In Artificial Intelligence in Education (pp. 544–546). Springer.
Scheuer, O., Niebuhr, S., Dragon, T., McLaren, B. M., & Pinkwart, N. (2012). Adaptive support for graphical
argumentation-the LASAD approach. IEEE Learning Technology Newsletter, 14(1), 8–11.
Slotte, V., & Lonka, K. (1999). Spontaneous concept maps aiding the understanding of scientific concepts.
International Journal of Science Education, 21(5), 515–531.
Webb, N. M. (2013). Information processing approaches to collaborative learning.
Weinberger, A., Stegmann, K., & Fischer, F. (2010). Learning to argue online: Scripted groups surpass
individuals (unscripted groups do not). Computers in Human Behavior, 26(4), 506–515.