Building Arguments Together or Alone?
Using Learning Analytics to Study the Collaborative Construction
of Argument Diagrams
Irene-Angelica Chounta, Bruce M. McLaren, Maralee Harrell, Carnegie Mellon University
Email: ichounta@cs.cmu.edu, bmclaren@cs.cmu.edu, mharrell@cmu.edu
Abstract: Research has shown that the construction of visual representations may have a
positive effect on cognitive skills, including argumentation. In this paper we present a study
on learning argumentation through computer-supported argument diagramming. We
specifically focus on whether students, when provided with an argument-diagramming tool,
create better diagrams, are more motivated, and learn more when working with other students
or on their own. We use learning analytics to evaluate a variety of student activities: pre and
post questionnaires to explore motivational changes; the argument diagrams created by
students to evaluate richness, complexity and completion; and pre and post knowledge tests to
evaluate learning gains.
Introduction
Having students learn argumentation and critical reasoning through supported argument diagramming holds
great promise, but it is not clear whether working alone or with others is better for learning. In this paper we aim
to assess whether students produce better argument diagrams, are more motivated, and learn more when
working in small collaborative groups versus working individually. Related research has shown that the
construction of visual representations, such as diagrams, may have a positive effect on understanding, deeper
learning and other important cognitive skills, including critical thinking and argumentation (Harrell & Wetzel,
2013). In addition, collaboration has been shown to be beneficial, in particular, for learning to argue and co-
construct knowledge (Scheuer, McLaren, Harrell, & Weinberger, 2011). Thus, providing students with a tool
that can support both argument diagramming and collaboration might result in deeper learning and, potentially,
in helping students become better arguers.
In our research, we use learning analytics to study various aspects of the learning activity such as: a)
pre and post questionnaires to explore motivation; b) the richness, complexity and completion of created
argument diagrams; and c) pre and post knowledge tests to evaluate learning gains. Learning analytics is defined
as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes
of understanding and optimizing learning and the environments in which it occurs” (LAK 2011, Call for
Papers). To explore the advantages that a collaborative software tool [LASAD (Scheuer, Niebuhr, Dragon,
McLaren, & Pinkwart, 2012)] can provide to college-level students, we conducted a small classroom study with
undergraduate students. Prior studies of collaborative argumentation have almost exclusively been lab studies of
short duration. However, our aim was to study the benefits of a full semester’s use of the argumentation tool.
The study compared the practice of groups of 2-4 students who collaborated with LASAD for learning
argumentation to students who worked alone. Our overall goal was to answer the research question: Does
collaborative, computer-supported argument diagramming lead to more motivation, better understanding of
arguments, and better argumentation skills than individual, computer-supported argument diagramming?
Related work
Students need to learn to argue and debate in a well-founded, rational way in order to succeed in a variety of
academic subjects, including science, philosophy, and writing. Argumentation skills are vital to everyday life in
our complex, democratic society. Yet, these skills are often lacking in students and, hence, need to be explicitly
trained and exercised. Philosophy in particular – the topic we study in this paper – is an academic discipline that
emphasizes argumentation skills. A key task for learning about argumentation is argument diagramming in
which students take arguments, read them carefully, and reconstruct the arguments in a graphical form.
Argument diagramming, supported by computer-based tools, plays an important role in introductory philosophy
courses (Harrell, 2016). There are a variety of benefits to argument diagramming, including that the diagrams
make arguments explicit and inspectable. Students typically work individually, not collaboratively, on argument
diagramming exercises. However, we believe that students can benefit from discussing arguments and working
collaboratively as they diagram. Literature in the Learning Sciences has shown the potential benefits of
collaboration, such as the benefits of explanation (Fonseca & Chi, 2011) and co-construction of knowledge
(Webb, 2013) and, in particular, how these benefits have been observed in collaborative argumentation and
learning to argue, when the argumentation process is structured (Weinberger, Stegmann, & Fischer, 2010).
Thus, providing students with a tool that can support both argument diagramming and collaboration might result
in deeper learning about argumentation and, potentially, in helping students become better arguers.
Methodology and study setup
The study took place as part of an “Introduction to Philosophy” course over a four-month semester, with three
intervention sessions throughout the semester. The participants were university students (17 - 21 years old) from
various departments (computer science, engineering, social sciences, etc.). The goal of the intervention was to
introduce students to argument diagramming. We studied the practice of 19 students (8 females, 11 males) who
completed the course. The students were assigned to one of two conditions: the experimental (Collaborative)
condition, where 11 students worked in groups of 2-4 members and the control (Individual) condition, where 8
students worked individually. Both conditions had to construct three argument diagrams for three different
theses (e.g., “The Impossibility of Moral Responsibility” by G. Strawson). The participants had to read the
arguments, identify the premises and formulate a conclusion through a diagrammatic representation that reflects
the underlying relations between them. Overall we studied 32 argument diagrams: 21 diagrams from the
individual condition and 11 diagrams from the collaborative condition. The 8 individuals created 3 diagrams
each, resulting in 24 diagrams overall. However, 3 of 24 diagrams were not completed (the participants were
absent). Similarly, 4 groups had to create 3 diagrams each, resulting in 12 diagrams overall. In one case, only
one group member was present for the activity; thus, this diagram was left out of the analysis. The creation of
the diagrams was supported by a web-based argumentation system (LASAD) that allows users to argue in a
structured fashion using graphical representations (Scheuer et al., 2012). LASAD supports both individual and
collaborative use (two or more users working synchronously on the same diagram) and it was designed to
specifically support argument diagramming.
For our study, we used questionnaires to assess motivational aspects adapting 13 questions from the
MSLQ – Motivated Strategies for Learning Questionnaire – instrument (Pintrich & De Groot, 1990) to capture
disposition towards classwork. The questions were rated on a 7-point Likert scale. We also studied potential
learning gains on the basis of pre/post knowledge tests and by evaluating the resulting argument diagrams in
terms of correctness and completeness. Finally, we used activity metrics based on the detailed user actions that
LASAD logs, such as the number of actions a user performs during the activity and time on task. To assess the
size and complexity of the diagrams,
we used metrics such as (1) the number of objects in a diagram (#objects), (2) the number of relations in a
diagram (#relations), (3) the sum of objects and relations, (4) the ratio of relations per object, and (5) the
cyclomatic complexity of the diagram, which is widely used to indicate complexity of software programs. The
cyclomatic complexity is defined as M = E − N + P (McCabe, 1976), where E is the number of edges of the
graph, N is the number of nodes of the graph and P is the number of connected components. We made the
assumption that diagrams can be perceived as algorithmic flowcharts and therefore the cyclomatic complexity
can provide a measure of the diagram’s complexity. The number of objects and relations of a diagram, and their
sum, have been used in other studies as indicators of the size of a diagrammatic representation (Slotte &
Lonka, 1999), while the ratio of relations per object serves as an indicator of the level of detail (Chounta, Hecking,
Hoppe, & Avouris, 2014). We refer to this process of measurement, collection, analysis and reporting of data as
“learning analytics” since its purpose is to provide insight and suggestions of how learning occurs within the
context of collaborative argumentation diagramming (LAK 2011, Call for Papers).
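The diagram metrics described above can be sketched as a short computation. The sketch below is illustrative only: it assumes each diagram is available as a list of node identifiers and a list of relation pairs (this is not LASAD’s actual log format), and it counts connected components P by graph traversal so that the cyclomatic complexity M = E − N + P can be computed directly.

```python
from collections import defaultdict

def diagram_metrics(objects, relations):
    """Compute size and complexity metrics for an argument diagram.

    objects:   list of node ids (premises, conclusions, etc.)
    relations: list of (source, target) pairs connecting nodes
    """
    n, e = len(objects), len(relations)

    # Count connected components P via depth-first traversal of the
    # undirected version of the diagram graph.
    adj = defaultdict(set)
    for s, t in relations:
        adj[s].add(t)
        adj[t].add(s)
    seen, p = set(), 0
    for node in objects:
        if node in seen:
            continue
        p += 1
        stack = [node]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(adj[cur] - seen)

    return {
        "#objects": n,
        "#relations": e,
        "sum": n + e,                        # size of the representation
        "relations_per_object": e / n if n else 0.0,  # level of detail
        "cyclomatic_complexity": e - n + p,  # M = E - N + P (McCabe, 1976)
    }
```

For example, a single-component diagram with three premises each supporting one conclusion has N = 4, E = 3, P = 1, and hence M = 0; cycles or additional cross-links between premises raise M.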
Results
From the analysis of the argument diagrams, shown in Table 1, it was evident that the groups constructed larger
(25% more objects) and more elaborate diagrams (38% more relations) than the individual participants. On
average, the argument diagrams created collaboratively were more detailed (higher ratio of relations/objects)
and more complex (higher cyclomatic complexity) than the diagrams of individuals. Group participants
performed fewer actions in total than the individuals, but they spent more time on the task. (Given the small
number of participants, we acknowledge that inferential statistical analysis would not be meaningful.)
We expected the groups to construct the argument diagrams faster since more people contribute to the
common goal, but this was not confirmed. This might indicate that group participants spent time discussing and
reflecting on the work of their peers. However, these metrics do not necessarily indicate diagrams of better
quality. To that end, the course instructor and a student helper rated the diagrams for correctness and
completeness on a [0, 5] range. The comparison between conditions showed that the diagrams created by groups
were rated higher than those of individuals (Grade_Collaborative = 4.45 > Grade_Individual = 3.875).
Table 1: Diagram-related and user-specific metrics - on average - for the diagrams constructed collaboratively
and for the diagrams constructed by individuals
Condition (N=19)       #objects  #relations  #relations/#objects  cyclomatic complexity  #actions  time on task (min)  actions/min
Collaborative (N=11)   11.27     14.00       1.22                 3.73                   60.30     20.96               2.91
Individual (N=8)       8.40      8.00        0.95                 2.45                   68.33     17.89               3.88
With respect to motivation, as assessed by pre-questionnaires, participants were positively motivated
towards class work, including usefulness and importance of the course. Individuals scored, on average, higher
(Mot_pre_Individual = 5.35, SD_Individual = 0.22) compared to group participants (Mot_pre_Collaborative = 5.18,
SD_Collaborative = 0.44). With respect to the post-questionnaires, participants’ motivation decreased. The picture
was similar for both groups (Mot_post_Collaborative = 4.90, SD_Collaborative = 0.27) and individuals
(Mot_post_Individual = 4.98, SD_Individual = 0.39). The difference in motivation between conditions was
maintained, but the standard deviation increased for
the individual participants (5 out of 8 participants rated motivation lower in post than pre-questionnaire). Both
group and individual participants gave lower ratings to the items referring to the curriculum (e.g., “I liked what I
learned in the class”), indicating that their expectations might not have been met. Participants who worked in groups
maintained the same attitude with respect to giving up and they gave higher ratings to items referring to
perceived personal performance (e.g. “I believe I did very well in this class”). This might indicate that working
in groups made participants feel more confident about their performance.
To evaluate learning gains, participants took knowledge tests before and after the completion of the
study. These tests examined performance on five dimensions (Diagram Quality, Conclusion, Premises,
Connections, Argument Evaluation) and were rated on a [0, 3] range. The knowledge gain was computed as the
difference between pre and post knowledge tests. Table 2 shows the comparison between the performance of the
groups’ participants and the performance of individuals. The participants who worked collaboratively performed
better in the post than in the pre-knowledge tests, attaining a knowledge gain of 0.33 on average. The
participants who worked individually scored similarly in the pre and post-knowledge tests. The participants in
the collaborative condition scored the highest knowledge gain for diagram quality (M = 0.84, SD = 0.29). The
individual condition participants also scored the highest knowledge gain for the diagram quality category but
only about half that of the participants in the collaborative condition (M = 0.428, SD = 0.76). Furthermore, the
individual participants scored the lowest knowledge gain for Argument Evaluation (M= - 0.714, SD = 0.699). In
the same category, the collaborative condition participants scored similarly in the pre and post-knowledge test.
This might be an indication that the collaborative construction of arguments has a deeper effect on students’
understanding of arguments. The difference in scores between the two conditions was not statistically
significant; however, as already noted, this was a study with a small N. As such, we are mostly focused on
pinpointing suggestions of the effect of collaborative argumentation on learning gains and how this could be
further studied.
Table 2. Results of the pre and post knowledge tests, as well as the knowledge gain between the post and pre-
knowledge tests per grading category.
Category              Pre (Collaborative)  Pre (Individual)  Post (Collaborative)  Post (Individual)  Gain (Collaborative)  Gain (Individual)
Diagram quality       1.50                 1.886             2.34                  2.314              0.84                  0.428
Conclusion            2.60                 2.143             2.50                  2.428              -0.10                 0.286
Premises              2.40                 2.857             2.80                  2.714              0.40                  -0.143
Connections           1.10                 1.571             1.50                  1.857              0.40                  0.286
Argument Evaluation   1.40                 2.000             1.50                  1.286              0.10                  -0.714
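As a cross-check of Table 2, the knowledge gain is simply the mean post-test score minus the mean pre-test score per category, and the overall gain is the average across categories. The short sketch below reproduces the collaborative-condition column using the means reported in the table:

```python
# Per-category means for the collaborative condition, as reported in Table 2.
pre = {"Diagram quality": 1.5, "Conclusion": 2.6, "Premises": 2.4,
       "Connections": 1.1, "Argument Evaluation": 1.4}
post = {"Diagram quality": 2.34, "Conclusion": 2.5, "Premises": 2.8,
        "Connections": 1.5, "Argument Evaluation": 1.5}

# Knowledge gain per grading category: post-test mean minus pre-test mean.
gain = {cat: round(post[cat] - pre[cat], 2) for cat in pre}

# Overall gain, averaged across the five categories (reported as 0.33).
overall = round(sum(gain.values()) / len(gain), 2)
```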
Discussion
In this paper we presented a study of the use of computers for learning argumentation through argument
diagramming. Previous research has shown the importance of argument diagramming in argumentation learning
(Harrell & Wetzel, 2013). Prior research has focused on computer-supported argumentation and the benefits of
computer-mediated collaborative argumentation (Scheuer, Loll, Pinkwart, & McLaren, 2010). We specifically
focused on whether students, when provided with an argument-diagramming tool, create better diagrams, are
more motivated, and learn more when working with other students or on their own. Our basic research question
was: Does collaborative, computer-supported argument diagramming lead to more motivation, better
understanding of arguments, and better argumentation skills than individual, computer-supported argument
diagramming? To that end, we carried out a preliminary study where 19 undergraduate students used a software
tool to construct diagrams based on given (written) arguments. The students were divided into two conditions:
those who worked collaboratively in small groups of 2-4 people and those who worked individually. To analyze
the activity, we used questionnaires to explore motivational aspects, and we used the argument diagrams created
by students, together with knowledge tests, to evaluate learning gains.
The analysis revealed that participants were positively motivated towards the class before the study but
their motivation dropped after its completion. Both groups and individual participants indicated a loss of
motivation from pre to post-questionnaires on items that referred to curriculum. The drop in motivation might
reflect a drop in interest about the overall course and not the argument diagramming, per se. Participants who
collaborated in groups indicated higher motivation on perceived personal performance (e.g. “I believe I did very
well in this class”) in contrast to individuals, and they maintained the same attitude with respect to giving up
when work was uninteresting (“Even when study materials are dull and uninteresting, I keep working until I
finish”). This may be an indication that collaborative work made participants feel confident about their
performance. The collaboratively created argument diagrams tended to be larger and more complex, and were
graded higher than the ones created by individuals. The participants in the collaborative condition also attained higher
learning gains from pre to post-knowledge test.
Although this study was relatively small, we believe it provides insight on how to support
argumentation learning through collaborative construction of diagrammatic representations. The study suggests
that collaboration empowered participants with confidence and feelings of goal achievement. However, as
mentioned, these results are only suggestive, due to the small number of participants. Furthermore we focused
only on the activity that took place within the shared workspace but did not analyze the communication (i.e.,
chat messages) between group members. Additionally, since this was only preliminary research aimed at
studying the effect of the tool’s use, we focused on the activity of students and did not study the role of the
teacher. In future work, we plan to carry out studies with more participants and to examine the use of the
collaborative tool in various learning designs, for example teaching argumentation through confrontation or
supported by game features.
References
Chounta, I.-A., Hecking, T., Hoppe, H. U., & Avouris, N. (2014). Two make a network: using graphs to assess
the quality of collaboration of dyads. In CYTED-RITOS International Workshop on Groupware (pp.
53–66). Springer.
Fonseca, B., & Chi, M. T. H. (2011). The self-explanation effect: A constructive learning activity. The
Handbook of Research on Learning and Instruction, 270–321.
Harrell, M. (2016). What Is the Argument?: An Introduction to Philosophical Argument and Analysis. MIT Press.
Harrell, M., & Wetzel, D. (2013). Improving first-year writing using argument diagramming. In Proc. of the
35th Annual Conf. of the Cognitive Science Society (pp. 2488–2493).
LAK 2011, Call for Papers. 1st International Conference on Learning Analytics and Knowledge 2011 |
Connecting the technical, pedagogical, and social dimensions of learning analytics. Retrieved from
https://tekri.athabascau.ca/analytics/
McCabe, T. J. (1976). A complexity measure. IEEE Transactions on Software Engineering, (4), 308–320.
Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom
academic performance. Journal of Educational Psychology, 82(1), 33.
Scheuer, O., Loll, F., Pinkwart, N., & McLaren, B. M. (2010). Computer-supported argumentation: A review of
the state of the art. International Journal of Computer-Supported Collaborative Learning, 5(1), 43–
102.
Scheuer, O., McLaren, B. M., Harrell, M., & Weinberger, A. (2011). Will structuring the collaboration of
students improve their argumentation? In Artificial Intelligence in Education (pp. 544–546). Springer.
Scheuer, O., Niebuhr, S., Dragon, T., McLaren, B. M., & Pinkwart, N. (2012). Adaptive support for graphical
argumentation-the LASAD approach. IEEE Learning Technology Newsletter, 14(1), 8–11.
Slotte, V., & Lonka, K. (1999). Spontaneous concept maps aiding the understanding of scientific concepts.
International Journal of Science Education, 21(5), 515–531.
Webb, N. M. (2013). Information processing approaches to collaborative learning.
Weinberger, A., Stegmann, K., & Fischer, F. (2010). Learning to argue online: Scripted groups surpass
individuals (unscripted groups do not). Computers in Human Behavior, 26(4), 506–515.
Students often face process losses when learning together via text-based online environments. Computer-supported collaboration scripts can scaffold collaborative learning processes by distributing roles and activities and thus facilitate acquisition of domain-specific as well as domain-general knowledge, such as knowledge on argumentation. Possibly, individual learners would require less additional support or could equally benefit from computer-supported scripts. In this study with a 2 × 2-factorial design (N = 36) we investigate the effects of a script (with versus without) and the learning arrangement (individual versus collaborative) on how learners distribute content-based roles to accomplish the task and argumentatively elaborate the learning material within groups to acquire domain-specific and argumentative knowledge, in the context of a case-based online environment in an Educational Psychology higher education course. A large multivariate interaction effect of the two factors on learning outcomes could be found, indicating that collaborative learning outperforms individual learning regarding both of these knowledge types if it is structured by a script. In the unstructured form, however, collaborative learning is not superior to individual learning in relation to either knowledge type. We thus conclude that collaborative online learners can benefit greatly from scripts reducing process losses and specifying roles and activities within online groups.
Article
This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph-theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The issue of using nonstructured control flow is also discussed. A characterization of nonstructured control graphs is given and a method of measuring the "structuredness" of a program is developed. The last section of this paper deals with a testing methodology used in conjunction with the complexity measure; a testing strategy is defined which dictates that a program must either admit of a certain minimal testing level or be structurally reduced.
Article
This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph-theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The control graphs of several actual Fortran programs are then presented to illustrate the correlation between intuitive complexity and the graph-theoretic complexity. Several properties of the graph-theoretic complexity are then proved which show, for example, that complexity is independent of physical size (adding or subtracting functional statements leaves complexity unchanged) and complexity depends only on the decision structure of a program.
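The graph-theoretic measure described in the two abstracts above is McCabe's cyclomatic complexity, V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control-flow graph. As a minimal sketch (the graph representation and the example program are our own illustration, not taken from the paper), the measure can be computed as follows:

```python
# Sketch of McCabe's cyclomatic complexity V(G) = E - N + 2P,
# computed from a control-flow graph given as an adjacency list
# (dict mapping each node to its list of successor nodes).

def cyclomatic_complexity(graph):
    # Collect every node, including pure sinks that appear only as successors.
    nodes = set(graph)
    for succs in graph.values():
        nodes.update(succs)
    num_edges = sum(len(succs) for succs in graph.values())
    return num_edges - len(nodes) + 2 * _count_components(graph, nodes)

def _count_components(graph, nodes):
    # Count weakly connected components via an undirected traversal.
    neighbours = {n: set() for n in nodes}
    for src, succs in graph.items():
        for dst in succs:
            neighbours[src].add(dst)
            neighbours[dst].add(src)
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(neighbours[node] - seen)
    return components

# A hypothetical while-loop: entry -> test; test branches to body or exit;
# body loops back to test. One decision point, so V(G) should be 2.
cfg = {
    "entry": ["test"],
    "test": ["body", "exit"],
    "body": ["test"],
    "exit": [],
}
print(cyclomatic_complexity(cfg))  # -> 2
```

Note that, as the second abstract states, the result depends only on the decision structure: adding straight-line nodes to the loop body leaves both E and N increased by the same amount, so V(G) is unchanged.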