Note: This paper has been accepted to the International Journal of Computer-Supported
Collaborative Learning, and will be published in December 2015. The final publication is
available at Springer via http://dx.doi.org/10.1007/s11412-015-9223-1
Investigating the effects of prompts on
argumentation style, consensus and perceived
efficacy in collaborative learning.
Abstract This paper investigates the effects of task-level versus process-level prompts on
levels of perceived and objective consensus, perceived efficacy, and argumentation style in
the context of a computer-supported collaborative learning session using Interactive
Management (IM), a computer-facilitated thought and action mapping methodology. Four
groups of undergraduate psychology students (N = 75) came together to discuss the negative
consequences of online social media usage. Participants in the task-level group received
simple, task-level prompts in relation to the task at hand, whereas the process-level group
received both task-level prompts and more specific, and directed, process-level prompts.
Perceived and objective consensus were measured before the IM session, and were measured
again, along with perceived efficacy of the collaborative learning methodology, after the IM
session. Results indicated that those in the process-level prompt groups scored significantly
higher on perceived consensus and perceived efficacy of the IM methodology after the
session. Analysis of the group dialogue using the Conversational Argument Coding Scheme
revealed significant differences between experimental conditions in the style of
argumentation used, with those in the process-level prompt groups exhibiting a greater range
of argumentation codes. Results are discussed in light of theory and research on instructional
support and facilitation in computer-supported collaborative learning.
Keywords Computer Supported Collaborative Learning · Prompts · Facilitation ·
Consensus · Argumentation
Introduction
Resolving complex scientific and social problems often requires the application of critical,
collaborative and systems thinking skills (Hogan, Harney & Broome, 2015). It has been
suggested that many people have limited critical thinking skills (Kuhn, 2005), and that
collaborative problem-solving and argumentation are rarely optimized in working groups
(Tannen, 1998). Furthermore, from an educational design perspective, third-level education
generally focuses on the development of individual and domain-specific thinking skills,
which do not often transfer well to other domains of enquiry and thus act as a barrier to multi-
disciplinary, collaborative systems thinking. Scholars increasingly recognise the need for the
development of generic, tool-supported, critical, collaborative and systems thinking skills
(Hogan, Harney & Broome, 2014). The challenge of integrating computer support and
collaborative learning, or technology and education, remains an important goal in the learning
sciences, a challenge which, according to Stahl, Koschmann and Suthers (2006), the field
of computer-supported collaborative learning (CSCL) seeks to address.
It has been suggested that the deployment of CSCL technologies in educational contexts and
environments is most effectively advanced through design-based research, which takes the
"group" as its fundamental, “paradigmatic unit of analysis” (Stahl, 2015, p. 1). This entails
“looking at how groups of students interact with various technological artifacts and observing
their meaning-making processes, their enacting of the technologies and their problem solving
as mediated by the technologies.” (Stahl, 2015, p.15). This paper reports on one of the first
experimental demonstrations of the formative impact of prompt style on students' enacting
of, and problem-solving in, a CSCL context using a systems thinking technology, Interactive
Management (IM). Importantly, Asterhan and Schwarz (2010, p.261) noted: “the extent to
which students learn from collaborative activities depends on the depth and the quality of the
dialogue peers engage in.” Further, studies have shown the significant impact of prompting in
scaffolding shared meaning-making in collaborative learning settings (Wen, Looi, & Chen,
2015; Gelmini-Hornsby, Ainsworth, & O'Malley, 2011). The research reported here provides
initial experimental insights into the effects of prompt style on outcomes in the application of
IM, a high-potential, technology-mediated thought and action mapping methodology, in a
face-to-face CSCL context.
Importantly, the CSCL literature highlights that merely bringing groups together to
work on a problem does not guarantee effective collaboration. Successful collaboration
requires the careful design of the learning environment for group interaction and the
provision of instructional support, leadership, facilitation and prompts to promote meaning-
making, problem solving, and consensus among students (Pea, 2004; Strijbos, Kirschner, &
Martens, 2004). While the importance of good facilitation in collaborative learning
environments is often highlighted by expert facilitators (Hmelo-Silver, 2002), there has been
limited experimental research focused on the effects of facilitator prompting styles on CSCL
outcomes. Furthermore, different facilitator prompting styles may have different effects on
different people within a group. Psychological variables, such as trust amongst group
members, may impact on the collaborative efforts of groups. Notably, higher levels of group
trust have been implicated in a range of behaviours associated with more effective
collaboration, including social negotiation, critical thinking and solution finding (Kreijns,
Kirschner, & Jochems, 2002). Similarly, dispositional trust is relevant in collaborative
contexts as those with high dispositional trust generally assume that others are trustworthy,
and presume that trusting others leads to positive outcomes (McKnight, Cummings, &
Chervany, 1998). In light of the above, the current study investigated the effects of task-level
versus process-level prompts, on processes and outcomes in a CSCL environment, while
controlling for dispositional trust as a covariate.
Instructional Support in Collaborative Learning Environments
Collaborative work and collaborative learning are becoming increasingly prevalent within
educational, organisational, and business settings. It has been argued that a team approach to
work is often more suitable for complex tasks than assigning these tasks to one
individual expert (Barron, 2000; Dillenbourg, 1999; Gabelica, Bossche, Segers, & Gijselaers,
2012; Kirschner, 2009). A number of important conclusions have been derived from analyses
of collaborative teams in work and educational settings. For example, when left to their own
devices, teams often fail to reach their full potential, and they may consider collaborative
work to be too time-consuming and thus fail to sustain quality interactions and exchanges
(Dickinson & McIntyre, 1997; Rummel & Spada, 2005). Therefore, it is necessary to provide
collaborative teams with skilled facilitation and instructional support, which includes prompts
designed to sustain quality interactions during the collaborative learning process (Gabelica et
al., 2012). While the literature regarding the benefits of individual-level instructional support
in learning contexts is well established (Gabelica et al., 2012; Hattie & Gan, 2011), less
research has been devoted to the analysis of prompts and facilitation effects in collaborative
learning settings and the specific types of prompts that promote collaborative argumentation
and consensus-building in these settings.
Generally, prompts are used as part of instructional support and scaffolding protocols
(e.g., Stevenson, Hickendorff, Resing, de Boeck, & Heiser, 2013; Gamlem & Munthe,
2014) and come in many forms, including guiding questions, sentence openers, or question
stems which provide learners with hints, clues, suggestions or reminders that help them to
complete a task. Prompts act as scaffolding that support and inform the learning process (Gan
& Hattie, 2014). Prompts may also be considered as “strategy activators” which “induce
productive learning processes” (Berthold, Nückles, & Renkl, 2007, p.566). Prompts may be
used to elicit explanations (Chi, 2000; Chi, deLeeuw, Chiu, & LaVancher, 1994),
elaborations (Brown & Palincsar, 1989) or collaborative thinking aloud (Hogan, 1999).
Prompts have been used across a variety of instructional domains with diverse student
groups, including to increase reflection and knowledge integration in middle school science
students (Davis, 2003), to increase knowledge representation, problem-solving, evaluation,
and monitoring skills in undergraduate information technology students (Ge & Land, 2003),
and to improve the quality of written peer feedback in secondary students' chemistry reports
(Gan & Hattie, 2014). Such feedback and scaffolding protocols can take various forms,
utilizing various forms of prompts, for example: task-level prompts, process-level prompts,
and self-regulatory prompts, amongst others (Hattie & Timperley, 2007). For example, in
their study, Gan and Hattie (2014) provided students with prompts designed to elicit peer
feedback, such as "What other questions can he/she ask about the task?”. These types of
questions provided learners with a type of process-level prompt that facilitated collaborative
enquiry and problem solving. The current study uses similar prompts, modelled on the work
of Hattie and Timperley (2007), and Gan and Hattie (2014).
Notably, many reviews and meta-analyses have demonstrated the benefits of task-
level instructional support for individual learning outcomes (e.g., Alvero, Bucklin, & Austin,
2001; Balcazar, Shupert, Daniels, Mawhinney, & Hopkins, 1989; Denson, 1981; Guzzo,
Jette, & Katzell, 1985; Ilgen, Fisher, & Taylor, 1979; Kluger & DeNisi, 1996; Mento, Steel,
& Karren, 1987; Neubert, 1998). Task-level prompting provides information on how well a
task is being performed. Task-level prompts may focus on, for example, distinguishing
correct from incorrect answers, acquiring more or different information, and building more
surface knowledge (Gabelica et al., 2012; Hattie & Timperley, 2007). However, research
suggests that such task-level prompting does not always have beneficial effects on individual
learners and can result in negative performance effects in some situations (see Kluger &
DeNisi, 1996 for a review).
According to Hattie and Timperley (2007), one of the main shortcomings of task-level
instructional support is that it often does not transfer well to other tasks or problems and is
therefore limited in its value beyond the specific task at hand. One explanation for this is that
when prompts, or other instructional support, are heavily focused on immediate task goals,
individuals may not reflect upon the cognitive strategies involved in the learning or problem-
solving process. For example, in a LOGO-based angle and rotation mathematics task,
Simmons and Cope (1993) found that students (ages 9-11) who were provided with
immediate, visual, task-level feedback by being able to see their rotation on a computer, spent
less time developing strategies for solving the problem, and engaged in more simple trial and
error, than students who performed the task via pen and paper.
Process-level approaches seek to address the shortcomings of task-level approaches.
At the process-level, prompts are used to address processes and strategies necessary to
complete the task (Ketelaar, den Brok, Beijaard & Boshuizen, 2012). Process-level support
targets procedural knowledge and may provide support for error detection, information
searching, and steps for revision of work done (Gan & Hattie, 2014; Gan, 2011). Process-
level prompts have been found to be effective in many domains. For example, Schoenfeld
(1985) found that prompting students to provide justifications for their learning was effective
for knowledge use in mathematical problem solving tasks. Process-level prompts which
promote reflection on learning have also been found to have positive effects on writing-to-
learn tasks (Hübner, Nückles, & Renkl, 2010), teacher education (Harford & MacRuairc,
2008) and e-learning (Krause, Stark, & Mandl, 2009). The design of the task-level and
process-level prompts used in the current study was informed by both the prompting and the
feedback literatures on types of instructional support.
While prompting has been argued to be powerful and effective in shaping team
learning and team performance (Kozlowski & Ilgen, 2006; Woolley, 2009), the application of
these methods to CSCL settings has not been explored as extensively as in individual learning
settings. Importantly, results and insights from studies of instructional support at the
individual level cannot simply be transferred and applied to teams or other collaborative
groups (Gabelica et al., 2012; Barr & Conlon, 1994; Dewett, 2003; Nadler, 1979). For
example, Barr and Conlon (1994) suggest that the unique effects of instructional support in a
team environment may be due to the distribution of prompts among team members, a process
which is dependent on the interaction of a number of individual-level and team-level
variables including: the interaction between team members; the nature and efficacy of group
communication; and individual perceptions of information. The potential discrepancy
between individual and group level support may be especially relevant in the case of process-
level prompting. In a collaborative context, process-level prompting may address individual
and group behaviours, actions and strategies during the course of team learning; however,
research examining the impact of process-level support in teams is still in its infancy
(Gabelica et al., 2012; Hattie & Timperley, 2007). As such, in the current study we
investigated the effects of task-level versus process-level prompting in the context of a group
decision-making process that involves collaborative argumentation, consensus-building and
the development of a systems-based understanding of a common problem.
Prompting, Argumentation, and Group Decision Making
A core focus of the current study is collaborative argumentation. The ability to engage in
dialogue, debate and collaborative argumentation is an essential human skill. The utility of
this skill is evident in many scenarios - an academic engaging in debate, a researcher positing
a theory, a politician lobbying for a policy, an entrepreneur pitching a product idea, to name
but a few. In scenarios such as these, an individual must cite relevant facts, coordinate
reasons and objections in relation to 'factual' claims, respond to rebuttals and counterclaims
in a logical manner, and make full use of their powers of persuasion to convince others of
their position and justify their conclusions (Scheuer, Loll, Pinkwart & McLaren, 2010).
However, argumentation is more than just a method for persuading others; it is also an
essential tool which individuals employ collaboratively in order to arrive at rational decisions
and conclusions that promote the adaptive success of the group. The power of collaborative
learning derives, in part, from its potential to facilitate cognitive coherence in groups (Stahl,
2010). Stahl points out that there currently exist multiple theoretical frameworks, each with
its own model of the influences on collaborative learning. Importantly, in an attempt to
synthesize the major categories of influences in this context - including team knowledge
artifacts, team outcomes, tasks, technology and media, interaction context, culture of
discourse community, individual voices, and individuals' resources and experiences - Stahl
places dialogical interaction at the centre of these influences. This dialogical interaction
represents the means by which learners enter into a collective knowledge-building agency.
Given the importance of such dialogical interaction in collaborative learning settings, it is
imperative that research attend to the means and methods by which such dialogue,
collaborative argumentation and consensus building can be supported and facilitated.
However, as noted above, people often demonstrate limited argumentation skills (Tannen,
1998). This has been documented in various studies which report problems with
argumentation in informal settings (Kuhn, 1991), as well as in specific, professional and
scientific domains (Stark, Puhl & Krausse, 2009). In order to address these limitations, there
is a growing trend in the use of instructional support to enhance argumentation skills,
particularly in collaborative learning settings (Scheuer, Loll, Pinkwart & McLaren, 2010).
Collaborative argumentation is not a simple process whereby individuals provide a series of
reasons and objections in relation to a set of claims - it may involve many and diverse types
of talk that are coordinated in a more or less coherent manner. For example, the
Conversational Argument Coding Scheme (CACS), used in the current study, identifies 16
conversational codes grouped under 5 argumentation categories, with the codes representing
different levels and types of argumentation (see Method section).
Instructional support designed to facilitate dialogic argumentation in collaborative
learning classrooms can take many forms. One commonly used strategy is question asking, a
strategy which has been found to have positive effects on argumentation in both university
and high-school students (Graesser, Person, & Huber, 1993). Graesser et al. refer to question
asking as a fundamental strategy of engaging with learners in collaborative learning settings.
Question asking can serve a number of functions, including: prompting students to check
each other's information, prompting provision of further explanation and encouraging
justification of assertions (Webb, 1995). King (1990), in a sample of undergraduate and
graduate university students taking an education methods course, found that higher-order
questions - including open-answer questions, deep-reasoning questions aimed at causes and
consequences, and goal-oriented questions - are effective in eliciting explanations, in which
justifications may be enclosed. Furthermore, Veerman, Andriessen and Kanselaar (2000), in
the context of teaching about effective pedagogical interactions in a university student sample,
found that asking open questions, as well as questions aimed at inferring knowledge, was
positively associated with argumentation performance, measured by reference to frequency of
information exchange (e.g. checking, challenging, and countering) and constructive activities
(e.g. explaining, evaluating, and summarising). In this way, question asking can be used as a
form of process-level prompt, as questions can be used to move beyond an assessment of the
correctness of a student‟s response, to address the process, strategy or logic used by the
student. As such, question asking, at different levels of complexity is central to our definition
of both task-level and process-level prompts in the current study.
From a technological perspective, researchers have sought to develop computer-
supported tools to both teach and support argumentation. The field of CSCL has, in
particular, been interested in argumentation and how students can benefit from it (Baker,
2003; Schwarz & Glassner, 2003; Andriessen, 2006; Stegmann, Weinberger, & Fischer,
2007; Muller Mirza, Tartas, & Perret-Clermont, 2007). Collaborative argumentation is
viewed as a key way in which students can acquire critical and reflective thinking skills
(Andriessen, 2006). Several researchers have investigated the use of CSCL tools in
supporting argumentation, using tools such as Belvedere (Paolucci, Suthers, & Weiner,
1995), SenseMaker (Bell, 2004), Drew (Baker, Quignard, Lund, & Sejourne, 2003), pro–con
tables (Schwarz & Glassner, 2003), and matrices (Suthers & Hundhausen, 2003). One of the
primary reasons for using these tools is that they provide visual representations of the
thinking and argumentation learners are engaged with, and thus stimulate collaboration and
sharing of ideas (Bell, 2004; Van Bruggen, Boshuizen, & Kirschner, 2003). Furthermore,
these tools often require students to make explicit their assertions, claims and arguments, and
support collaborative consideration of shared ideas, allowing for the recognition of gaps or
contradictions in argument structures (Suthers & Hundhausen, 2003).
Importantly, argumentation tools are used in specific pedagogical contexts; hence
success is determined not only by the specific software tools that are used, but also by the
overall setting in which the software is employed. Inevitably, it is necessary to guide students
in their use of CSCL tools if learning gains at the level of the individual and group are to be
maximised. For this reason, understanding the impact of facilitation and instructional support
on the development of argumentation skills and problem solving in collaborative settings is
critical.
Interactive Management
One CSCL tool which can be used to facilitate problem-solving and collaborative
argumentation skills is Interactive Management (IM). IM is a computer-facilitated thought
and action mapping methodology designed to facilitate group creativity, group problem
solving, group design and collective action in the context of complexity (Warfield &
Cardenas 1994). Interactive Management is designed to facilitate cooperative inquiry and
consensus in relation to a problem. Established as a formal system of facilitation in 1980 after
a developmental phase that started in 1974, IM was designed to assist groups in dealing with
complex issues (see Ackoff, 1981; Argyris, 1982; Deal & Kennedy, 1982; Rittel & Webber,
1973; Simon, 1960). The theoretical constructs that inform IM draw from both behavioural
and cognitive sciences, with a strong basis in general systems thinking. Emphasis is given to
balancing behavioural and technical demands of group work (Broome & Chen, 1992), while
honouring design laws concerning variety, parsimony, and saliency (Ashby, 1958; Boulding,
1966; Miller, 1956).
The process involves a series of steps (see Figure 1). First, a group of people (typically
between 12 and 20) with an interest in resolving a problematic situation comes together
and is asked to compile a set of raw ideas which they feel might have an influence
on the problem in question. Group discussion and voting helps the group to identify the
factors they agree have the most critical impact on the problem. Next, using IM software,
Interpretive Structural Modelling (ISM), each of the critical issues is compared
systematically in pairs by asking the question: “Does issue A significantly influence issue
B?” Unless there is a clear majority consensus that A impacts on B, the relation does not
appear in the final analysis. This process continues until all of the critical issues have been
compared. The ISM software then generates a problematique, which is a graphical
representation of the problem-structure, showing how all the critical problem factors are
interrelated. This consensus-based problematique becomes the catalyst for discussion,
planning of solutions and collective action in response to the problem (Warfield, 2006).
Although Warfield designed IM as a consensus-based problem-solving tool, there remains a
paucity of research investigating the role of facilitation and prompting in an IM systems
thinking environment.
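To make the pairwise structuring step concrete, the following minimal sketch illustrates in
Python how group votes on the question "Does issue A significantly influence issue B?"
could be aggregated into the binary influence matrix underlying a problematique. This is an
illustration only, not the ISM software used in the study: the function names, the vote
representation, and the majority threshold are our own assumptions, and the sketch omits the
transitive inference that ISM uses to reduce the number of questions posed.

    # Illustrative sketch of ISM-style pairwise structuring; not the actual
    # ISM software. The vote_fn callback and the majority threshold are
    # hypothetical assumptions for demonstration purposes.
    from itertools import permutations

    def structure_elements(elements, vote_fn, majority=0.5):
        """Aggregate group votes on whether element A significantly
        influences element B into a binary influence (adjacency) matrix."""
        n = len(elements)
        influence = [[0] * n for _ in range(n)]
        for a, b in permutations(range(n), 2):
            yes_votes, total_votes = vote_fn(elements[a], elements[b])
            # A relation appears in the final structure only when a clear
            # majority agrees that A influences B.
            if total_votes > 0 and yes_votes / total_votes > majority:
                influence[a][b] = 1
        return influence  # rendered as a directed graph: the problematique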
Figure 1. Steps in the Interactive Management process: (1) generate and clarify ideas
(system elements); (2) vote, rank order, and select elements for structuring; (3) structure
elements; (4) evaluate graphical representation of group logic (element relations); (5)
evaluate the reasoning supporting each relation in the system of logic.
Social psychological factors in collaborative settings
Stahl (2010) contends that the power of collaborative learning stems from its potential to
unite multiple people in achieving the coherent cognitive effort of a group. A primary goal of
CSCL is to explore how this synergy occurs and to design and implement methodologies
which can support and enhance this process. With this in mind, a number of social
psychological variables were considered in the current study.
One outcome of interest in the current study is consensus. Both perceived and
objective consensus are potentially critical variables which need to be considered in efforts to
enhance the successful workings of groups using CSCL tools, particularly if the goal is to use
CSCL tools to enhance group problem solving and decision making. The term consensus
refers to the extent to which two or more people agree in their ratings of a target (Kenny,
Albright, Malloy & Kashy, 1994). Reaching consensus on a solution to a problem is
advantageous for many reasons, especially with regard to implementing an action plan
designed to resolve a problematic situation. If there is a high level of consensus amongst
group members as to key decisions and conclusions, progress toward a solution to a shared
problem may be easier to achieve. For example, Mohammed and Ringseis (2001) found that
groups who reported higher levels of consensus in relation to a problem had greater
expectations about the implementation of decisions reached by the group, and also
experienced higher levels of overall satisfaction. The authors also found that the highest
levels of consensus were evident in groups in which the members questioned each other's
suggestions, accepted legitimate suggestions and incorporated others' viewpoints into their
own perspective. What is less clear from such results is whether the facilitation and support
provided to groups during collaborative discussions influences consensus-building, and if
these effects are similar for both perceived and objective consensus. Perceived consensus
refers to the extent to which members of a group report feeling that consensus exists within
the group. Objective consensus, on the other hand, refers to actual levels of agreement, as
opposed to perceived levels of agreement.
Another important outcome considered in the current study is the group's judgment of
the efficacy of the CSCL tool that they are using. Higher levels of perceived efficacy of the
CSCL tool are an important social outcome. If CSCL tools such as IM are to be adopted by
groups for use in educational and professional settings, it is imperative that they are perceived
as efficacious by the user group. Again, it is unclear if specific types of facilitation and
prompts influence the perception that group members have in relation to the tools and
methodologies that they are using.
Finally, an important social psychological variable to consider in the context of
dialogic interaction, group consensus, and perceived efficacy of group processes, is the level
of trust that exists amongst group members. Research suggests that higher levels of shared
trust in a group lead to increased levels of knowledge sharing (Roberts & O'Reilly, 1974),
with individual group members perceiving knowledge sharing as less costly (Currall &
Judge, 1995). Furthermore, higher levels of shared trust in a group may increase the
likelihood that knowledge received is adequately understood and absorbed so that the
individual can put it to use (Mayer, Davis & Schoorman, 1995). This research suggests that
both trust and the facilitation of dialogue may influence other important outcomes in
collaborative learning environments, including perceived and objective consensus and
perceived efficacy of the methodologies and tools that support learning. Consistent with this
view, Harney, Hogan and Broome (2012) found that collaborative groups working in an
environment that encouraged open dialogue and discussion, and groups higher in
dispositional trust, reported higher levels of perceived consensus, objective consensus and
perceived efficacy of collaborative learning methodologies, when compared with groups
where levels of dispositional trust were lower and where open dialogue and discussion were
restricted. However, the study by Harney, Hogan and Broome (2012) did not manipulate
facilitator prompting strategies. Facilitator strategies warrant investigation in CSCL contexts
as they may interact with levels of dispositional trust in a group to influence outcomes such
as the nature of group argumentation, the level of perceived and objective consensus achieved
by a group, and the perceived efficacy of the CSCL tool used by the group. Therefore, the
current study included dispositional trust as a covariate in the analysis of experimental
prompting effects.
The current study
The current study investigates the effects of task-level versus process-level prompts on
perceived and objective consensus, perceived efficacy, argumentation style, and collaborative
systems model complexity in the context of an IM session. In light of the evidence reviewed
above, it was hypothesised that prompting style during collaborative dialogue and
argumentation is a critical factor in shaping key outcomes of collaborative learning.
Specifically, it was hypothesised that:
1. Process-level prompts would produce higher levels of perceived and objective
consensus and higher perceived efficacy of the IM CSCL tool.
2. Groups that receive task-level prompts would report lower levels of perceived and
objective consensus, and perceived efficacy of the IM collaborative learning tool.
3. Process-level prompts would result in more complex and varied forms of
argumentation in groups. In particular, it is hypothesised that the process-level
prompt condition would result in higher frequency use of propositions,
amplifications, justifications, acknowledgements and challenges (see Table 1).
4. Process-level prompts would result in the development of more complex systems
models. If, as hypothesised, the process-level prompts cultivate more diverse,
sophisticated forms of argumentation, then it follows that this would lead to more
complex and differentiated relational thinking that is less likely to be biased in any
simple heuristic manner by previous relational judgements, and thus result in more
complex representations of the relationships between ideas in the systems structuring
phase. With a more diverse pattern of voting, the matrix structures are likely to be
more differentiated and thus result in more complex systems representations.
Method
Design
A one way ANCOVA was used to assess the effects of prompting style (task-level versus
process-level) on perceived efficacy of IM, while controlling for dispositional trust. A 2
(condition: task-level versus process-level) x 2 (time: pre-intervention versus post-
intervention) mixed ANCOVA was used to assess the effects of task-level versus process-
level prompts on perceived consensus, again controlling for dispositional trust. A Statistica™
coefficient comparison test was used to assess the statistical significance of differences in
objective consensus across groups before and after the experimental manipulation (i.e.,
differences in Kendall's W). Finally, a series of 2 (condition: task-level versus process-level)
x 2 (present versus not present) chi-squared tests were used to examine frequency differences
in dialogic argumentation events across prompting conditions using the CACS coding
system.
Participants
Participants were first and second year psychology students (N = 75) comprising 28 males
and 47 females, aged between 18 and 27 years (M = 19.60, SD = 3.15), from the National
University of Ireland, Galway. Participants were offered research participation credits in
exchange for their participation.
Measures/Materials
Trust
Dispositional trust was measured using a combination of the scales developed by Pearce,
Sommer, Morris, and Frideger (1992) and by Jarvenpaa, Knoll, and Leidner (1998).
Pearce et al. scale included 5 items; the Jarvenpaa, Knoll and Leidner scale included 6 items.
The 11 items were rated on a 5-point Likert scale (1 = strongly agree, 5 = strongly disagree;
e.g., “Most people tell the truth about the limits of their knowledge”, “Most people can be
counted on to do what they say they will do”, and “One should be very cautious to openly
trust others when working with other people”). The scale had good internal consistency in
the current study (α = .72).
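For reference, the internal consistency values reported here and for the scales below can be
computed directly from the item-level scores; a minimal sketch of Cronbach's alpha, with
hypothetical variable names, is as follows.

    # Illustrative computation of Cronbach's alpha from an items matrix
    # (rows = respondents, columns = items); variable names are hypothetical.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                           # number of scale items
        item_vars = items.var(axis=0, ddof=1).sum()  # summed item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
        return (k / (k - 1)) * (1 - item_vars / total_var)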
Perceived efficacy
Perceived efficacy of the IM process itself was measured using a scale developed for use in a
previous study (Harney, Hogan, & Broome, 2012). The scale included 7 items rated on a 5-point
Likert scale (1 = strongly agree, 5 = strongly disagree; e.g. “I believe that Interactive
Management can be used to solve problems effectively”). The scale had good internal
consistency (α = .88).
Perceived consensus
The method of measurement used in this study was similar to that used by Kenworthy and
Miller (2001): participants first gave their opinion (via voting on problem relations) and
were then asked to rate how representative their opinions were in relation to the opinion of
other members of their group. While Kenworthy and Miller asked participants for a
percentage estimate, we measured perceived consensus using a 5-item scale with
five-point Likert ratings (1 = strongly agree, 5 = strongly disagree; e.g., “Generally speaking,
my peers and I approach online social media in a similar manner”). The scale had good
internal consistency (α = .77).
Objective consensus
Objective consensus was measured using Kendall's coefficient of concordance (Kendall's
W) in relation to Likert scale judgements across a random set of ten relational statements.
relational statements were generated from a set of propositions compiled by the authors in
advance of the IM session, and which participants considered during the IM session. A
sample item from this set is: “Increased dissatisfaction with one's own life significantly
aggravates increased unfair judgements of others”. Items were scored by each individual
using a 5-point Likert scale (1 = strongly agree, 5 = strongly disagree). Objective consensus,
as measured by Kendall's W, was computed for each group before and after the experimental
manipulation (i.e., task-level versus process-level prompts). High values occur when there is
greater agreement between raters in the group.
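For readers unfamiliar with the statistic, a minimal sketch of one standard computation of
Kendall's W from a raters-by-statements matrix of Likert scores is given below. It is an
illustration only: variable names are ours, ties are handled by average ranking, and the
tie-corrected variant of W is omitted.

    # Illustrative computation of Kendall's coefficient of concordance (W).
    import numpy as np
    from scipy.stats import rankdata

    def kendalls_w(ratings):
        """ratings: m x n array (m raters, n relational statements)."""
        ranks = np.apply_along_axis(rankdata, 1, np.asarray(ratings, float))
        m, n = ranks.shape
        rank_sums = ranks.sum(axis=0)                    # R_i per statement
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of rank sums
        return 12 * s / (m ** 2 * (n ** 3 - n))          # 0 = none, 1 = perfect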
Style of argument
Style of argument was assessed using the Conversational Argument Coding Scheme (Seibold
& Meyers, 2007). The Conversational Argument Coding Scheme (CACS) was developed to
investigate the argumentative micro processes of group interaction (Beck, Gronewold &
Western, 2012). The CACS includes five argument categories, which contain a total of
sixteen argument codes (see Table 1). The five argument categories include: generative
mechanisms (assertions and propositions), which are “potentially disagreeable statements”
and are considered to reflect simple arguments (Meyers & Brashers, 1998); reasoning
activities (elaborations, responses, amplifications, and justifications), which are higher-level
argument messages and are most often extensions of generative mechanisms; convergence-
seeking activities (agreement and acknowledgements), which include recognition of and/or
agreement with other statements; disagreement-relevant intrusions (objections and
challenges), which consist of statements denying agreement with arguables or posing further
questions; and delimitors (frames, forestall/secure and forestall/remove), which consist of
messages designed to frame or contextualize the conversation. The remaining codes are
termed nonarguables (process, unrelated and incompletes), which consist of statements
regarding how the group approaches the task, side issues, and incomplete or unclear ideas
and statements. Multiple Episode Protocol
Analysis (MEPA; Erkens, 2005) was used to facilitate the CACS analysis. MEPA is
computer software designed for interaction analysis, in which transcribed data can be coded
or labelled on several dimensions or levels.
Table 1. Conversational Argument Coding Scheme (Seibold & Meyers, 2007), with examples
from the transcripts.

I. Arguables

A. Generative mechanisms
1. Assertions: Statements of fact or opinion.
   Example: “I just suggested increased feelings of anger towards yourself and others”
2. Propositions: Statements that call for support, action, or conference on an
   argument-related statement.
   Example: “Wouldn't this just be related directly to being self-conscious about your
   image anyways?”

B. Reasoning activities
3. Elaborations: Statements that support other statements by providing evidence, reasons,
   or other supports.
   Example: “..because, you know, sometimes you're emotionally affected and a lot of the
   time people will feel angry”
4. Responses: Statements that defend arguables met with disagreement.
   Example: “..but then whoever has been left out just ends up being paranoid about people
   that they thought they could trust and things”
5. Amplifications: Statements that explain or expound upon other statements to establish
   the relevance of the argument through inference.
   Example: “Em, I guess what's been suggested is that you might have less time to engage
   in other activities and from the range of interests you might have, you might not have
   time for them anymore”
6. Justifications: Statements that offer validity of previous or upcoming statements by
   citing a rule of logic (provide a standard whereby arguments are weighed).
   Example: “Just putting yourself in that situation like, if you just put yourself in the shoes
   of a person who wouldn't be invited and just think about what they would feel?”

II. Convergence-seeking activities
7. Agreement: Statements that express agreement with another statement.
   Example: “Yeah, I think what she said is right”
8. Acknowledgement: Statements that indicate recognition and/or comprehension of
   another statement but not necessarily agreement with another's point.
   Example: “I think that it could be, for some people, but I've never experienced guilt
   from Facebook”

III. Disagreement-relevant intrusions
9. Objections: Statements that deny the truth or accuracy of an arguable.
   Example: “I don't think it's significant”
10. Challenges: Statements that offer problems or questions that must be solved if
   agreement is to be secured on an arguable.
   Example: “Is it increased perception of being judged in a positive way though? You
   change your personality to be judged in a positive way, because you think you were
   being judged in a negative way?”

IV. Delimitors
11. Frames: Statements that provide a context for and/or qualify arguables.
   Example: “Em, it's probably within themselves, that they doubt themselves so much
   more.”
12. Forestall/secure: Statements that attempt to forestall refutation by securing common
   ground.
   Example: No examples in transcript.
13. Forestall/remove: Statements that attempt to forestall refutation by removing possible
   objections.
   Example: No examples in transcript.

V. Nonarguables
14. Process: Non-argument-related statements that orient the group to its task or specify
   the process the group should follow.
   Example: No examples in transcript.
15. Unrelated: Statements unrelated to the group's argument or process (tangents, side
   issues, self-talk, etc.).
   Example: No examples in transcript.
16. Incompletes: Statements that do not contain a complete, clear idea because of
   interruption or a person's discontinuing a statement.
   Example: “I don't know, eh…(discontinued)”
Complexity of IM problematiques
Complexity scores are based on the total activity of the paths of influence in the structure.
This involves computing the sum of the antecedent and succedent scores for each element.
The antecedent score is the number of elements lying to the left of an element in the
structure, which influence it. The succedent score is the number of elements lying to the
right of an element in the structure, which it influences (Warfield & Cardenas, 1994).
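A minimal sketch of this computation, assuming the structure is held as a binary influence
matrix (as in the ISM sketch earlier), is shown below; the function name is ours, not
Warfield and Cardenas's.

    # Illustrative complexity score for a problematique held as a binary
    # influence matrix (influence[a][b] = 1 if element a influences b).
    def complexity_score(influence):
        n = len(influence)
        total = 0
        for e in range(n):
            antecedent = sum(influence[a][e] for a in range(n))  # influence e
            succedent = sum(influence[e][b] for b in range(n))   # e influences
            total += antecedent + succedent                      # activity of e
        return total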
Interpretive Structural Modelling
Interpretive Structural Modelling (ISM) is a computer-mediated, idea-structuring
methodology that is designed to facilitate group problem solving (Warfield & Cardenas,
1994). The ISM programme was run on a PC by facilitators. The relations which groups were
asked to consider and vote on were displayed on a large screen via an overhead projector.
Procedure
During recruitment, prospective participants were presented with information in relation to
the nature of the study, including details as to its focus on collaborative inquiry and the
personal and social consequences of online social media. Participants were invited to register
online via SurveyGizmo, and were required to complete a dispositional trust scale as part of
the registration process. Participants were randomly allocated to one of four groups, two in
the task-level condition (n = 20, n = 20) and two in the process-level condition (n = 17, n = 18).
Interactive Management sessions
A total of four IM sessions were carried out, with no more than 20 students in any one
session. Each session lasted approximately 180 minutes. Participants in each of the four
sessions were directed to a room in which chairs were arranged in a circle, such that all of the
group members could see each other. Before the IM session began, each participant was
given a document which contained a participation information sheet, a perceived consensus
scale and an objective consensus scale. The participants were asked to read the information
sheet, which contained an introductory paragraph about online social media. Participants
were then required to complete the aforementioned scales. Once all scales had been
completed, participants were given a list of potential negative consequences of social media
usage, which were compiled based on a review of the literature. Next, the IM process was
explained to participants and then the session began.
The design of the prompting conditions was informed by the work of Hattie and
Timperley (2007) and Hattie and Gan (2011). The task-level condition consisted primarily of
simple, task-level prompts, while the process-level condition consisted of task-level prompts,
with the addition of process-level prompts. In each condition, an independent facilitator was
given a specific set of prompts or instructions which could be used as part of the process (see
Figure 2). A second facilitator was present to oversee the process, and assist with the input of
ideas into the ISM software. In both conditions, participants were asked to silently generate a
set of ideas in addition to the idea set provided which they felt had a significant impact on the
problem at hand (i.e. negative consequences of online social media). This is referred to as the
Idea Generation phase of IM. Specifically, the nominal group technique (NGT) was used
(Delbeq, Van De Ven, & Gustafson, 1975). The NGT is a method that allows individual ideas
to be pooled, and is ideally used when there are high levels of uncertainty during the idea
generation phase. NGT involves five steps: (a) presentation of a stimulus question; (b) silent
generation of ideas in writing by each participant working alone; (c) presentation of ideas by
participants, with recording on flipchart by the facilitator of these ideas and posting of the
flipchart paper on walls surrounding the group; (d) serial discussion of the listed ideas by
participants for the sole purpose of clarifying their meaning; and (e) implementation of a closed
voting process in which each participant is asked to select and rank five ideas from the list,
with the results compiled and displayed for review by the group. In the current study,
participants began by generating ideas in response to the question: “What are the negative
effects of online social media?” Once the initial silent idea generation was complete, and
each participant had their own list of ideas to offer, the facilitator went around the room,
asking each participant to present their idea to the rest of the group. They were asked to
explain their idea clearly and succinctly. The facilitator would then open the discussion up to
the group, by asking “Does anyone have any other ideas?” While these guidelines were also
followed by the facilitator in the process-level prompt condition, there was also the addition
of some further prompts. In the process-level prompt condition, the facilitator could, where
necessary, ask for further clarification, suggest that some ideas offered may be similar in
nature and require further examination, suggest merging of ideas, suggest breaking down of
ideas which appear to have multiple-components, suggest considering the relevance of the
idea offered in the problem-context, and suggest considering the generalizability of the idea
offered (see Figure 2).
The next phase is the Idea Structuring phase. This is the phase during which the
primary computer supported collaboration took place, using the ISM software. In an effort to
reduce cognitive load, facilitate focus, and build the components of the systems model, the
ISM software presents on screen two elements at a time, asking the question “Does A
significantly influence B?” As each of these relational statements is presented on the screen,
the facilitator would open the discussion to the room, and ask if anyone had a “yes” or “no”
preference at this stage. As participants indicated their preference, the facilitator would ask
why they had this stated preference, and then request other opinions from the group. The
facilitator would then request a show of hands from the group, and a vote would be taken and
recorded by the ISM software. Again, these guidelines were also followed by the facilitator in
the process-level prompt condition, but with the addition of further prompts and instructions.
In the process-level condition, the facilitator could, where necessary, ask for contrary
opinions, ask for support or evidence, ask the group to further consider the relevance of
arguments provided, and suggest considering the generalizability of the reasons and evidence
offered (see Figure 2).
Figure 2. Task-level and process-level prompts.
Results
Because prompts were delivered at the group level, while perceived efficacy and
perceived consensus were measured at the individual level, unconditional models using SAS
Proc Mixed were conducted. These models were tested separately for each prompt condition
and each outcome, to determine whether or not there was any significant clustering by group
status. These analyses indicated that the intra-class correlations ranged between 0 and 0.0022
(p = .49). As such, it was deemed that further multi-level analysis was not necessary. The
results of the ANCOVA testing of perceived efficacy and perceived consensus are presented
below.
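The clustering check itself was run in SAS Proc Mixed; purely for illustration, an equivalent
unconditional model and intra-class correlation could be obtained in Python roughly as
follows (the column names are hypothetical).

    # Illustrative Python analogue of the SAS Proc Mixed clustering check.
    # Column names ("perceived_efficacy", "im_group") are hypothetical.
    import statsmodels.formula.api as smf

    def icc_unconditional(data, outcome="perceived_efficacy", group="im_group"):
        """Fit an intercept-only (unconditional) mixed model and return the
        intra-class correlation: between-group / total variance."""
        fit = smf.mixedlm(f"{outcome} ~ 1", data, groups=data[group]).fit()
        var_between = float(fit.cov_re.iloc[0, 0])  # group-level variance
        var_within = fit.scale                      # residual variance
        return var_between / (var_between + var_within)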
Perceived efficacy
Perceived efficacy of the IM methodology was assessed at post-test only. A one way
ANCOVA was used to assess the effects of prompting style (condition: task-level versus
process-level) on perceived efficacy of IM, while controlling for dispositional trust. The
ANCOVA revealed a significant main effect of condition, F(1,72) = 38.00, p < .001,
ηp2= .345, d = 1.51, with higher perceived efficacy in the process-level condition (M = 24.38,
SD = 2.71) than in the task-level condition (M = 22.98, SD = 2.63). No other effects were
observed.
Perceived consensus
A 2 (condition: task-level versus process-level) x 2 (time: pre-intervention versus post-
intervention) mixed ANCOVA was used to assess the effects of task-level versus process-
level prompts on perceived consensus, again controlling for dispositional trust. The
ANCOVA revealed a significant time x condition interaction, F(1,72) = 8.91, p =.004,
ηp2= .11, d = 0.83, with a significantly greater increase in perceived consensus in the process-
level condition from pre (M = 18.06, SD = 2.22) to post (M = 20.54, SD = 2.55; t = 4.33, p
<.01) than in the task-level condition from pre (M = 17.95, SD = 2.83) to post (M = 18.10, SD
= 2.34; t = .18, p = .86). The results also revealed a significant main effect of the covariate,
dispositional trust, on perceived consensus, F(1,72) = 6.48, p = .013, ηp2= .083, d = 0.82,
with higher trust associated with higher levels of perceived consensus.
Objective consensus
Kendall's coefficient of concordance (Kendall's W) was used to measure concordance (i.e.,
agreement of ratings in relation to specific ISM paths of influence) within groups before and
after the experimental manipulation. While there was a trend for objective consensus to
increase in all groups, these differences were not statistically significant (p > .05 for all four
comparisons; see Table 2).
Table 2. Objective consensus (Kendall's W)

Condition        Pre    Post
Task-level       .23    .25
Process-level    .21    .23
Argument style
A series of 2 (condition: task-level versus process-level) x 2 (present versus not present) chi-
squared tests were used to assess the statistical significance of differences in argumentation
codes (as per the CACS) across prompting conditions. Of the 16 possible CACS argument
codes which comprise the five argument categories, 12 were observed at least once in the
process-level condition, 8 were observed at least once in the task-level condition, and 4 were
not observed in either condition. Significant differences were observed across conditions for 3
argument codes, with higher frequency occurrence in the process-level prompt condition in
each case: specifically, for Amplifications (χ²(1) = 9.99, p = .002, V = .123, d = 0.76),
Challenges (χ²(1) = 7.45, p = .006, V = .118, d = 0.67), and Propositions (χ²(1) = 6.27, p
= .012, V = .108, d = 0.61). For each of the remaining codes, with the exception of objections,
higher incidence was also observed in the process-level condition than in the task-level
condition; however, these differences were not statistically significant. Descriptive data are
presented in Figure 3.
Figure 3. Incidence of CACS codes across prompting conditions.
* = significant at .05 level, ** = significant at 0.01 level
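Each of the chi-squared comparisons reported above reduces to a test on a 2 x 2 contingency
table of utterance counts; a minimal sketch, with hypothetical counts, is shown below.

    # Illustrative 2 (condition) x 2 (code present vs absent) chi-squared
    # test for a single CACS code; the cell counts are hypothetical.
    from scipy.stats import chi2_contingency

    def code_presence_test(task_present, task_absent, proc_present, proc_absent):
        table = [[task_present, task_absent],
                 [proc_present, proc_absent]]
        chi2, p, dof, expected = chi2_contingency(table)
        return chi2, p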
Finally, analysis of the IM-generated problematiques (see Figures 4-7) shows marked
differences in the complexity of argument structures across conditions. The average complexity
score for the problematiques generated by the process-level prompt groups is 25.5; the
average complexity score for the problematiques generated by the task-level prompt groups is
14.5.
Figure 4. IM problematique generated in the process-level prompt condition
Figure 5. IM problematique generated in the process-level prompt condition
Figure 6. IM problematique generated in the task-level prompt condition
Figure 7. IM problematique generated in the task-level prompt condition
Discussion
The current study examined the effects of task-level versus process-level prompts on
perceived efficacy of the IM method, perceived consensus, objective consensus, and
argumentation style and complexity in the context of an IM session. Results indicated that,
compared to those in the task-level prompt condition, those in the process-level prompt
condition reported higher levels of perceived consensus in response to the group design
problem. Furthermore, those in the process-level prompt condition also reported higher levels
of perceived efficacy of the IM process. Finally, analysis of the dialogue from the IM
sessions revealed that those in the process-level prompt condition exhibited higher levels of
sophistication in their arguments, as revealed by their CACS scores and the complexity of
their IM-generated problematiques.
Achieving higher levels of consensus and promoting more coherent collective action
was a core objective of John Warfield's when he first developed the IM methodology
(Warfield & Cardenas, 1994). Importantly, in the current study, while perceived consensus
levels increased in both prompting conditions, the increase was significantly greater in the
process-level condition. This suggests that while the IM method itself is effective in
promoting consensus in a collaborative group, the role of the facilitator and in particular the
instructional support provided by the facilitator has a significant impact on the consensus-
building process.
Furthermore, the observed link between higher dispositional trust and higher perceived
consensus in the current study is consistent with previous research which suggests that trust
can influence critical psychosocial processes that may impact on levels of consensus in a
collaborative group. For example, research suggests that dispositional trust is associated with
a preference for social negotiation, critical thinking and solution finding (Kreijns et al.,
2002). These factors may have influenced the positive relationship between dispositional trust
and perceived consensus in relation to the collaborative efforts of the group in the current
study. It is noteworthy that, while perceived consensus increased significantly in the process-
level prompt condition, a significant increase was not seen in objective consensus. This
suggests that while the group felt that they were moving towards greater levels of shared
understanding and agreement, enhanced by the style of facilitation, their actual level of
agreement, in terms of Likert scale agreement/disagreement with IM relational statements,
did not increase to the same degree. In practical terms, the implications of the two forms of
consensus not coinciding may be different depending on which is higher. If perceived
consensus is higher than objective consensus, it would be expected that the group would
continue to be satisfied with the group process and function effectively, as previous research
suggests (e.g. Mohammed & Ringseis, 2001). However, if objective consensus is high, but
perceived consensus is low, this suggests that although the level of objective agreement in
relation to the topic is high, the group is not aware of this level of agreement, or does not feel
that their interactions and discussions reflect agreement. This, in turn, might suggest that the
group is not functioning optimally, or that other factors may be having a negative impact on
consensus-based interactions.
The results here also suggest more time may be required to increase objective
consensus in relation to complex issues, whereas increased levels of perceived consensus
may be cultivated in a relatively short time frame by the facilitator and by certain qualities of
the collaborative discussion (e.g., turn-taking, inclusiveness, democratic decision making). It
is also possible that, based on the findings of this study, and consistent with findings from
previous research, the positive group behaviours associated with higher perceived consensus
(positive expectations and overall satisfaction with the group process), may help a group to
achieve high objective consensus over time.
These results represent significant findings in relation to collaborative learning, and
CSCL in particular, as higher levels of perceived consensus are likely to lead to higher levels
of endorsement and engagement by the group in any action or response to a shared problem.
For example, if a group feels strongly that there is a strong level of consensus in relation to
the understanding and conception of a problem that they are working on together, they are
more likely to be committed to, and satisfied with, any plan which comes from the newly-
formed collaborative understanding (Mohammed & Ringseis, 2001). The effect of process-
level prompts on perceived efficacy has further implications for CSCL. While results showed
that, broadly speaking, participants across both prompting conditions found the computer-
facilitated group design methodology to be a useful and valid method of mapping and
structuring the interdependencies among problem elements (for example, on average,
between 80% and 90% of participants across both conditions agreed or strongly agreed with the
statement “I believe that Interactive Management can be used to help a group achieve
consensus about a problem”), those in the process-level prompt group reported significantly
higher levels of perceived efficacy in relation to the IM process. Therefore, the prompts
provided by the facilitator may be important for the overall success of the process, and for the
level of support for the methodology by the group. This support for, or endorsement of, the
methodology may be important in the context of efforts to sustain the ongoing use of a
collaborative methodology as part of a problem-solving strategy adopted by students or other
working groups.
With regard to the types of argumentation identified using the CACS, overall,
reasoning activities accounted for 37% of coded utterances, generative
mechanisms accounted for 20%, disagreement-relevant intrusions accounted for 15%,
convergence-seeking activities accounted for 14% and delimitors accounted for only 0.5%.
The remaining 13.5% of coded utterances were nonarguables. This suggests that the
argumentation across groups was reasonably complex, as the arguments did not rely heavily
on generative mechanisms (assertions and propositions) as is typically the case in simple
argumentation (Canary, Brossmann, & Seibold, 1987). While these figures suggest that, in
general, the argumentation was reasonably complex, the results of the CACS analysis in
MEPA showed that the process-level prompt condition displayed higher levels of argument
sophistication, with higher incidence of CACS codes across all major categories.
Furthermore, when compared with those in the task-level prompt group, participants in the
process-level prompt condition demonstrated significantly higher levels of propositions,
amplifications and challenges. This suggests that the process-level prompt condition was
engaging at a higher level with the claims presented during the IM structuring work, and
made more effective moves towards reaching a level of understanding and consensus within
the group prior to voting. For example, while elaborations (i.e., statements that support other
statements by providing evidence, reasons or other support, e.g., “Because of peer pressure,
you know, people trying to get you to do things”) were similarly evident in both groups,
amplifications (i.e., statements that explain or expound upon other statements to establish the
relevance of an argument through inference, e.g., “I think they are related because change in
personality would be more kind self-conscious I suppose, but not perception of being
judged”) were observed more often in the process-level prompt condition. In this way, those
in the process-level prompt condition were moving beyond accumulation of evidence and
support in their reasoning activity - they were working further to establish how this reasoning
relates to the problem at hand, and more specifically the relevance of their reasoning.
Similarly, while the frequency of objections (i.e., statements that deny the truth or accuracy of
an arguable, e.g., “No, I think it would be the other way around”) was almost identical across
the two prompt conditions, challenges (i.e., statements that offer problems or questions that
must be solved if agreement is to be secured on an arguable, e.g., “Well it kind of depends, on
whether your self-consciousness affects your ability to socialise”) occurred more often in the
process-level prompt condition. This suggests that those in the process-level prompt
condition engaged more critically with the information at hand, and participated in more
productive argumentation. Finally, of the 16 types of argument codes which comprise the
CACS, 12 were observed at least once in the process-level prompt condition, whereas only 8
were observed at least once in the task-level prompt condition, highlighting the greater
diversity of argumentation styles demonstrated by the process-level prompt groups.
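As a minimal illustration of this frequency and diversity tally (carried out in the study using MEPA), the sketch below counts hypothetical CACS codes per condition. The utterance code streams are invented for illustration; the real scheme distinguishes 16 code types and real transcripts are far longer.

```python
from collections import Counter

# Hypothetical streams of CACS codes, one per coded utterance (illustrative only).
task_level = ["assertion", "elaboration", "objection", "agreement", "assertion"]
process_level = ["proposition", "amplification", "challenge", "elaboration",
                 "assertion", "challenge", "agreement"]

for label, codes in (("task-level", task_level), ("process-level", process_level)):
    counts = Counter(codes)
    total = sum(counts.values())
    # Diversity = number of distinct code types observed at least once;
    # proportions mirror the percentage breakdown reported above.
    print(f"{label}: {len(counts)} distinct codes;",
          {code: round(n / total, 2) for code, n in counts.items()})
```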
The IM methodology is well established and has been used successfully in a wide variety of
scenarios to accomplish many different goals, including assisting city councils in making
budget cuts (Coke & Moore, 1981), developing instructional units (Sato, 1979), improving
the U.S. Department of Defense acquisition process (Alberts, 1992), promoting world peace
(Christakis, 1987), improving tribal governance processes in Native American communities
(Broome, 1995a, 1995b; Broome & Christakis, 1988; Broome & Cromer, 1991), and training
facilitators (Broome & Fulbright, 1995). However, IM is a facilitated process (Hogan,
Harney, & Broome, 2014), the success of which is heavily influenced by the support,
guidance, and instruction provided by the facilitator. While the importance of good
facilitation is often highlighted by expert facilitators (Hmelo-Silver, 2002), the current study
provides one of the first experimental demonstrations of the effects of prompt style on
outcomes in the application of IM.
In the current study, the students were tasked with developing a consensus-based
model of the negative consequences of online social media usage, a focus of discussion that
most students reported as interesting and relevant. The process of model building involved
the generation of ideas in relation to the problem, rank-ordering and voting on the most
critical ideas, and discussion and decision-making regarding the interdependencies between
these ideas. Overall, when examining the relational complexity of the models or structural
hypotheses generated by students, the current study revealed that those in the process-level
prompt condition arrived at a more complex, consensus-based understanding of the relations
between the negative consequences of online social media usage. While each group began
with the same initial set of ideas, added an almost equivalent number of additional ideas, and
ultimately structured the same number of ideas during the model building process, the results
of the groups' collaborative efforts differed in important ways. During this process, higher
dispositional trust and process-level prompts were shown to have positive effects on social
psychological variables of key importance in collaborative learning settings, namely
consensus and perceived efficacy. Process-level prompts also helped to promote an
enhanced style of dialogue and argumentation and increased the overall complexity of
consensus-based models generated by the groups.
A closer look at the models or problematiques generated by groups reveals variations
in complexity, which are in line with the varying degrees of argument complexity measured
by the CACS. For example, while “decrease in personal privacy” appears as a primary driver
in two of the problematiques (one in each prompt condition, that is, Figures 4 and 7), the paths
of influence stemming from this idea are more elaborate in the process-level prompt
condition (see Figure 4). In both the task-level prompt and process-level prompt models
referred to above, “decrease in personal privacy” had a significant aggravating effect on
“increased jealousy in relation to the lives of others”. However, in the process-level prompt
condition, this path of influence is mediated by “poorer self-image”. This suggests that the
process-level prompt groups, through more complex and varied argumentation and
exploration, further developed this relationship, and reached a consensus on a potential
mediating factor. The additional complexity in these problematiques is consistent with, and
representative of, the statistically significant differences in prevalence of more complex and
varied CACS codes. In other words, the consequence of different patterns of argumentation is
reflected in the models generated by the groups. Crucially, when taken alongside the finding
that students in the process-level prompt condition reported higher levels of consensus and
perceived efficacy, this suggests that the use of effective prompting not only enhances the
quality of students' interactions with the CSCL tool, but also their motivation to engage with
it in future. Finally, the finding that process-level prompting resulted in higher levels of perceived
consensus has significant implications for learning in the group context. An increase in
perceived consensus here reflects changes in attitudes and opinions in relation to the topic,
showing that the process-level prompting facilitated students' learning from their peers, and
the generation of a shared level of understanding. Furthermore, by measuring students'
attitudes in relation to their perception of consensus within the group, the students are given
the opportunity to reflect on their learning throughout the CSCL process. This reflection is
important, given that, according to Michaelsen and Sweet (2008), students often fail to realise
how much they have learned in team-based learning. By taking time to think about their
perceived consensus after the group discussion, students become aware of the resulting
changes in their attitudes and opinions. During this process, the students may be reflecting on
shared mental models. Tjosvold (2008) argues that open-minded discussion of diverse views
is a social process which results in increased awareness of the complexity of a problem. By
means of such argumentation and discussion, the group approaches a convergence of
meaning in order to develop shared mental models (Van den Bossche, Gijselaers, Segers,
Woltjer, & Kirschner, 2011). The increase in perceived consensus in the process-level prompt
condition in the current study may reflect a similar learning process.
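The problematiques discussed above can be read as directed graphs in which an edge from idea A to idea B means "A significantly aggravates B". The sketch below uses illustrative edges only (not the published models) to show how the mediated path observed in the process-level prompt condition can be represented and traversed.

```python
# Directed-graph reading of a problematique: an edge A -> B means
# "A significantly aggravates B". Edges here are illustrative only.
problematique = {
    "decrease in personal privacy": ["poorer self-image"],
    "poorer self-image": ["increased jealousy in relation to the lives of others"],
    "increased jealousy in relation to the lives of others": [],
}

def paths(graph, start, end, trail=()):
    """Enumerate all directed paths from start to end."""
    trail = trail + (start,)
    if start == end:
        return [trail]
    return [p for nxt in graph.get(start, []) for p in paths(graph, nxt, end, trail)]

for p in paths(problematique,
               "decrease in personal privacy",
               "increased jealousy in relation to the lives of others"):
    print(" -> ".join(p))
```

Under this reading, the task-level models contain the direct edge from privacy to jealousy, whereas the process-level models insert the mediating node, lengthening the path and increasing relational complexity.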
Conclusion
The results of this study suggest that adequate facilitation, in particular the use of process-
level prompts to support reflection and deliberation, plays a vital role in the outcomes of
computer-supported collaborative learning. This study has shown that process-level prompts
have a positive effect on the IM collaborative learning process, a process which has both
educational and organisational applications. The positive effect of prompting was evidenced
in a stronger sense of the perceived efficacy of the collaborative methodology, in enhanced
levels of consensus, and in more productive argumentation. These results are consistent with
the views of Pea (2004) and Strijbos, Kirschner, and Martens (2004), specifically, that in
order to cultivate successful collaboration
in students, attention must be paid to the design of the collaborative environment, including
the provision of scaffolding, leadership, and support by the facilitator. Furthermore, when
considered alongside research findings suggesting that teams may not operate at optimum
levels on their own, or that they may fail to achieve and sustain quality interactions due to the
time-consuming nature of the work (Dickinson & McIntyre, 1997; Rummel & Spada, 2005),
the results of the current study highlight the importance of effective facilitation and
instruction in CSCL settings for a variety of outcomes. In order to optimize the power of
CSCL, collaborative problem-solving and collective action, it is necessary both to create the
optimal working conditions for achieving consensus and to provide the right supports and
framework for effectively harnessing a group's collective intelligence. Such a framework
should address three key factors: the tools, individual talents, and team dynamics that support
collaborative learning and collective action (Hogan, Harney & Broome, 2014). By engaging
with collaborative tools such as IM, and providing the right support during the IM
collaborative process, we believe it is possible to enhance the collective power of teams by
cultivating and harnessing their collaborative, critical and systems thinking skills.
Limitations and future research
There are a number of limitations to the current study which must be noted. First, in relation
to argumentation, the IM sessions were conducted in an educational environment, with
discussions focused on a problem that may not have been considered critical to students.
Although many studies employing the CACS have adopted a similar approach (e.g. Beck,
Gronewold & Western, 2012), the nature of the problem selected may have had an effect on
the nature and level of complexity of argumentation in each group. Future research should
examine the effects of prompting and facilitation in a variety of real-world problem solving
and decision-making contexts with groups that are working to resolve more critical problems
that impinge upon their adaptive success as a group. Students in the current study, however,
did appear to engage with the topic in a way that reflected their interest in the personal and
social consequences of social media usage.
Second, the groups in this study consisted of between 17 and 20 participants. These
groups may be considered to be quite large relative to other collaborative learning groups.
However, the group size is consistent with standard IM procedure, with groups typically
consisting of 12 to 20 participants. Also, the size of the group is consistent with tutorial class
sizes in our university. As such, we feel that the group size reflects a classic IM systems
building session and contributes to the ecological validity of the findings, which demonstrate
that students in classic tutorial size groups can work collaboratively to develop systems
models in relation to complex problems. We do, however, recognise that variations in group
size may influence collaborative dynamics and the effects of prompts and facilitation on
group deliberation and decision making. Future research should attempt to replicate these
effects using a combination of both smaller groups and larger groups in the same experiment,
that is, to test directly for the effects of group size on outcomes.
Third, there was a gender imbalance in the sample of this study with a ratio of
approximately 3:2 females to males. This is a common sampling issue in university samples,
particularly with regard to psychology-based research (Skinner & Louw, 2009). In relation to
the results of this study, while some research has found that, in CSCL settings, females are
more likely to qualify and justify their assertions (Fahy, 2003; Smith, McLaughlin &
Osborne, 1997) whereas males tend to assert opinions as facts (Fahy, 2002), other studies
(e.g. Ding & Harskamp, 2009) have found that gender differences are diminished when hints
(e.g. prompts) are provided.
Future research should also seek to analyse group network dynamics to see if process-level
prompting is associated with higher levels of coordination in the group, as opposed to a
higher prevalence of key argument types. Also, research should seek to examine if peer-
centered process-level prompting is more effective than facilitator-driven prompting in
promoting exploratory talk and higher levels of coordinated and networked activity in
collaborative groups.
References
Ackoff, R. L. (1981). Creating the corporate future: Plan or be planned for. New York: John
Wiley and Sons.
Alvero, A., Bucklin, B., & Austin, J. (2001). An Objective Review of the Effectiveness and
Essential Characteristics of Performance Feedback in Organizational Settings (1985–
1998). Journal of Organizational Behavior Management, 21(1), 3–30. doi:
10.1300/J075v21n01_02.
Alberts, H. (1992, March). Acquisition: Past, present and future. Paper presented at the
meeting of the Institute of Management Sciences and Operations Research Society,
Orlando, FL.
Andriessen, J. (2006). Arguing to learn. In R. K. Sawyer (Ed.), The Cambridge Handbook of
the Learning Sciences (pp. 443–460). New York: Cambridge University Press.
Argyris, C. (1982). Reasoning, learning, and action: Individual and organizational. San
Francisco: Jossey-Bass.
Ashby, W. R. (1958). An Introduction to Cybernetics. New York: Wiley.
Asterhan, C. S., & Schwarz, B. B. (2010). Online moderation of synchronous e-
argumentation. International Journal of Computer-Supported Collaborative Learning,
5(3), 259-282. doi: 10.1007/s11412-010-9088-2.
Baker, M. J. (2003). Computer-Mediated Argumentative Interactions for the Co-Elaboration
of Scientific Notions. In J. Andriessen, M. J. Baker, & D. D. Suthers (Eds.), Arguing to
learn: Confronting Cognitions in Computer-Supported Collaborative Learning
Environments (pp. 47–78). Dordrecht: Kluwer Academic.
Baker, M. J., Quignard, M., Lund, K., & Séjourné, A. (2003, June). Computer-supported
collaborative learning in the space of debate. Paper presented at the International
Conference on Computer Support for Collaborative Learning: Designing for Change in
Networked Learning Environments, Dordrecht, The Netherlands: doi:10.1007/978-94-
017-0195-2_4.
Balcazar, F. E., Shupert, M. K., Daniels, A. C., Mawhinney, T. C., & Hopkins, B. O. (1989).
An Objective Review and Analysis of Ten Years of Publication in the Journal of
Organizational Behavior Management. Journal of Organizational Behavior Management,
10(1), 7–38. doi:10.1300/J075v10n01_02.
Barr, S. H., & Conlon, E.J. (1994). Effects of distribution of feedback in work groups.
Academy of Management Journal, 37(3), 641-655. doi:10.2307/256703.
Barron, B. (2000). Achieving Coordination in Collaborative Problem-Solving Groups. The
Journal of the Learning Sciences, 9(4), 403–43. doi:10.1207/S15327809JLS0904_2.
Beck, S. J., Gronewold, K., & Western, K. (2012). Intergroup argumentation in city
government decision making: The Wal-Mart dilemma. Small Group Research, 43(5), 587-612.
doi: 10.1177/1046496412455435.
Bell, P. (2004). Promoting students' argument construction and collaborative debate in the
science classroom. In M.C. Linn, E.A. Davis, & P. Bell (Eds.), Internet environments for
science education (pp.115-143). Mahwah, NJ: Lawrence Erlbaum Associates.
Berthold, K., Nückles, M., & Renkl, A. (2007). Do learning protocols support learning
strategies and outcomes? The role of cognitive and metacognitive prompts. Learning and
Instruction, 17, 564–577. doi: 10.1016/j.learninstruc.2007.09.007.
Boulding, K. E. (1966). The impact of the social sciences. New Brunswick, NJ: Rutgers
University Press.
Broome, B. J. (1995a). Collective design of the future: Structural analysis of tribal vision
statements. American Indian Quarterly, 19(2), 205-228.
Broome, B. J. (1995b). The role of facilitated group process in community-based planning
and design: Promoting greater participation in Comanche tribal governance. In L. R. Frey
(Ed.), Innovations in group facilitation: Applications in natural settings (pp. 27-52).
Cresskill, NJ: Hampton Press.
Broome, B. J. (2006). Applications of Interactive Design Methodologies in Protracted
Conflict Situations. Facilitating group communication in context: Innovations and
applications with natural groups: Hampton Press.
Broome, B. J., & Chen, M. (1992). Guidelines for computer-assisted group problem-solving:
Meeting the challenges of complex issues. Small Group Research, 23(2), 216-236.
doi:10.1177/1046496492232005.
Broome, B. J., & Christakis, A. N. (1988). A culturally-sensitive approach to tribal
governance issue management. International Journal of Intercultural Relations, 12(2),
107-123. doi:10.1016/0147-1767(88)90043-0.
Broome, B. J., & Cromer, I. L. (1991). Strategic planning for tribal economic development: A
culturally appropriate model for consensus building. International Journal of Conflict
Management, 2(3), 217-234. doi:http://dx.doi.org/10.1108/eb022700.
Broome, B. J., & Fulbright, L. (1995). A multi-stage influence model of barriers to group
problem solving. Small Group Research, 26(1), 25-55. doi:10.1177/1046496495261002.
Brown, A. L., & Palincsar, A. S. (1989). Guided, cooperative learning and individual
knowledge acquisition. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays
in honor of Robert Glaser (pp. 393-451). Hillsdale, NJ: Lawrence Erlbaum Associates,
Inc.
Canary, D. J., Brossmann, B. G., & Seibold, D. R. (1987). Argument structures in decision-
making groups. Southern Speech Communication Journal, 53(1), 18-37. doi:
10.1080/10417948709372710.
Chi, M. T. H. (2000). Self-explaining expository texts: The dual processes of generating
inferences and repairing mental models. In Glaser, R. (Ed). Advances in Instructional
Psychology (pp. 161-238). Mahwah, NJ: Lawrence Erlbaum Associates.
Chi, M. T. H., de Leeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting self-explanations
improves understanding. Cognitive Science, 18, 439-477. doi:
10.1207/s15516709cog1803_3.
Christakis, A. N. (1987). Systems profile: The Club of Rome revisited. Systems Research,
4(1), 53-58. doi:10.1002/sres.3850040107.
Coke, J. G., & Moore, C. M. (1981). Coping with a budgetary crisis: Helping a city council
decide where expenditure cuts should be made. In S. W. Burks & J. F. Wolf (Eds.),
Building city council leadership skills: A casebook of models and methods (pp. 72-85).
Washington, DC: National League of Cities.
Currall, S.C. & Judge, T.A. (1995). Measuring trust between organizational boundary role
persons. Organizational Behavior and Human Decision Processes, 64(2), 151-170. doi:
10.1006/obhd.1995.1097
Davis, E.A. (2003). Prompting Middle School Science Students for Productive Reflection:
Generic and Directed Prompts. The Journal of the Learning Sciences, 12(1), 91-142. doi:
10.1207/S15327809JLS1201_4
Deal, T. E. & Kennedy, A. A. (1982). Corporate cultures: The rites and rituals of corporate
life. Reading, MA: Addison-Wesley.
Delbecq, A. L., Van De Ven, A. H., & Gustafson, D. H. (1975). Group techniques for
program planning: A guide to nominal group and Delphi processes. Glenview, IL: Scott,
Foresman.
Denson, R. W. (1981). Team Training: Literature Review and Annotated Bibliography.
Brooks Air Force Base, TX: Air Force Human Resources Laboratory.
Dewett, T. (2003). Towards an Interactionist Theory of Group-Level Feedback.
Management Research News, 26(10-11), 1-21. doi:10.1108/01409170310784041
Dickinson, T. L., & McIntyre, R. M. (1997). A conceptual framework for teamwork
measurement. In M. T. Brannick, E. Salas, & C. Prince (Eds.), Team Performance and
Measurement: Theory, Methods, and Applications (pp. 19–43). Mahwah, NJ: Lawrence
Erlbaum Associates.
Dillenbourg, P. (1999). What do you mean by collaborative learning? In P. Dillenbourg (Ed.),
Collaborative learning: Cognitive and Computational Approaches (pp. 1–19). Oxford:
Elsevier.
Ding, N., & Harskamp, E. G. (2009). Gender Difference in Students' Cognitive
Representations during Collaborative Problem-Solving in Physics. International Journal
of Science Education. Retrieved May 2nd, 2015 from:
https://www.rug.nl/research/portal/files/14562479/Chapter%205.
Erkens, G. (2005). Multiple Episode Protocol Analysis. (Version 4.10). [Software] Available
from http://edugate.fss.uu.nl/mepa/.
Fahy, P. (2002). Use of linguistic qualifiers and intensifiers in computer conference. The
American Journal of Distance Education, 16(1), 5-22. doi:
10.1207/S15389286AJDE1601_2.
Fahy, P. (2003). Indicators of support in online interaction. International Review of Research
in Open and Distance Learning, 4(1). Retrieved May 2nd, 2015 from:
http://www.irrodl.org/index.php/irrodl/article/view/129/600.
Gabelica, C., Bossche, P. V. D., Segers, M., & Gijselaers, W. (2012). Feedback, a powerful
lever in teams: A review. Educational Research Review, 7(2), 123-144.
doi:10.1016/j.edurev.2011.11.003.
Gamlem, S. M., & Munthe, E. (2014). Mapping the quality of feedback to support students'
learning in lower secondary classrooms. Cambridge Journal of Education, 44(1), 75-92.
doi: 10.1080/0305764X.2013.855171.
Gan, M. J., & Hattie, J. (2014). Prompting secondary students' use of criteria, feedback
specificity and feedback levels during an investigative task. Instructional Science, 42(6),
861-878. doi: 10.1007/s11251-014-9319-4.
Ge, X., & Land, S. M. (2003). Scaffolding students' problem-solving processes in an ill-
structured task using question prompts and peer interactions. Educational Technology
Research and Development, 51(1), 21-38. doi: 10.1007/BF02504515.
Gelmini-Hornsby, G., Ainsworth, S., & O'Malley, C. (2011). Guided reciprocal questioning
to support children's collaborative storytelling. International Journal of Computer-
Supported Collaborative Learning, 6(4), 577-600. doi: 10.1007/s11412-011-9129-5.
Graesser, A.C., Person, N.K. & Huber, J. (1993). Question asking during tutoring and in the
design of educational software. In M. Rabinowitz, ed., Cognitive foundations of
instruction, pp. 149–172. Hillsdale, NJ: Lawrence Erlbaum.
Guzzo, R. A., Jette, R. D., & Katzell, R. A. (1985). The effects of psychologically based
intervention programs on worker productivity: A meta-analysis. Personnel Psychology,
38(2), 275–291. doi:10.1111/j.1744-6570.1985.tb00547.x.
Harford, J., & MacRuairc, G. (2008). Engaging student teachers in meaningful reflective
practice. Teaching and teacher education, 24(7), 1884-1892. doi:
10.1016/j.tate.2008.02.010.
Harney, O., Hogan, M. J., & Broome, B. (2012). Collaborative learning: the effects of trust and
open and closed dynamics on consensus and efficacy. Social Psychology of Education,
15(4), 517–532. doi:10.1007/s11218-012-9202-6.
Hattie, J. A. C., & Gan, M. (2011). Instruction based on feedback. In R. Mayer & P.
Alexander (Eds.), Handbook of Research on Learning and Instruction. (pp. 249-271).
New York: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research,
77(1), 81–112. doi: 10.3102/003465430298487.
Hübner, S., Nückles, M., & Renkl, A. (2010). Writing learning journals: Instructional support
to overcome learning-strategy deficits. Learning and Instruction, 20, 18-29. doi:
10.1016/j.learninstruc.2008.12.001
Hmelo-Silver, C. E. (2002, January). Collaborative ways of knowing: Issues in facilitation.
Computer Support for Collaborative Learning: Foundations for a CSCL Community.
Paper presented at the International Conference on Computer Supported Collaborative
Learning, Boulder, Colorado.
Hogan, K. (1999). Thinking aloud together: A test of an intervention to foster students'
collaborative scientific reasoning. Journal of Research in Science Teaching, 36, 1085-
1109. doi: 10.1002/(SICI)1098-2736(199912)36:10<1085::AID-TEA3>3.0.CO;2-D.
Hogan, M.J., Harney, O. M., & Broome, B. (2014). Integrating Argument Mapping with
Systems Thinking Tools - Advancing Applied Systems Science. In A. Okada, S.
Buckingham Shum, & T. Sherborne (Eds), Knowledge Cartography: Software Tools and
Mapping Techniques (pp. 401-421). London: Springer.
Hogan, M.J., Harney, O. M., & Broome, B. (2015). Catalyzing Collaborative Learning and
Collective Action for Positive Social Change through Systems Science Education. In, R.
Wegerif, J. Kaufman, & L. Li (Eds). The Routledge Handbook of Research on Teaching
Thinking. (In Press).
Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on
behavior in organizations. Journal of Applied Psychology, 64(4), 349–371.
doi:10.1037/0021-9010.64.4.349.
Jarvenpaa, S. L., Knoll, K., & Leidner, D. (1998). Is Anybody Out There? The Antecedents
of Trust in Global Virtual Teams. Journal of Management Information Systems, 14(4), 29–
64.
Ketelaar, E., Den Brok, P., Beijaard, D., & Boshuizen, H. P. (2012). Teachers' perceptions of
the coaching role in secondary vocational education. Journal of Vocational Education &
Training, 64(3), 295-315. doi: 10.1080/13636820.2012.691534.
Kenny, D. A., Albright, L., Malloy, T. E., & Kashy, D. A. (1994). Consensus in interpersonal
perception: Acquaintance and the big five. Psychological Bulletin, 116(2), 245–258.
doi:10.1037/0033-2909.116.2.245.
Kenworthy, J. B., & Miller, N. (2001). Perceptual asymmetry in consensus estimates of
majority and minority members. Journal of Personality and Social Psychology, 80(4),
597–612. doi:10.1037/0022-3514.80.4.597.
King, A. (1990). Enhancing peer interaction and learning through guided student-generated
questioning. Educational Psychologist, 27(4), 111–126. doi:10.3102/00028312027004664.
Kirschner, P. A. (2009). Epistemology or pedagogy, that is the question. In S. Tobias, & T.M.
Duffy (Eds.), Constructivist theory applied to instruction: Success or failure? (pp. 144-
157) New York: Routledge.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance. A
historical review, a meta-analysis, and a preliminary feedback intervention theory.
Psychological Bulletin, 119(2), 254–284. doi: 10.1037/0033-2909.119.2.254.
Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and
teams. Psychological Science in the Public Interest, 7(3), 77–124. doi:10.1111/j.1529-
1006.2006.00030.x.
Krause, U. M., Stark, R., & Mandl, H. (2009). The effects of cooperative learning and
feedback on e-learning in statistics. Learning and Instruction,19(2), 158-
170.doi:10.1016/j.learninstruc.2008.03.003.
Kreijns, K., Kirschner, P. A., & Jochems, W. (2002). The sociability of Computer-Supported
Collaborative Learning environments. Educational Technology and Society, 5(1), 8–22.
Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University Press.
Kuhn, D. (2005). Education for thinking. Cambridge, Mass.: Harvard University Press.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of
organizational trust. The Academy of Management Review, 20(3), 709-734. doi:
10.5465/AMR.1995.9508080335.
McKnight, D.H., Cummings, L.L., & Chervany, N.L. (1998) Initial trust formation in new
organizational relationships. Academy of Management Review, 23(3), 513-530.
doi:10.5465/AMR.1998.926622.
Mento, A., Steel, R. P., & Karren, R. J. (1987). A meta-analytic study of the effects of goal
setting on task performance: 1966–1984. Organizational Behavior and Human Decision
Processes, 39(1), 52–83. doi:10.1016/0749-5978(87)90045-8.
Meyers, R. A., & Brashers, D. E. (1998). Argument in group decision making: Explicating a
process model and investigating the argument-outcome link. Communication
Monographs, 65(4), 261-281. doi:10.1080/03637759809376454.
Michaelsen, L. K., & Sweet, M. (2008). The essential elements of team-based learning. New
Directions for Teaching and Learning, 116, 7-27. doi: 10.1002/tl.330
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our
capacity for processing information. Psychological Review, 63(2), 81-97.
doi:10.1037/h0043158.
Mohammed, S., & Ringseis, E. (2001). Cognitive diversity and consensus in group decision
making: The role of inputs, processes, and outcomes. Organizational Behavior and
Human Decision Processes, 85(2), 310–335. doi:10.1006/obhd.2000.2943.
Muller Mirza, N., Tartas, V., Perret-Clermont, A.-N., & de Pietro, J.-F. (2007). Using
graphical tools in a phased activity for enhancing dialogical skills: An example with
digalo. International Journal of Computer-Supported Collaborative Learning, 2(2–3),
247–272. doi: 10.1007/s11412-007-9021-5.
Nadler, D. A., (1979). The effects of feedback on task group behavior: a review of the
experimental research. Organisation Behaviour and Human Performance, 23(3), 309-338.
doi:10.1016/0030-5073(79)90001-1.
Neubert, M. J. (1998). The value of feedback and goal setting over goal setting alone and
potential moderators of this effect: A meta-analysis. Human Performance, 11(4), 321–335.
doi:10.1207/s15327043hup1104_2.
Paolucci, M., Suthers, D. D., & Weiner, A. (1995, May). Belvédère: stimulating students’
critical discussion. Paper presented at The Conference on Human Factors in Computing
Systems, Denver, Colorado. doi:10.1145/223355.223461.
Pea, R.D. (2004). The social and technological dimensions of scaffolding and related
theoretical concepts for learning, education, and human activity. The Journal of the
Learning Sciences, 13(3), 423-451. doi:10.1207/s15327809jls1303_6.
Pearce, J. L., Sommer, S. M., Morris, A., & Frideger, M. (1992). A configurational approach
to interpersonal relations: Profiles of workplace social relations and task
interdependence. Graduate School of Management, University of California, Irvine.
Rittel, H., & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences,
4(2), 155-169. doi:10.1007/BF01405730.
Roberts, K., & O'Reilly, C. (1974). Measuring organizational communication. Journal of
Applied Psychology, 59(3), 321-326. doi:10.1037/h0036660
Rummel, N., & Spada, H. (2005). Learning to Collaborate: An instructional approach to
promoting collaborative problem solving in computer-mediated settings. The Journal of
the Learning Sciences, 14(2), 201–241. doi: 10.1207/s15327809jls1402_2.
Scheuer, O., Loll, F., Pinkwart, N., & McLaren, B. M. (2010). Computer-supported
argumentation: A review of the state of the art. International Journal of Computer-
Supported Collaborative Learning, 5(1), 43-102. doi:10.1007/s11412-009-9080-x.
Schoenfeld, A. H. (1985). Mathematical problem solving. New York: Academic Press.
Schwarz, B. B., & Glassner, A. (2003). The blind and the paralytic: Fostering argumentation
in social and scientific issues. In J. Andriessen, M. J. Baker, & D. D. Suthers (Eds.),
Arguing to learn: Confronting cognitions in computer-supported collaborative learning
environments (pp. 227–260). Dordrecht: Kluwer Academic.
Seibold, D. R., & Meyers, R. A. (2007). Group argument: A structuration perspective and
research program. Small Group Research, 38(3), 312-336.
doi:10.1177/1046496407301966.
Simmons, M., & Cope, D. (1993). Angle and rotation: Effects of different types of feedbacks
on the quality of response. Educational Studies in Mathematics, 24(2), 163-176.
doi:10.1007/BF01273690.
Simon, H. A. (1960). The new science of management decisions. New York: Harper & Row.
Skinner, K., & Louw, J. (2009). The feminization of psychology: Data from South Africa.
International Journal of Psychology, 44(2), 81-92. doi: 10.1080/00207590701436736
Smith, C., McLaughlin, M., & Osborne, K. (1997). Conduct controls on Usenet. Journal of
Computer-Mediated Communication, 2(4). doi: 10.1111/j.1083-6101.1997.tb00197.x
Stahl, G. (2010). Guiding group cognition in CSCL. International Journal of Computer-
Supported Collaborative Learning, 5(3), 255-258. doi:10.1007/s11412-010-9091-7.
Stahl, G. (2015). The group as paradigmatic unit of analysis: The contested relationship of
CSCL to the learning sciences. In M. Evans, M. Packer & K. Sawyer (Eds.) Reflections on
the learning sciences: Past, present, and future. Cambridge, UK: Cambridge University
Press.
Stahl, G., Koschmann, T., & Suthers, D. (2006). Computer-supported collaborative learning:
An historical perspective. In R.K. Sawyer (Ed.) Cambridge handbook of the learning
sciences (pp.409-426). Cambridge, UK: Cambridge University Press.
Stark, R., Puhl, T., & Krause, U.-M. (2009). Improving scientific argumentation skills by a
problem-based learning environment: Effects of an elaboration tool and relevance of
student characteristics. Evaluation and Research in Education, 22(1), 51-68.
doi:10.1080/09500790903082362.
Stegmann, K., Weinberger, A., & Fischer, F. (2007). Facilitating argumentative knowledge
construction with computer-supported collaboration scripts. International Journal of
Computer-Supported Collaborative Learning, 2(4), 421–447. doi:10.1007/s11412-007-
9028-y.
Stevenson, C. E., Hickendorff, M., Resing, W. C., Heiser, W. J., & de Boeck, P. A. (2013).
Explanatory item response modeling of children's change on a dynamic test of analogical
reasoning. Intelligence, 41(3), 157-168. doi:10.1016/j.intell.2013.01.003.
Strijbos, J.-W., Kirschner, P. A. & Martens, R. L. (Eds.). (2004). What we know about CSCL:
And implementing it in higher education. Boston, MA: Springer.
Suthers, D., & Hundhausen, C. (2003). An empirical study of the effects of representational
guidance on collaborative learning. Journal of the Learning Sciences, 12(2), 183-219.
doi:10.1207/S15327809JLS1202_2.
Tannen, D. (1998). The argument culture: Moving from debate to dialogue. New York:
Random House Trade.
Tjosvold, D. (2008). The conflict-positive organization: It depends upon us. Journal of
Organizational Behavior, 29, 19–28.
Van Bruggen, J.M., Boshuizen, H. P., & Kirschner, P. A. (2003). A cognitive framework for
cooperative problem solving with argument visualization. In P. A. Kirschner, S. J.
Buckingham Shum, & C. S. Carr (Eds.), Visualizing argumentation: Software tools for
collaborative and educational sense-making. London: Springer.
Van den Bossche, P., Gijselaers, W., Segers, M., Woltjer, G., & Kirschner, P. (2011). Team
learning: building shared mental models. Instructional Science, 39(3), 283-301.
Veerman, A.L., Andriessen, J.E.B. & Kanselaar, G. (2000). Learning through synchronous
electronic discussion. Computers & Education, 34(2–3), 1–22. doi:10.1016/S0360-
1315(99)00050-0.
Warfield, J. N. (2006). An introduction to systems science. Singapore: World Scientific.
Warfield, J., & Cardenas, R. (1994). A handbook of interactive management. Ames: Iowa
State University Press.
Webb, N.M. (1995). Group collaboration in assessment: multiple objectives, processes and
outcomes. Educational Evaluation and Policy Analysis, 17(2), 239–261.
doi:10.3102/01623737017002239.
Wen, Y., Looi, C. K., & Chen, W. (2015). Appropriation of a representational tool in a
second-language classroom. International Journal of Computer-Supported Collaborative
Learning, 10(1), 77-108. doi: 10.1007/s11412-015-9208-0.
Woolley, A. W. (2009). Means vs. ends: Implications of process and outcome focus for team
adaptation and performance. Organization Science, 20(3), 500-515.