RESEARCH ARTICLE

Examining the interdependence in the growth of students' language and argument competencies in replicative and generative learning environments

Ali Cikmaz¹ | Gavin Fulmer¹ | Fatma Yaman² | Brian Hand¹

¹ University of Iowa, Iowa City, Iowa, USA
² Yozgat Bozok University, Yozgat, Turkey

Correspondence
Ali Cikmaz, University of Iowa, Iowa City, IA.
Email: ali-cikmaz@uiowa.edu

Received: 30 September 2019 | Revised: 11 May 2021 | Accepted: 15 May 2021
DOI: 10.1002/tea.21715
Abstract
Language and argument are epistemic tools that learners can use to help them generate and validate knowledge for themselves, as emphasized in NGSS and previous NRC reports. Not all learning environments elicit or support the use of these epistemic tools equally, thus affecting how students grow in competence in relation to their use. The present study examined growth in students' competencies in language and argument during one semester, with a comparison of two learning environments (replicative versus generative) using students' lab reports. It also examined interdependence between these growth patterns. The participants (n = 30) were simultaneously enrolled in two required introductory-level science lab courses, Chemistry-I Lab (generative) and Physics-I Lab (replicative), taken during the first fall semester at the university. The students' written reports for each weekly lab (n = 490) were collected at the end of the semester and scored to quantify students' quality of argument (as holistic argument) and language use (as multimodal representation). This growth was modeled using linear mixed-effects regression for each competence and each environment. Quadratic modeling was also used to show whether the trend of the growth demonstrated constant increase or a leveling off. Findings provide evidence that students showed higher growth in language and argument competencies in their lab reports for the generative learning environment than in their lab reports for the replicative learning environment. The findings also suggest that there is marked interdependence between the growth patterns of the argument and language competencies. Implications are discussed for learning environments to promote language and argument development. The interdependence of argument and language growth highlights that encouraging language use in a generative manner can be a promising direction for improving argumentation and, by extension, science learning in science classrooms.
KEYWORDS
argumentation, epistemic tools, language, linear mixed-effects regression, longitudinal growth modeling
1 | INTRODUCTION
The release of new science standards such as the Next Generation Science Standards (NGSS;
NGSS Lead States, 2013), and associated curricula, emphasizes that students need to develop
and utilize epistemic tools as they move through their schooling. For example, using Science
and Engineering Practices is expected to help students participate in knowledge generation
processes that result in understanding of Crosscutting Concepts and Disciplinary Core Ideas.
Underpinning this movement are reports such as the NRC's (2000) How People Learn and
(2007) Taking Science to School that push the field away from information transfer practices
toward the adoption of knowledge generation approaches, by applying the epistemic practices
of the discipline (NRC, 2012). This shift highlights the need for learners to develop epistemic
tools as a necessary component of the knowledge generation process, particularly for helping
individuals take more ownership for generating their own knowledge (Hofer, 2016). Such shifts
in K-12 education standards may pave the way for successful undergraduate science education
(Bowman Jr & Govett, 2015) as faculty members incorporate epistemic practices of science into
content-driven courses consistent with the Framework (Padilla & Cooper, 2012). Promoting the
development and use of these epistemic tools requires a learning environment that pays more
attention to the process of knowledge generation rather than focusing only on the replication of
science content knowledge (Ford & Forman, 2006).
We focus on two epistemic tools, argument and language, because they are essential in
doing and learning about science. Argumentation is a pivotal practice of science (NRC, 2012)
and an epistemic tool used to generate scientific knowledge (Duschl, 2008; Sandoval &
Millwood, 2007). Engaging with arguments as a process can only be done through language,
using oral or written interactions to refine, reform, and regenerate knowledge (Yore
et al., 2003). This importance is also recognized at the undergraduate level, with a growing
awareness of the importance of writing, writing-to-learn, and of language more broadly in
disciplinary-specific undergraduate science education (e.g., Archila et al., 2018; Grzyb
et al., 2018; Prichard, 2005; Reynolds et al., 2012). This has led authors to examine the differ-
ences in quality of argumentation among science majors and nonscience majors (Lin, 2014), the
differential effects of verification or inquiry-oriented labs on student argumentation quality
(Grooms et al., 2014), and the intersection of writing instruction not only on competence in
writing itself but also on scientific argumentation (e.g., Birol et al., 2013; Grzyb et al., 2018).
Although argumentation is widely viewed as an essential epistemic tool, the epistemic role
of language has not received the same attention. The NRC (2012) framework noted that "every
science or engineering lesson is in part a language lesson" (p. 76), which is centered on the
importance of reading text and producing a genre of text. This communicative role of language
is aligned with the derived sense of language (Norris & Phillips, 2003) and is associated with the
product of learning rather than the process of knowledge generation. However, Norris and
Phillips (2003) have argued for giving more attention to the fundamental sense of language in
science, wherein "reading and writing are constitutive parts of science" (p. 226) because science
cannot be done or advanced without language. This extends not only to text but to all forms of
language used for science (e.g., diagram, graph, mathematical, drawing, and so forth;
Kress, 2005; Lemke, 1998). This fundamental sense of science literacy is aligned to the epistemic
role of language in that it focuses on the process of using language to generate knowledge, not
only to communicate knowledge. It is therefore necessary for researchers to take on this funda-
mental, epistemic role of language in studying science learning environments (Coirier
et al., 1999; Gee, 2004; Norris & Phillips, 2003; Osborne, 2002).
Argument and language are also interdependent, as students must use language to engage
in argumentation and through argumentation can improve their understanding of language
(Tang & Moje, 2010). However, while the development of competencies for argument
(Asterhan & Schwarz, 2016) and language (Prain & Hand, 2016) have been studied as outcomes
within generative learning environments, there remains a gap in the literature on the
interdependent nature of these competencies. We argue that, by comparing the effects of differ-
ent learning environments on students' argument and language, we can provide insight into
whether argument and language competencies grow in similar patterns over a semester.
To understand the ways in which learners use epistemic tools, we can compare how stu-
dents engage these practices across learning environments that differ in emphasis; in this case
between a knowledge generation and replication environment. Comparing students' engage-
ment in epistemic practices across these learning environments allows us to understand not
only the possible effects on content knowledge, but also the proposed mechanism of epistemic
practices as emphasized in NGSS and in multiple previous reports. Studies on active learning
approaches at the undergraduate level that focus on knowledge generation show that these
environments lead to better student achievement (Freeman et al., 2014; Prince, 2004) and have
similar pedagogical structures that engage instructional methods that "require students to take
over control of the learning and thus actively engage in the learning process" (Gabelica &
Fiore, 2013, p. 462). Despite the insights of these studies, prior work has only focused on the
macroscopic level of characterizing the classroom as generative or replicative, without
addressing the critical role of students' use of particular epistemic tools in these different learn-
ing environments. We address this gap directly by following one group of students as they
moved between a knowledge generation environment and a knowledge replication environ-
ment. We examine if the students use and develop the epistemic tools of argumentation and
language as part of their learning.
In this study, we first compare the effects of two different learning environments, one repli-
cative (physics) and one generative (chemistry), on the growth of argument and language com-
petence over a semester. We do so by examining the students' competence in presenting holistic
argument and using multimodal representations as exhibited in laboratory reports prepared by
a cohort of students who take two different laboratory courses concurrently. Although examin-
ing only laboratory reports focuses on the product rather than the process of using argument
and language, examining these reports over time provides an indication of the students' growing
competence (we address this further in Section 5). Second, we investigate whether there is an
interdependence between argument and language growth in a specific learning environment.
Our research questions are:
1. How do (a) argument and (b) language growth occur in two different learning environments
(replicative vs. generative) across a semester as demonstrated in lab reports prepared by a
cohort of students who move between two different lab courses?
2. Is there an interdependence between argument growth and language growth in
(a) generative and (b) replicative learning environments?
2 | BACKGROUND
2.1 | Theoretical framework
This study draws on the theoretical foundation of Norris and Phillips (2003), which positions
language as being fundamental to scientific literacy. Science cannot happen without language,
regardless of what type of learning environment is being utilized by the teacher. Given this fun-
damental sense of language, it follows naturally that argument is dependent on language.
Despite the recognition that "the general epistemic role of language is pushed into the back-
ground" (Gee, 2004, p. 13) in much work on science teaching and learning, its importance
looms large in studying how students engage in argumentation: argument is dependent upon
the language that students use to think, observe, record, communicate, and so on (Tang &
Moje, 2010). Language use for science learning is built through engaging in science argumenta-
tion to ask questions, design investigations, generate evidence from data, and to make claims.
Students' participation in any learning environment that uses argument and language will, by
necessity, require them to utilize both to be successful. As such, students in these environments
will build both their language and argument competencies while participating in the classroom
environment. In the following sections we present further background on argument and lan-
guage, and how they are visible within learning environments, before moving on to the
Methods and Findings.
2.2 | Argumentation: The process of generating an argument
Argumentation, the process of generating an argument, promotes students' understanding of
how knowledge is constructed in science over and above its promotion of science content
knowledge (NRC, 2012; Osborne, 2010), and is a foundational epistemic practice in science
learning (Duschl et al., 2007). Argument is integral to effective learning environments
(Duschl & Osborne, 2002; Jiménez-Aleixandre & Erduran, 2007; NRC, 2012), and helps learners
to internalize argumentative practices as a social norm of disciplinary science (Nussbaum &
Asterhan, 2016). This is more apparent when argument is not treated as the product of an
inquiry but used in immersive environments as an enmeshed component of inquiry
(Cavagnetto, 2010, p. 352), and where argument is viewed as a nonlinear cycle of construction
and critique (Ford, 2008, 2012). Research on long-term interventions supports this view in
showing that knowledge of, and practice in, argumentation can be formed if argumentative
skills are promoted by the learning environment (Crowell & Kuhn, 2014; Kuhn et al., 2016).
Research on argumentation processes and competence in undergraduate and preservice
teacher education finds similar patterns. Undergraduate science majors outperform nonmajors
in uncovering evidence statements in written arguments (Lin, 2014), but may still construct rel-
atively weaker scientific arguments that are not explicitly supported through causal mecha-
nisms especially in inquiries that focus on mathematical derivations or calculations rather than
on explaining observed phenomena (Moon et al., 2016). Students can improve in their argu-
mentation skills through supports, such as encouraging consideration of competing theories
(Acar & Patton, 2012), offering online supports for argument formation (Fan et al., 2020), or a
cycle of argument-driven inquiry (Grooms et al., 2014). Sadler (2006) noted how argumentation
was perceived positively by the preservice teachers in a U.S. context, and how exposure to
explicit instruction on argument structures (e.g., claims, data, warrants) and the role of class-
room discourse was generally effective in improving the quality of arguments.
There are three distinct argument approaches proposed in the science education literature.
First, Osborne et al. (i.e., Erduran et al., 2004; Osborne, 2010; Osborne et al., 2016) focused on
Toulmin's Argumentation Pattern (TAP). Second, McNeill et al. (2006) formed a modified ver-
sion of TAP to propose the Claim-Evidence-Reasoning structure that has been adopted by
other researchers who examined argumentation in science classrooms (e.g., Berland &
Reiser, 2009; Sampson & Clark, 2009). Third, Hand et al. (e.g., Choi et al., 2014; Keys
et al., 1999) adapted question-claim-evidence based on their work on the Science Writing
Heuristic (SWH). We adopt Walton's approach to argument as persuasion, which has been
argued by Osborne et al. (2016) to improve understanding of classroom argumentation after
noting that "Although Toulmin's (1958) model of practical argument plays a central role in our
learning progression for argumentation, it is not sufficient" (Osborne et al., 2016, p. 826).
Walton (1996, 2016) asserts that the main function of an argument is persuasion, and out-
lines three essential characteristics of an argument: (1) unsettledness, (2) inferring conclusions
from premises, and (3) a sequence of reasoning (Walton, 1996). In practice, this means an argu-
ment develops to help resolve some unsettled issue determined by the context, and the issue
may be settled by employing a sequence of reasoning to offer conclusions based on a set of pre-
mises. For our implementation of argument based on Walton's (1996) structures, we begin with
some unsettledness (usually in the form of a question), the inference of conclusions from pre-
mises (usually as a claim), and a sequence of reasoning (through the use of supporting evidence
to bolster a claim and resolve a question). This formulation emphasizes the term evidence as the
use of reasoning about data to prepare it for use in an argument, because data becomes evi-
dence when scientists use reasoning to interpret and transform the data with respect to their
intended claim to resolve the unsettledness (Sampson et al., 2013).
In the context of science learning, argument-based inquiry starts with a question about an
unsettled issue (Haack, 2004; Marrero, 2016) through problematized content (Engle &
Conant, 2002) and, based on predictions from prior knowledge, a design is developed to collect
data. Importantly, the quality of design frames the quality of data collection, claims, and evi-
dence. The quality of each component of question, design, data, claim, and evidence, and the
strength of the connections among them, indicates the quality of argument (Haack, 2004).
Argumentation participants engage in a cycle among the components, negotiating each one to
refine the overall quality of the argument through a continuous cycle of construction and cri-
tique (Ford, 2008, 2012). Any unexpected, unintended, or irrelevant conclusions can restart the
negotiation to achieve resolution, where "resolution" is possible when members persuade
others publicly, or themselves privately, through a clear connection and high cohesion among
the question, claim, and evidence structure (Kuhn, 1993; Yore et al., 2003).
Within the context of argumentative learning environments, we can judge the quality of a
written argument by examining the argument holistically based on how the author has struc-
tured the internal relevance and cohesion between components of the argument (Choi, 2008;
Choi et al., 2013), regardless of which argument structure is adopted such as Question-Design-
Claim-Evidence or Aim-Justification-Conclusion. Any improvements in the quality of argument
would be observable in improvements in the internal cohesion among argument components
(Rapanta et al., 2013), such as consistency between a Claim and the proposed Evidence or a Jus-
tification and the resulting Conclusion.
2.3 | Language: The fundamental sense of science learning
Language is an integral component of science that serves two roles and takes on various forms,
or modes. Language has roles as a product but also as a process of knowledge generation. Lan-
guage is clearly a product for storing, reporting, and communicating scientific knowledge as an
outcome of some investigation (Yore et al., 2003). As a process, drawing on the fundamental
sense of science literacy (Norris & Phillips, 2003), using language is what allows ideas to be
formed and manipulated inside one's head, and critically evaluated and restructured during the
act of speaking, listening, writing, and so on. An overemphasis on the product role of language
as an epistemic tool (Gee, 2004) can lead to overly-structured approaches (Cavagnetto, 2010)
where students are expected to learn the language of science before using the language of sci-
ence (e.g., Halliday & Martin, 2003), resulting in replication and memorization (Prain &
Hand, 2016). By contrast, learning about language in a generative environment occurs by using
the language in the way the learner lives the language (Ardasheva et al., 2015), including draw-
ing on more familiar and everyday language when building up scientific ideas (Wellington &
Osborne, 2001).
Scientific concepts are "multimodal semiotic hybrids in that they are simultaneously and
essentially verbal, mathematical, visual-graphical, and actional-operational" (Lemke, 1990,
p. 87). Learning science involves utilizing language as text, drawings, figures, graphs, numerical
data, spoken language, and so on. Using various representations and making connections and
translations among them promotes knowledge generation (Klein, 2001; Lemke, 1998; Waldrip
et al., 2010). Speaking, listening, writing, drawing, and other modes of representation have dif-
ferent cognitive functions that are complementary (Rivard & Straw, 2000) and more effective
when integrated (Mayer, 2009). Using only one mode is insufficient for sustained science learn-
ing (Von Aufschnaiter et al., 2008; Yore & Treagust, 2006), thus underscoring the need to incor-
porate and utilize multiple modes within any learning environment to maximize science
learning (Chen et al., 2016). As with argumentation, multimodal representation (MMR)
competence, the competence in using various modes of representation to express one's ideas,
is developed within an environment that involves cultural practices of representations
(Disessa, 2004). Allowing students to use everyday language and promoting transitions from
everyday to more canonical language through the learning experience supports learners in gen-
erating their own knowledge (Lemke, 1990; Tang, 2015; Wellington & Osborne, 2001). The
importance of utilizing language this way is that the scientific knowledge generated by students
is their own knowledge, and not some form of knowledge that has been transferred as a product
to be replicated.
Language's forms also intersect with the process role: the act of writing, speaking, drawing,
and so forth contributes to learning (e.g., Galbraith, 2009; Klein, 1999; Klein, 2006; Tynjälä
et al., 2001; Yore et al., 2003). This occurs because the production of language, especially (but
not limited to) writing, is "knowledge constituting" when students have to conduct a constant
interaction between their current rhetorical goal and the old ideas they have in memory
(Galbraith, 2009). So, learning environments that promote the epistemic role of writing allow
students not only to improve in disciplinary forms of engagement but also gain appreciation of
the epistemic power of writing (Klein, Boscolo, Gelati, & Kirkpatrick, 2014; Klein, Boscolo,
Kirkpatrick, & Gelati, 2014), and environments that support scientific speaking lead to stronger
arguments, deeper scientific knowledge, and improved scientific writing (Curto & Bayer, 2005).
Research on language use and competence in undergraduate science education emphasizes
the effectiveness of writing to learn and of integrating multimodality and discourse. For exam-
ple, Reynolds et al. (2012), in a review of undergraduate writing-to-learn studies, found an
across-the-board positive effect that seemed to be heightened with two elements of writing to
learn: reflection by the students on the nature of scientific knowledge, and emphasizing the
development of a reasoned argument. How students approach the writing process also plays an
important role in the quality of their language use. For example, Verkade and Lim (2016) found
that students who favored deep approaches scored better on their writing about an original
source article, which is positively associated with students' prior experience with science writing
and their attitudes toward science (Taylor & Drury, 2005). Language use and its role in argu-
mentation are also clearly intersecting, as engaging in rich language practices for argumenta-
tion has positive effects on undergraduates' content understanding (Grooms et al., 2014) and in
closing gender gaps in attitudes toward science (Walker et al., 2012).
Within the context of argumentative learning environments, we can begin to judge the com-
petence of language use when students are required to consider the presence of an audience
(whether contemporaneously when talking or mentally when writing; del Longo &
Cisotto, 2014); the appropriateness of language for the audience; the selection and connections
of representations that would sway the audience; and the possible objections and counterclaims
the audience could raise. Moreover, the production of written argument necessitates multi-
dimensional cohesion within and between the components of argument (Coirier et al., 1999)
and forms of language for transitions, translation between representations, and flow
(Klein, 2001). Therefore, any improvements concerning the quality of language would be
exhibited in the cohesion among representations, the use and flow of language in the prepara-
tion of an argument, and how the student considers the audience for the argument. Incorporat-
ing these elements has been labeled as competence with MMRs (Disessa, 2004;
McDermott, 2009), and is associated with higher quality knowledge generation. For the remain-
der of this article, we use language competence and MMR competence interchangeably for two
reasons: (1) because language includes all forms of representations (Disessa, 2004) that are
important for communicating and generating science knowledge through using semantic
systems, not just text (Lemke, 1990); and (2) because our operationalization of MMR
competence, described in more detail in Methods, incorporates not only the presence of mul-
tiple modes but also the embeddedness, flow, cohesion, and audience associated with the
writing task.
2.4 | Learning environment and competence development
Despite the substantial research on argument and language as separate entities, there has been
little attention paid to examining how argumentation and language are interconnected as com-
ponents of the knowledge generation process in immersive learning environments. Haas and
Flower (1988) reported how a learner may know scientific vocabulary, recall content, and
identify and locate information, yet tend to paraphrase, summarize, and retell when expected
to analyze, criticize, and interpret in replication-oriented settings (as cited in
Norris & Phillips, 2003), without demonstrating higher cognition. By contrast, immersive
learning environments show promise for developing argument and language competence
(Cavagnetto, 2010) because argument (Asterhan & Schwarz, 2016) and language (as MMR;
Disessa, 2004) competencies can be gradually developed when promoted and effectively
practiced.
In research, argument competence (e.g., Choi, 2008; Rapanta et al., 2013; Sandoval, 2014)
and MMR competence (e.g., Disessa, 2004; McDermott, 2009; Neal, 2017) have generally been
examined separately, and there are few studies that consider both simultaneously (Hand &
Choi, 2010; Yaman, 2020). The studies that examine both competencies together generally focus
on only one learning environment. Although argument and MMR competencies have their
own distinct functions, Tang and Moje (2010) argue that these competencies are dependent
on each other. Moreover, Cavagnetto (2010) and Norton-Meier (2008) imply there is a depen-
dency between the use of language and argument by stating that argument is a form of lan-
guage. We argue that there is a need for an empirical study to examine the growth patterns of
these competencies within and between learning environments. This study addresses this
important need, by examining the growth of and interdependency between argument and
MMR competency across different environments as demonstrated in students' lab reports.
3 | METHODS
To examine how students in two distinct learning environments differ in argument and repre-
sentational competence growth over one semester, we apply a longitudinal case study design to
explore effects of different learning environments as potential causes but where, much like any
causal-comparative design, all conditions cannot be controlled (Brewer & Kuhn, 2019;
Fulmer, 2018). Data sources were students' lab writing samples. We examined students' labora-
tory report writings to uncover their language and argument quality generated as an outcome
of participating in the two different learning environments. These laboratory writings serve as a
good data source for uncovering how students use language, utilize different modes of representation
(MMR), and achieve connection, translation, and transition between text and nontext modes.
To minimize the effect of uncontrolled differences among preexisting groups of participants,
a single cohort of students was examined in two different learning environments: replicative
(structured) versus generative (immersive). Figure 1 provides a visual representation of how the
single-cohort students took two contemporaneous laboratory courses over the same semester.
We adopted this approach to restrict additional confounding effects of comparing different stu-
dents in different sections across these two different environments.
3.1 | Participants
This study was conducted with a cohort of college freshmen (n = 30) enrolled in the Science
Teacher Education program at a public university in the Central Anatolia Region of Turkey.
The program was ranked 56th among 68 science teacher education programs nationwide in
Turkey.

FIGURE 1 Data collection schedule, study flow, and structure of lab reports

There were 25 females and 5 males in the cohort, with an age range of 17-19 years old.
The students, who were from middle and low socioeconomic status, came from several regions
and cities of the country. All participants had graduated from high school with a major in sci-
ence, where their previous learning experiences had mostly been lecture-based and teacher-
centered.
3.2 | Procedures and settings
The participants were simultaneously enrolled in two required introductory-level science lab
courses, Chemistry-I Lab and Physics-I Lab, taken during the first fall semester at the univer-
sity. The participants had little experience of lab implementation in secondary school because
much of secondary science instruction in the local context is theoretical rather than practical,
so both physics and chemistry lab learning environments were novel for the participants. The
teaching experience of the physics and chemistry professors were 5 and 8 years, respectively.
The two laboratory environments are summarized in Table 1 based on interviews conducted
with each professor. Based on Table 1, we see four important distinctions in the two learning
environments. First, the environments differed in the direction of communications among par-
ticipants. In the physics laboratory, it was mostly teacher-to-student talk, with the instructor
serving as an authority figure for managing discussions and sharing knowledge. In the chemis-
try laboratory, it was mostly student-to-student talk, with the instructor participating in discus-
sions as a knowledgeable other (Vygotsky, 1962) but not managing the discussion. This is
consistent with an immersive classroom environment (Cavagnetto, 2010). Second, the source of
laboratory procedures differed. The chemistry instructor helped students negotiate and plan
procedures to address their questions, that is, to generate their designs and procedures to
address the questions they had posed. The physics instructor provided students with a step-by-
step plan for applying prescribed procedures following a verification, or "cookbook," mode that
may inhibit students' sense of autonomy and engagement (Brownell et al., 2012; Parreira &
Yao, 2018).
Third, the environments differed in the audience for the written laboratory report. Writ-
ing naturally requires an audience for the writing, even if one is taking notes for oneself. For
many classroom assignments, the audience is generally assumed to be the instructor, but offer-
ing alternative audiences can affect how students think about and communicate in their writ-
ing (Magnifico, 2010). We considered this change of audience from the instructor to a
layperson as one of the differences between learning environments, and we examined how
appropriate the students' writing was for a layperson. Fourth, we noted that the nature of argu-
ment was different in the two environments. The chemistry laboratory explicitly encouraged
students to engage in argumentation both interpersonally and in their writings by providing
a suggested structure of question-claims-evidence. The physics laboratory, on the other hand,
did not explicitly engage students in argument interpersonally but did require students to
report their laboratory writings using a structure to connect a (provided) hypothesis to a fore-
gone conclusion, which is consistent with the argument from evidence to hypothesis form of
scientific argumentation that is common in verification laboratory settings (Ozdem
et al., 2013; Walton, 1996). Therefore, while both environments involve some form of argu-
mentative writing, the argument structures are different. These differences among the set-
tings support the capacity to compare the quality of students' use of argument and language
across the two environments.
TABLE 1 Comparison of laboratory environments

Approach
  Physics lab (replicative): No explicit approach specified. Description by the instructor indicates a structured, nondialogic, knowledge-transmission-based, and cookbook style.
  Chemistry lab (generative): The SWH approach, practiced by the instructor for 2 years. Description by the instructor indicates a dialogical, knowledge-construction/generative, nonstructured approach whose implementation depends heavily on the prior knowledge and goals of the enrolled students.

Student groups
  Physics lab: Students formed initial groups for this lab class and stayed with the same group through the end of the semester.
  Chemistry lab: Students formed initial groups for this lab class and stayed with the same group through the end of the semester.

Student preparation
  Physics lab: Students must
  - study for and pass a recollection-based quiz to attend a lab session, or
  - revise for the quiz and attend a make-up lab session.
  Chemistry lab: Students must
  - prepare and submit an individual concept map on the topic,
  - prepare individual beginning questions on the topic (based on any resource, e.g., lecture notes, lab manual, previous lab results, or prior personal experiences),
  - outline laboratory safety, and
  - share individual questions with groupmates and agree on group questions to investigate.

Materials and instructor role
  Physics lab: Lab manual, created by the instructor, providing
  - step-by-step directions,
  - a materials list, and
  - data tables to be filled in by students.
  Instructor provided
  - specified lab materials placed at stations,
  - explanations of laboratory steps as needed, and
  - a summary of expected findings at the conclusion of the lab session.
  Chemistry lab: Lab manual, created by the instructor, providing
  - space for adding pre- and post-concept maps,
  - general conceptual questions (not required to answer),
  - prompts for beginning questions (individual, group, and whole class), lab safety, design, data, claims and evidence, and reading and reflection components, and
  - suggestions for experiments, materials, and chemicals.
  Instructor provided
  - lab materials as requested by students,
  - encouragement to state knowledge claims and provide justifications,
  - encouragement to challenge other students' claims and justifications, and
  - no feedback on whether ideas were correct or wrong.

Student role and lab environment
  Physics lab: Students worked with groupmates to follow procedures and take notes for the lab reports, usually with limited dialogue. After the students completed each lab activity, the professor explained topics and what had to be done and found in the activities. If the students had questions, they could ask. After doing so, the lab sessions ended. The lab activity rarely exceeded the lab session period; most students finished their lab activities early. Teacher-to-student talk was common. Students were not encouraged to talk more.
  Chemistry lab: Students wrote down their group questions, then negotiated whole-class question(s) that would be testable and feasible. Students discussed and decided on a research design suitable for answering the class's question(s). Students worked in groups to conduct an experiment: collecting and interpreting data, offering claim(s) and supporting evidence in small-group discussion/dialogue. Each group wrote down their findings, claims, and evidence on a whiteboard to share and discuss with other groups in whole-class discussion/dialogue. Sometimes the conversations extended beyond the lab session period. Teacher-to-student and student-to-student talk were prevalent, and students were regularly encouraged to talk more.

Student reports and assessments
  Physics lab: Lab report. After each lab, students prepared lab reports
  - consisting of aim (hypothesis), theoretical knowledge, data and calculations, questions and answers, and conclusion and comments;
  - without page limits, a required format or template, or any explicit guidance on using multiple modes of representation; and
  - with no explicit audience for the report proposed to students.(a)
  Exams. Students took one midterm exam and one final exam.
  Chemistry lab: Lab report. After each lab, students prepared lab reports
  - consisting of initial concept map, beginning question(s), design, data and calculation, claim, evidence, reflection (how my ideas changed), question and answers, and final concept map;
  - without page limits, a required format, or any explicit guidance on using multiple modes of representation; and
  - when asked, the instructor suggested students write for someone who does not know the topic, offering a specific audience that is not the instructor herself.
  Exams. Students took one midterm exam and one final exam.

Abbreviation: SWH, Science Writing Heuristic.
(a) In such a case, students are likely to presume the audience is the instructor himself (Magnifico, 2010).
3.3 | Data collection

Students' lab reports were collected as a product of each learning environment to analyze argument and representation competencies and compare the two learning environments. As Figure 1 shows, the chemistry lab course had 10 weeks of lab activity while the physics lab course had 7 weeks of lab activity during the same semester. The students produced the lab reports for the respective courses as a course requirement; they were not instructed to alter their reports for this research study. The physics lab report structure was a fairly traditional format developed by the professor. The chemistry lab report was adapted from the SWH approach
by the professor, with inclusion of concept maps. The students submitted their lab reports to
the instructor as part of course assignments, then anonymized reports were prepared and sub-
mitted to the first author of the present study. For each student, the authors sought to collect a
total of 17 lab reports including 10 chemistry and 7 physics (see Figure 1). Because of missing
data, in total 490 lab reports, instead of 510, were collected (13 missing for chemistry [4.3%];
7 missing for physics [3.3%]). Although no page minimum or maximum was set for either
course, there were large differences between the two courses in the number of pages of the lab
reports: on average, eight pages for the chemistry lab and three for the physics lab.
3.4 | Measures
Each writing sample was scored separately for argument and language. Two scoring rubrics
were used (Table 2): for argument, the Holistic Argument Framework (Choi, 2008); and for lan-
guage, the MMR Rubric (McDermott, 2009). Translations of sample lab reports, with descrip-
tions of the scoring, are provided in Electronic Supplementary Materials.
The Holistic Argument Framework (Table 2) is a holistic rubric that incorporates elements
of argument strength and coherence regardless of which argument structure has been adopted
(if any), making it suitable for scoring argumentative reports across different learning environ-
ments (Choi et al., 2013) and allowing us to study the interdependence with language competence
(Tang & Moje, 2010). A single holistic score is determined for each report based on an overall
judgment of the quality of the argument in the report. Each report was scored on a 10-point
scale.
The MMR rubric is an analytical rubric initially developed by McDermott (2009) to
assess Embeddedness and Appropriateness for the Audience in students' reports, and was
modified for this study by adding Cohesion and Flow categories. This expands the scope of
the rubric to include these aspects of MMR use in the overall language, and addresses the
notion in MMR of the presence but also the integration of modes (Prain & Waldrip, 2008;
Tang, 2015) by addressing how the report presents various modes in relation to each other
and the extent to which they are incorporated to support a text. Embeddedness addresses
how nonverbal modes are close to the verbal text and whether nonverbal modes are men-
tioned/explained in the verbal text. Cohesion denotes the relation and connection between
all modes and the transition between verbal texts and nonverbal modes. Flow represents the
readability and clarity of the overall text regardless of the nonverbal modes. Appropriate-
ness for Audience addresses how the written text is suitable to a reader who does not know
the topic, that is, a layperson.¹ As an analytical rubric, each category is scored and then a
total MMR value is calculated by summing the scores of the four categories, then rescaling
to a maximum of 10 points.
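To make the rescaling concrete, here is a minimal sketch in R (the software used for the analyses below). The function name is ours, and the raw maximum of 5 + 5 + 5 + 3 = 18 points and the proportional rescaling are our reading of the rubric in Table 2, not code from the study:

    # Total MMR score: sum the four category scores (raw maximum
    # 5 + 5 + 5 + 3 = 18 from Table 2), then rescale to a 10-point maximum.
    # Hypothetical helper; the exact rescaling formula is an assumption.
    mmr_total <- function(embeddedness, cohesion, flow, audience) {
      (embeddedness + cohesion + flow + audience) / 18 * 10
    }
    mmr_total(4, 3, 4, 2)  # (13 / 18) * 10 = 7.22 on the 10-point scale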
TABLE 2 Argument and language scoring rubrics

Scoring matrix for the quality of the holistic argument (chemistry)

Score 2 (Trivial argument): No testable questions, invalid claims, and unreliable evidence. No connection between QCE. No reflection.
Score 4 (Weak argument): Testable but trivial questions, invalid claims, and unreliable evidence. Weak QCE connection: claim may not address the question completely, or evidence may not link to the claim. May not have reflection.
Score 6 (Moderate argument): Testable and appropriate questions, adequate claims, appropriate evidence and reflection. Moderate QCE connection: claims address part of a question, or evidence only partially supports claims.
Score 8 (Complete argument): Significant questions, valid claims, strong evidence and meaningful reflection. Strong QCE connection: claim addresses the question directly; evidence is specific to the claim.
Score 10 (Powerful and enriched argument): Essential questions, sound claims, strong evidence, meaningful reflection, and easy-to-follow reasoning. Rich QCE connection: claim addresses the question and answers it fully; evidence is specific to the claim and links are made clearly. Easy-to-follow argument components.

Scoring matrix for the quality of the holistic argument (physics)

Score 2 (Trivial argument): Nontestable aim and unclear conclusion. No transition and connection from aim to conclusion (A-C).
Score 4 (Weak argument): Testable but trivial aim and unclear conclusion. Weak A-C transition and connection: conclusion may have little explicit link to the aim, or lack specifics on how the aim was achieved.
Score 6 (Moderate argument): Testable aim and appropriate conclusion. Moderate A-C transition and connection: conclusion mentions the aim and summarizes how findings address the aim.
Score 8 (Complete argument): Significant and testable aim and clear conclusion. Strong A-C transition and connection: conclusion refers to the aim and shows how the findings address the aim.
Score 10 (Powerful and enriched argument): Significant and testable aim, sound conclusion, and meaningful reflection. Enriched A-C transition and connection: conclusion refers to the aim and provides clear discussion of how the aim was addressed.

Scoring matrix for the quality of language use

Representation (MMR) embeddedness
1: Mode is just next to the text. No reference or explanation.
2: Mode is only referenced in the text, but separate from the text, and no explanation about the modes.
3: Mode is referenced in the text and placed close to make a connection, but no explanation about the modes.
4: Mode is referenced and explained in the text but separate from the text.
5: Mode is referenced and explained in the text, and very close to each other (easy to make a connection).

Representation (MMR) cohesiveness
1: No apparent cohesion: intermode transition is sharp.
2: Weak cohesion: intermode transition is salient.
3: Moderate cohesion: intermode transition is salient but appropriate.
4: Complete cohesion: intermode transition is easy.
5: Strong cohesion: intermode transition is smooth and easy.

General flow of the lab report
1: No flow from one component to another. Difficult to follow and read.
2: Uncertain flow from one component to another: reader can tell how components are related but it may not be expressed. Not easy to follow and read.
3: Certain flow from one component to another: text or wording provides linkages connecting components. Easy to follow and read.
4: Flows smoothly from one component to another: clear connections as you read components. Easy and welcoming to follow and read.
5: Flows perfectly from one component to another: can easily tell the relationship of one component in the overall report. Inviting and catching to follow and read.

Appropriateness level for audience (who does not know the topic)
0 (None): Used jargon without explanation. Language of the report is easy to understand for the instructor; everything in the report is strongly context-dependent.
1 (Low): Used jargon with limited explanations. Language of the report is easy to understand for one who has a science background.
2 (Medium): Used jargon and some explanations. Language of the report is easy to understand for one who has some science background.
3 (High): Used jargon and their explanations are included. Language of the report is easy to understand for everybody.

Abbreviations: MMR, multimodal representation; QCE, question-claim-evidence.
All writing samples were scored by an internal rater, the first author. Interrater reliability for all three rubrics together was checked by randomly selecting 10% of writing samples in each of the chemistry and physics reports for an external rater to code, and then correlating the scores of the two raters (Creswell & Creswell, 2017). Since the present rubric scorings for MMR and argument were intentionally ordered and scaled with range 0-10, a correlation coefficient is very well suited to this purpose: it is more conservative than the kappa coefficient (i.e., an observed interrater correlation is more likely to be reduced than increased, so an acceptable value for the coefficient is stronger evidence of rater agreement; Stemler, 2004), and the correlation addresses the ordinal quality of the scoring rubric better than kappa (Banerjee et al., 1999), particularly where the scoring approximates a numerical scale (Maclure & Willett, 1987). The internal rater has 5 years of teaching experience and previously worked on scoring of argument and MMR over 3 years on related research projects. The external rater was a doctoral student trained to score students' arguments but with no prior experience with MMR. Pearson's correlation coefficients for argument and language were 0.893 and 0.907, respectively. After reaching these high coefficients, the internal rater completed scoring the remaining samples.
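A rough sketch of this reliability check in R; the object and column names are hypothetical, and this is not the authors' code:

    # Correlate internal and external ratings on a random 10% subsample.
    set.seed(2019)                          # arbitrary seed for reproducibility
    idx <- sample(seq_len(nrow(reports)),   # 'reports': one row per lab report
                  size = ceiling(0.10 * nrow(reports)))
    cor(reports$internal_score[idx], reports$external_score[idx],
        method = "pearson")                 # compare against 0.893 / 0.907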
3.5 | Data analyses
3.5.1 | Linear mixed-effects regression modeling
Linear mixed-effects regression modeling (LMER; Fitzmaurice et al., 2004; Bates, 2010) is used
to analyze the data to answer the research questions. LMER models are an extension of linear
regression models to allow including both fixed and random effects and are particularly used
when there is violation of independence in the data, such as arises from a hierarchical structure
or repeated measurements on the same subject (Bates, 2010; Winter, 2013). Adding a random
effect (random intercept, random slope, or both) for each subject resolves this violation and
allows subjects to be handled as independent by embedding a subject-specific model within the
larger regression model (Winter, 2013).
An application of LMER modeling is for longitudinal studies that examine change over time
by collecting repeated measurements of the same subject through time (Bates, 2010). Longitudi-
nal studies allow the assessment of within-subject changes in the response over time, whereas
cross-sectional studies, as single-occasion measurements of the response, only allow estima-
tion of between-individual differences in the response (Fitzmaurice et al., 2004). By employing
LMER, a random effect for the intercept and a random effect for the slope are assigned to every
individual with respect to time (Bates, 2010); doing so makes it easy to control for cohort effects
(Van Belle et al., 2004). Additionally, LMER modeling resolves any problems that can emerge
from missing data when applied to longitudinal data sets (Bates, 2010) or in situations where
some cases have more response data (e.g., in this study, with 7 physics lab reports and 10 chem-
istry lab reports).
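For concreteness, the models below assume the scores are arranged in long format, one row per collected lab report. The column names and toy values here are illustrative only, not taken from the study data:

    # One row per collected lab report (n = 490 in the study).
    d <- data.frame(
      subject  = c("S01", "S01", "S02"),  # 30 students in total
      course   = c(0, 1, 1),              # 0 = physics, 1 = chemistry
      week     = c(1, 1, 2),              # week in which the lab was completed
      argument = c(4, 4, 6),              # holistic argument score (10-point scale)
      mmr      = c(3.3, 3.9, 5.0)         # rescaled MMR score (10-point scale)
    )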
3.5.2 | Model development
In the present study, R-Studio (R Core Team, 2019) and lme4 (Bates et al., 2015) were used for
LMER modeling of longitudinal growth. Multiple LMER models were fit to: (1) compare the
two learning environments regarding argument and MMR growth over the semester, and
(2) examine whether there is parallel growth between argument and MMR within
each learning environment over the semester. Argument and MMR growth over the semester
were predicted by course (physics and chemistry), time (Week, over 10 weeks, parameterized so
that the physics and chemistry labs were matched on the week during which students com-
pleted the respective lab), and the course by time interaction (Course × Week) as fixed effects. The
interaction term was employed to see whether the argument or MMR regression lines between
courses are divergent or convergent. As random effects, there are intercepts for subjects and by-
subject random slopes for the effect of time. To summarize, the equations for the LMER models
for argument and MMR between courses were as follows:
Argument_ij = β_0 + β_1(COURSE_i) + β_2(WEEK_ij) + β_3(COURSE_i × WEEK_ij) + b_0i + b_2i(WEEK_ij) + ε_ij

MMR_ij = β_0 + β_1(COURSE_i) + β_2(WEEK_ij) + β_3(COURSE_i × WEEK_ij) + b_0i + b_2i(WEEK_ij) + ε_ij
In these equations and those that follow, the β coefficients are fixed effects, the b coefficients are random effects, ε is an error term, the i subscripts index the course (i.e., chemistry or physics), and the j subscripts index the week (i.e., from 1 through 14).
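A minimal lme4 sketch of these between-course models, using the illustrative data frame d above (our formulation of the stated model, not the authors' published code; the full data set is needed for the models to actually fit):

    library(lme4)

    # RQ1 models: fixed effects for course, week, and their interaction;
    # random intercepts and by-subject random week slopes.
    m_arg <- lmer(argument ~ course * week + (1 + week | subject),
                  data = d, REML = FALSE)
    m_mmr <- lmer(mmr ~ course * week + (1 + week | subject),
                  data = d, REML = FALSE)
    summary(m_arg)  # the course:week row corresponds to beta_3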
For the second question, argument and MMR growth within each learning environment over the semester were predicted by regressing the rubric score (on a 10-point scale for either argument or MMR) as a dependent variable on the rubric type (argument and MMR), time (Week, over 10 weeks), and the type by time interaction (Type × Week) as fixed effects. The interaction term was employed to see whether the argument and MMR regression lines within each course are divergent or convergent. As random effects, there are intercepts for subjects and by-subject random slopes for the effect of time. The within-course equations are

Physics_ij = β_0 + β_1(TYPE_i) + β_2(WEEK_ij) + β_3(TYPE_i × WEEK_ij) + b_0i + b_2i(WEEK_ij) + ε_ij

Chemistry_ij = β_0 + β_1(TYPE_i) + β_2(WEEK_ij) + β_3(TYPE_i × WEEK_ij) + b_0i + b_2i(WEEK_ij) + ε_ij
After testing linear models, quadratic models were also tested to determine if they fit the data better than the linear model alone. Quadratic models can explain, if there is growth, whether the trend of the growth is a constant increase or a leveling off. By doing so, results of the quadratic model provide insight into whether developing epistemic competencies is continuous or there are separate phases for development and utilization of argument and language competence. Quadratic models are represented as

Argument_ij = β_0 + β_1(COURSE_i) + β_2(WEEK_ij) + β_3(WEEK_ij²) + β_4(COURSE_i × WEEK_ij) + β_5(COURSE_i × WEEK_ij²) + b_0i + b_2i(WEEK_ij) + ε_ij

MMR_ij = β_0 + β_1(COURSE_i) + β_2(WEEK_ij) + β_3(WEEK_ij²) + β_4(COURSE_i × WEEK_ij) + β_5(COURSE_i × WEEK_ij²) + b_0i + b_2i(WEEK_ij) + ε_ij
to compare linear and quadratic growth across learning environments (Research Question 1), and
Physics_ij = β_0 + β_1(TYPE_i) + β_2(WEEK_ij) + β_3(WEEK_ij²) + β_4(TYPE_i × WEEK_ij) + β_5(TYPE_i × WEEK_ij²) + b_0i + b_2i(WEEK_ij) + ε_ij

Chemistry_ij = β_0 + β_1(TYPE_i) + β_2(WEEK_ij) + β_3(WEEK_ij²) + β_4(TYPE_i × WEEK_ij) + β_5(TYPE_i × WEEK_ij²) + b_0i + b_2i(WEEK_ij) + ε_ij
to compare the difference in growth by competency types (Research Question 2).
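The within-course and quadratic models can be sketched the same way, assuming lme4 is loaded as above; the reshaping step and all names are ours, with chemistry (course == 1) shown as the example:

    library(tidyr)  # for pivot_longer()

    # Stack argument and MMR into one score column with a type indicator.
    chem_long <- pivot_longer(subset(d, course == 1),
                              cols = c(argument, mmr),
                              names_to = "type", values_to = "score")

    # Within-course linear model (RQ2): do argument and MMR grow in parallel?
    m_chem <- lmer(score ~ type * week + (1 + week | subject),
                   data = chem_long, REML = FALSE)

    # Quadratic variant, then a likelihood-ratio test against the linear fit,
    # mirroring the chi-square comparisons reported in the Results.
    m_chem_q <- lmer(score ~ type * week + type * I(week^2) +
                       (1 + week | subject),
                     data = chem_long, REML = FALSE)
    anova(m_chem, m_chem_q)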
4 | RESULTS
The present study examined the growth patterns of the quality of students' argument and MMR
use in their lab reports over the course of the semester. The first research question addresses
the possible impact of the learning environments (replicative vs. generative) on the growth rates
of students' quality of argument and MMR use in their writings across the semester. For the sec-
ond research question, the argument and MMR growth patterns were compared for each course
to check whether the patterns of growth for argument and MMR are similar.
4.1 | Students' growth pattern in chemistry versus physics
4.1.1 | Argument growth pattern in chemistry versus physics
The findings from the LMER models comparing semester-long growth of argument quality
between physics and chemistry can be seen in Table 3, with Figure 2 demonstrating the impact
on argument growth visually across the semester (descriptive statistics are available as Elec-
tronic Supplementary Material). The LMER model results of argument growth showed that,
although there was no significant difference initially between the courses (intercepts for physics: β_0 = 3.679; chemistry: β_0 = 4.014), the slopes of argument growth in each course diverged significantly (course-by-week interaction coefficient: β_3 = 0.611, t(482) = 10.902, p < 0.001).
TABLE 3 Comparison of argument between physics and chemistry lab reports

Estimates of fixed effects (coefficient [SE])

Fixed effect      Linear model        Quadratic model
Intercept         3.679*** (0.368)    4.428*** (0.870)
Course            0.335 (0.386)       1.669 (0.912)
Week              0.079 (0.047)       0.341 (0.281)
Week²             -                   0.020 (0.021)
Course × Week     0.611*** (0.056)    1.506*** (0.310)
Course × Week²    -                   -0.078** (0.024)

Estimates of variance components (variance [SD])

Random effect     Linear model        Quadratic model
Intercept         0.6493 (0.806)      0.6872 (0.829)
Week              0.0006 (0.025)      0.0020 (0.044)

Note: Course: physics = 0, chemistry = 1. *p < 0.05; **p < 0.01; ***p < 0.001.
The coefficient for the course-by-week interaction indicates that the students' arguments showed a
higher slope in chemistry, with a week-to-week increase of 0.611 points more, on average.
Whereas the quality of argument in the chemistry lab reports was increasing across the semes-
ter, there was no significant change in the physics lab reports. Figure 2(a) shows this as a straight
line with positive slope for the holistic argument score in chemistry lab reports, compared to a
nearly flat line for the holistic arguments in the physics lab reports. In addition, the quadratic
model, which fit the data better than the linear model (χ²(2) = 24.532, p < 0.001), showed there was a significant, negative quadratic value for the course by week² interaction (β_5 = -0.078, t(480) = -3.203, p < 0.001). This means that, for chemistry lab reports, the growth in argument use was not constant, but rather had a large initial growth rate which leveled off slightly across the semester (see Figure 2(b)).
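As a back-of-envelope reading of the quadratic coefficients in Table 3 (our arithmetic, not a computation reported by the authors), the chemistry-specific slope implied by the course terms is 1.506 − 2 × 0.078 × week, which reaches zero near the end of the lab sequence, consistent with the leveling-off interpretation:

    # Week at which the chemistry-specific growth (1.506*week - 0.078*week^2)
    # flattens: set the derivative to zero and solve for week.
    1.506 / (2 * 0.078)  # = 9.65, near the end of the 10-lab sequence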
To help exemplify the changes in students' argument competence over the course of the semester, we review the translated examples from one student's written lab reports near the beginning and end of the term (see Electronic Supplementary Material). At the beginning of the term, in chemistry, the student's lab report in Week 1 presented a relatively weak argument that did not show a match between question, claim, and evidence, and that provided only observation notes instead of a justification of the claim using evidence. By contrast, the report in Week 9 incorporated well-developed arguments: it included testable questions, the answers to those questions were phrased as claims, and the justifications of those claims were explicitly supported using evidence, as shown in the following excerpt:
To repeat my claim, the substance that has less molecular weight diffuse faster. The gas having less molecular weight moves faster than the gas having greater molecular weight. The best evidence of this is our experiment. In our experiment, we investigated the diffusion of NH3(g) and HCl(g). NH3(g) diffused faster because the molecular weight of NH3 is 17.03, and the molecular weight of HCl is 36.46. When we looked at, we see that the molecular weight of NH3 is less, so it diffuses more by moving faster.

FIGURE 2 Linear (a) and quadratic (b) plots for argument and multimodal representation (MMR) growth in chemistry and physics
All of this was reported with a clear flow between argument components. For example, the student uses the words "claim" and "evidence" while developing the arguments. Using these words promotes the reader's awareness of argument components and helps students create better arguments. However, in physics, the student showed no improvement in argument, even though the beginning lab report received the same score as the chemistry report. This helps to demonstrate how students' written lab reports can exhibit changes in argument quality.
4.1.2 | MMR growth pattern in chemistry versus physics
The LMER models comparing semester-long growth in MMR quality between physics and chemistry yielded results similar to those for argument quality growth. Results can be seen in Table 4, and Figure 2 includes a visual representation of the data. At the beginning of the semester, there was no significant difference between courses concerning MMR quality (intercept for physics: β0 = 2.886; chemistry: β0 = 2.994). However, across the semester, the slopes diverged significantly, as seen in the course-by-week interaction coefficient (β3) of 0.477 (t(482) = 8.282, p < 0.001). The coefficient for the course-by-week interaction indicates that the students' MMR use showed a steeper slope in chemistry, with a week-to-week increase that was 0.477 points greater, on average. So, MMR quality in the chemistry lab reports showed a significantly increasing pattern across the semester, while there was no significant change in the physics lab reports (see linear slopes in Figure 2(a)). Again, a quadratic model fit the data better (χ²(2) = 10.762, p < 0.01), showing a significant, negative coefficient for the course-by-week² interaction (β5 = -0.053, t(480) = -2.089, p < 0.05). In short, for chemistry lab reports, there was not constant growth in quality of MMR use but a large growth rate that leveled off slightly across the semester (see Figure 2(b)).
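The "leveling off" can also be read directly from the quadratic coefficients: setting the first derivative of a quadratic trend to zero gives the week at which growth would flatten. A minimal sketch in R, using the chemistry argument trend from Table 3 and assuming (as the text indicates) that both week² terms are negative:

# For score = a + b*week + c*week^2, the slope b + 2*c*week reaches
# zero at week = -b / (2*c).
week_flat <- function(lin, quad) -lin / (2 * quad)

lin  <- 0.341 + 1.506    # week terms for chemistry (Table 3, quadratic model)
quad <- -0.020 - 0.078   # week^2 terms (signs assumed negative)
week_flat(lin, quad)     # approximately 9.4, i.e., late in the semester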
To help exemplify the changes in students' language use, we review the same student's translated lab reports near the beginning and end of the term (Electronic Supplementary Material). At the beginning of the term, the student's chemistry lab report in Week 1 was mostly text that incorporated only one other mode, a mathematical equation, without showing any embeddedness or cohesion of modes. By contrast, the chemistry report in Week 9 incorporated three different modes (drawing, mathematical equation, chemical equation) with high levels of embeddedness and cohesion. The student placed the nontext modes near the text where they were referred to, and started to use connecting phrases, like "In the figure above" and "as it is seen on the adjoining equations," to explicitly draw attention to nontext modes. Such embeddedness and wording called out the nontext modes and resulted in additional explanation of the nontext modes within the text, improving the quality of embeddedness, cohesion, and flow. Furthermore, in Week 1 the student's report offered the supporting material but did not go into depth on how it was relevant to the argument (the student just inserted the equation), whereas for Week 9 the student focused on how to justify her/his claim utilizing different modes and connecting them to the text with explanations (e.g., "In the figure above, it is possible to see the evidence of my claim. NH3 moves and diffuses faster than HCl because the molecular weight of NH3 is less than the molecular weight of HCl. Therefore, it diffuses more."). However, in physics, the student did not utilize any different modes in the argument itself, using text alone not only in Week 1 but also in Week 9. These examples, again, help to demonstrate how students' language competence was observed to grow over the course of the semester.
4.2 | The interdependence of epistemic competencies in chemistry/physics
The results above demonstrated that the students' chemistry lab reports showed significantly more growth over the semester than the physics lab reports in terms of both argument and MMR. The growth of the epistemic competencies of argument and MMR use showed similar patterns in both the linear and quadratic models. However, to test statistically whether there is an interdependence between growth in argument and language, it is important to consider how the growth rates of argument and MMR within each course may be related. This is done by modeling the outcome as a type (argument or MMR) within each course separately.
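A sketch of what "modeling the outcome as a type within each course" could look like in lme4, assuming the argument and MMR scores for each report are stacked into a long-format data frame (reports_long) with an indicator column type (argument = 0, MMR = 1); these names are illustrative:

library(lme4)

# Fit the type-by-week model separately within each course; a
# non-significant type:week interaction indicates that argument and
# MMR grow in parallel in that course.
chem <- subset(reports_long, course == "chemistry")
m_chem <- lmer(score ~ type * week + (1 + week | student), data = chem)

phys <- subset(reports_long, course == "physics")
m_phys <- lmer(score ~ type * week + (1 + week | student), data = phys)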
4.2.1 | Argument versus MMR growth patterns in chemistry
The LMER results for the comparison of argument and MMR in chemistry (Table 5) showed an initial significant difference between argument and MMR (intercept for argument: β0 = 4.016; MMR: β0 = 2.797) and, regardless of type (argument or MMR), a constant weekly increase (week coefficient: β2 = 0.529, t(578) = 11.786, p < 0.001). This coefficient for week indicates that students were showing a week-on-week increase of 0.529 points, on average, and that the pattern was similar for both the argument and MMR outcomes. Comparing the difference between the argument and MMR scores is neither worthwhile nor practical in this case, because the scores are not on a common metric even though they are on the same numerical scale. Our main goal here is to compare the patterns of the argument and the MMR
TABLE 4 Comparison of language-MMR between physics and chemistry lab reports

Estimates of fixed effects
Fixed effect        Linear coefficient (SE)    Quadratic coefficient (SE)
Intercept           2.886*** (0.372)           3.383*** (0.904)
Course              0.108 (0.397)              -1.251 (0.951)
Week                0.053 (0.052)              0.122 (0.294)
Week²                                          -0.013 (0.022)
Course × week       0.477*** (0.058)           1.084*** (0.324)
Course × week²                                 -0.053* (0.025)

Estimates of variance components
Random effect       Linear variance (SD)       Quadratic variance (SD)
Intercept           0.562 (0.750)              0.594 (0.770)
Week                0.010 (0.102)              0.012 (0.107)

Note: Course: physics = 0, chemistry = 1. *p < 0.05; **p < 0.01; ***p < 0.001.
Abbreviation: MMR, multimodal representation.
TABLE 5 Comparison of slopes between argument and language for chemistry and physics lab reports

Estimates of fixed effects
                     Chemistry                                   Physics
Fixed effect         Linear (SE)         Quadratic (SE)          Linear (SE)        Quadratic (SE)
Intercept            4.016*** (0.277)    2.754*** (0.372)        3.680*** (0.308)   4.372*** (0.790)
TypeMMR              -1.019*** (0.263)   -0.648 (0.435)          -0.747 (0.426)     -1.204 (1.113)
Week                 0.529*** (0.045)    1.165*** (0.133)        0.078 (0.042)      0.320 (0.259)
Week²                                    -0.058*** (0.011)                          0.019 (0.020)
TypeMMR × week       0.004 (0.042)       0.191 (0.182)           0.125* (0.060)     0.285 (0.365)
TypeMMR × week²                          0.017 (0.016)                              0.012 (0.028)

Estimates of variance components
Random effect        Variance (SD)       Variance (SD)           Variance (SD)      Variance (SD)
Intercept            1.2612 (1.123)      1.3169 (1.147)          0.1094 (0.330)     0.1112 (0.333)
Week                 0.0332 (0.182)      0.0362 (0.190)          0.0008 (0.028)     0.0008 (0.028)

Note: TypeMMR: argument = 0, language-MMR = 1. *p < 0.05; **p < 0.01; ***p < 0.001.
Abbreviation: MMR, multimodal representation.
growth, to examine whether the two growth lines are parallel. Thus, the interaction coefficient can provide more information on whether the lines are parallel. Because the type-by-week interaction coefficient was not significant, there is no difference between the slopes of the argument and MMR lines. In other words, there was a difference in the overall average value of argument and MMR scores, but their trend lines showed parallel growth (see Figure 2(a)).

A quadratic model fitted the data better than the linear model (χ²(2) = 37.069, p < 0.001), showing that, unlike in the linear model, there was no significant difference between the quality of argument and MMR initially (intercepts for argument: 2.754; MMR: 2.106). As mentioned above, comparing initial scores is not meaningful because the measures are not directly comparable. Instead, we focus on the growth pattern and see that growth continued, with a positive week coefficient (β2) of 1.165 (t(576) = 8.733, p < 0.001) and a negative week² coefficient (β3 = -0.058, t(576) = -5.075, p < 0.001). This shows that the growth rate decreased regardless of type. In other words, there was no constant growth but a large growth rate that leveled off slightly across the semester for both the argument and MMR outcomes (see Figure 2(b)). These growth rates were roughly equivalent for argument and MMR.
4.2.2 | Argument versus MMR growth patterns in physics
To compare with the chemistry lab reports, we also compared growth patterns for argument and MMR quality in the physics lab reports. The results (Table 5) showed that, although there was no significant change in the quality of argument or MMR and no initial difference between argument and MMR use, their slopes diverged slightly (type-by-week interaction coefficient: β3 = 0.125, t(386) = 2.093, p < 0.05), as seen in Figure 2. When compared to the growth in chemistry, this divergence is very small (compare plots in Figure 2), indicating no substantive difference between the slopes of argument and MMR quality and no growth for either type. Moreover, fitting a quadratic model provided no further information than the linear model for comparing growth in argument and MMR quality for the physics lab reports (see Figure 2(b)). This indicates that, regardless of learning environment, argument and MMR competencies show substantially parallel growth patterns, which supports the interdependence of these two epistemic competencies.
5 | DISCUSSION
Development and utilization of epistemic tools are encouraged for promoting deeper science learning (NRC, 2012). While argumentation is widely recognized as an essential epistemic tool, the epistemic role of language has not received the same level of attention (Prain & Hand, 2016), despite its fundamental role in science learning (Norris & Phillips, 2003) and the dependence of argumentation itself on language (Tang & Moje, 2010). Language thus warrants attention not only as a product of inquiry but also as a process of knowledge construction. The current literature on argument (Asterhan & Schwarz, 2016) and language (MMR; Disessa, 2004) competencies in using these epistemic tools suggests that these competencies would exhibit different rates of growth depending on how they were promoted and effectively practiced as part of the learning environment. In comparing two learning environments, this study adds substantively to the field by exploring the differences in growth patterns across learning environments, and the interrelationships among growth patterns in environments where students use argument (e.g., Choi, 2008) and MMR (e.g., Hand, McDermott, & Prain, 2016; McDermott, 2009; Neal, 2017) not only separately but also simultaneously (e.g., Demirbag & Gunel, 2014; Hand & Choi, 2010; Yaman, 2020). The results of this study suggest that (1) there is a markedly similar pattern of growth between argument quality and quality of language use within each respective environment, and (2) the growth in quality of argument and quality of language use, and the utilization of these epistemic resources, are more prominent in a knowledge-generation environment.
Our results support the interdependence between argument and language use, which was previously theorized (e.g., Cavagnetto, 2010; Norton-Meier, 2008; Osborne, 2002; Tang & Moje, 2010) but without sufficient evidence. The present results indicate a noticeable, parallel pattern between argument and language growth across the semester, which supports the theoretical work on the interdependent nature of these competencies. We build on work by Hand and Choi (2010), which showed a high correlation between the quality of embeddedness of representations and the quality of argument, and by Yaman (2020), which showed similar growth patterns between argument and multiple levels of representations (the use and quality of microscopic, macroscopic, symbolic, and algebraic representations) in a generative environment. We expand on these by showing that argument and language have a similar growth pattern within two respective learning environments. In the chemistry labs, we found parallel patterns of growth within this generative environment. In the physics labs, although scores on argument and language competencies were low throughout, the quality of argument and language use remained parallel across the semester in that replicative environment. This supports the notion that language and argument are interdependent in the extent to which they are used.
Yaman (2020) asserted that there are two phases for growth in argument ability: a development or formation phase and a utilization phase. That is, even if an environment is designed to be generative, these competencies take some time to develop and apply, and then their use levels off. This is corroborated in this study. The quadratic model indicates a pattern with initial, notable growth of competencies in a development phase (Figure 2(b)), then an apparent shift to a utilization phase with a steady, high level of use of those developed competencies. Our findings, guided by the quadratic analysis, are consistent with Muis and Duffy's (2013) result that, with an intervention, epistemic competencies can develop and reach a stable level in just several weeks. Since this study followed the same students moving between two different environments, it allowed us to compare the growth of competencies in the two environments. The results show that the development and utilization of those parallel resources depend upon the environments that the students are in. The generative learning environment (here, the chemistry labs) appeared to support development and utilization of epistemic competencies at a higher level than the replicative learning environment (here, the physics labs). This pattern was consistently observed across both argument quality and language (MMR) use.
Our findings suggest three critical points. First, language use may be an important lever for supporting argumentation as a knowledge-generation process. Despite an emphasis on argumentation for over three decades, argumentation is still "virtually absent" in science classrooms (Fishman et al., 2017; Osborne, 2010). The interdependence of argument and language growth (Tang & Moje, 2010) suggests that encouraging language use in a generative manner (talking, writing, and drawing to develop and clarify ideas) can benefit argumentation and, by extension, science learning in science classrooms. We also consider that changing the audience for students' writing may contribute to improved language use (del Longo & Cisotto, 2014), but this requires further exploratory research.
Second, the most critical question raised by our findings is: Why did students not utilize the competencies that they developed and utilized at a high level in the generative environment when they were in the replicative environment? If they developed and achieved high-level utilization of those tools, it means the epistemic tools became epistemic resources that can be used in any learning condition (Bailin, 2002; Muis et al., 2016). Thus, we should expect students to utilize those epistemic tools in replicative environments at a similar level as they do in the generative environment. The lack of consistent utilization of epistemic competencies in replicative environments needs to be explored. The answer to this question is beyond the scope of this article, but we can put forward tentative suggestions for future exploration. One, the demands of the learning environment may support or inhibit the use of these competencies, based on the teacher's guidance, differences in the nature or use of assessments, students' group expectations, and so on. Two, time may play a role, whether in students' willingness to commit the time needed to prepare and write a complex written report (if it is not required) or in the extended time required to become consciously aware of differences in one's language and argument use. Future research addressing these may answer questions such as: Does the structure of some environments prevent students from adopting and utilizing those developed tools? Does it take time for students to understand that they can adapt these tools to different environments?
Third, in the present study, we may consider possible explanations for differences in the use of competencies when comparing students' writing samples across the two learning environments. One possible explanation is the role of audience. Previous work has argued that changing the audience, such as to a near peer or younger student, pushes students to transition and translate between different modes of representation (Disessa, 2004; Waldrip et al., 2010) and improves conceptual understanding (Gunel, Hand, & McDermott, 2009). However, the difference between environments was not restricted to the changing audience for students' writing. Students were required to prepare concept maps and beginning questions prior to lab sessions in chemistry, activities that may provide sources for a generative writing process (Klein & Boscolo, 2016), and to engage in more dialogic interaction that can enhance argument and content understanding (Shi et al., 2019). At this point, we are unable to distinguish how these elements of language in the two environments (writing to learn, argumentative discourse, available writing resources, and writing audience) may affect students' development and use of language and argument competencies. Further research is still needed to explore these elements and their effects.
5.1 | Implications for practitioners
The study offers further support for previous recommendations to engage students in immersive, argument-based inquiry environments (Cavagnetto, 2010), which emphasize the importance of incorporating argumentation and peer-to-peer interaction to support learning. In addition to the previously established benefits of argument-driven inquiry and of using multiple forms of language to develop arguments, the present findings may also indicate a role for offering students a specific audience for their writing: rather than writing to the teacher to summarize one's own work, the student can write to a near peer, such as a subsequent student of the same instructor, or, for preservice teachers, to a hypothetical future pupil of their own. Doing so may be one way to support growth in students' competencies for developing a scientific argument and expressing the argument in a multimodal and coherent way.
5.2 | Limitations of the study
The nature of education research includes tradeoffs, and it is important to consider them when interpreting the results. In comparing the two environments, we examined a cohort of students who moved between two different disciplines (physics and chemistry) in the same semester. Although we did not focus on discipline-related issues while scoring argument and language, discipline may be a factor that affects students' lab writing. We tried to minimize this by using scoring rubrics that were not tied to any specific disciplinary content area or even to a particular argument structure. For future research, another possible research setting could be two cohorts of students enrolled in the same disciplinary area (or specific content area) but experiencing different learning environments. Such designs could help the field explore whether there are systematic differences in language use or argumentative structures and uses across disciplinary areas.

Additionally, the environments were characterized for this research using interviews with the instructors, whose descriptions may have been biased or may have omitted important aspects the instructors could not recognize or describe. Observing or recording classroom implementations could remove this limitation and improve knowledge of the classroom environments and the extent to which dialogue, argument, and MMR are carried out. Furthermore, future research could explore the possible influences on students' transfer of their competencies, using interviews with students about their development of competencies and their capacity or willingness to transfer these competencies to other courses. Finally, this study was conducted with a small number of participants. Thus, caution is warranted in generalizing the findings until studies with larger sample sizes are implemented.
ORCID
Ali Cikmaz https://orcid.org/0000-0001-7196-1085
Gavin Fulmer https://orcid.org/0000-0003-0007-1784
Fatma Yaman https://orcid.org/0000-0002-4014-3028
Brian Hand https://orcid.org/0000-0002-0574-7491
ENDNOTE
¹ Although all four aspects are incorporated into the rubric, it should be clarified that neither the chemistry nor the physics professor required the use of nonverbal modes or specified writing to any particular audience for the lab reports (see Table 1).
REFERENCES
Acar, O., & Patton, B. R. (2012). Argumentation and formal reasoning skills in an argumentation-based guided inquiry course. Procedia - Social and Behavioral Sciences, 46, 4756–4760.
Archila, P. A., Molina, J., & Truscott de Mejía, A. M. (2018). Using formative assessment to promote argumentation in a university bilingual science course. International Journal of Science Education, 40(13), 1669–1695.
Ardasheva, Y., Norton-Meier, L., & Hand, B. (2015). Negotiation, embeddedness, and non-threatening learning environments as themes of science and language convergence for English language learners. Studies in Science Education, 51(2), 201–249.
Asterhan, C. S., & Schwarz, B. B. (2016). Argumentation for learning: Well-trodden paths and unexplored territories. Educational Psychologist, 51(2), 164–187.
Bailin, S. (2002). Critical thinking and science education. Science & Education, 11(4), 361–375.
Banerjee, M., Capozzoli, M., McSweeney, L., & Sinha, D. (1999). Beyond kappa: A review of interrater agreement measures. Canadian Journal of Statistics, 27(1), 3–23.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Bates, D. M. (2010). lme4: Mixed-effects modeling with R.
Berland, L. K., & Reiser, B. J. (2009). Making sense of argumentation and explanation. Science Education, 93(1), 26–55.
Birol, G., Han, A., Welsh, A., & Fox, J. (2013). Impact of a first-year seminar in science on student writing and argumentation. Journal of College Science Teaching, 43(1), 82–91.
Bowman, L. L., Jr., & Govett, A. L. (2015). Becoming the change: A critical evaluation of the changing face of life science, as reflected in the NGSS. Science Educator, 24(1), 51.
Brewer, E. W., & Kuhn, J. (2019). Causal-comparative design. In N. J. Salkind (Ed.), Encyclopedia of research design (pp. 125–131). SAGE Publications.
Brownell, S. E., Kloser, M. J., Fukami, T., & Shavelson, R. (2012). Undergraduate biology lab courses: Comparing the impact of traditionally based "cookbook" and authentic research-based courses on student lab experiences. Journal of College Science Teaching, 41(4), 36–45.
Cavagnetto, A. R. (2010). Argument to foster scientific literacy: A review of argument interventions in K-12 science contexts. Review of Educational Research, 80(3), 336–371.
Chen, Y. C., Park, S., & Hand, B. (2016). Examining the use of talk and writing for students' development of scientific conceptual knowledge through constructing and critiquing arguments. Cognition and Instruction, 34(2), 100–147.
Choi, A. (2008). A study of student written argument using the Science Writing Heuristic approach in inquiry-based freshman general chemistry laboratory classes (PhD dissertation). University of Iowa.
Choi, A., Hand, B., & Greenbowe, T. (2013). Students' written arguments in general chemistry laboratory investigations. Research in Science Education, 43(5), 1763–1783.
Choi, A., Hand, B., & Norton-Meier, L. (2014). Grade 5 students' online argumentation about their in-class inquiry investigations. Research in Science Education, 44(2), 267–287.
Coirier, P., Andriessen, J., & Chanquoy, L. (1999). From planning to translating: The specificity of argumentative writing. In Foundations of argumentative text processing (pp. 1–28). Amsterdam University Press.
Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
Crowell, A., & Kuhn, D. (2014). Developing dialogic argumentation skills: A three-year intervention study. Journal of Cognition and Development, 15, 363–381.
Curto, K., & Bayer, T. (2005). Writing & speaking to learn biology: An intersection of critical thinking and communication skills. Bioscene: Journal of College Biology Teaching, 31(4), 11–19.
del Longo, S., & Cisotto, L. (2014). Writing to argue: Writing as a tool for oral and written argumentation. In Writing as a learning activity (pp. 15–43). Brill.
Demirbag, M., & Gunel, M. (2014). Integrating argument-based science inquiry with modal representations: Impact on science achievement, argumentation, and writing skills. Educational Sciences: Theory and Practice, 14(1), 386–391.
Disessa, A. A. (2004). Metarepresentation: Native competence and targets for instruction. Cognition and Instruction, 22(3), 293–331.
Duschl, R. (2008). Science education in three-part harmony: Balancing conceptual, epistemic, and social learning goals. Review of Research in Education, 32(1), 268–291.
Duschl, R., & Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Studies in Science Education, 38, 39–72.
Duschl, R. A., Schweingruber, H. A., & Shouse, A. W. (Eds.). (2007). Taking science to school: Learning and teaching science in grades K-8 (Vol. 49, pp. 166163). National Academies Press.
Engle, R. A., & Conant, F. R. (2002). Guiding principles for fostering productive disciplinary engagement: Explaining an emergent argument in a community of learners classroom. Cognition and Instruction, 20(4), 399–483.
Erduran, S., Simon, S., & Osborne, J. (2004). TAPping into argumentation: Developments in the application of Toulmin's argument pattern for studying science discourse. Science Education, 88(6), 915–933.
Fan, Y. C., Wang, T. H., & Wang, K. H. (2020). Studying the effectiveness of an online argumentation model for improving undergraduate students' argumentation ability. Journal of Computer Assisted Learning, 36(4), 526–539.
Fishman, E. J., Borko, H., Osborne, J., Gomez, F., Rafanelli, S., Reigh, E., Tseng, A., Million, S., & Berson, E. (2017). A practice-based professional development program to support scientific argumentation from evidence in the elementary classroom. Journal of Science Teacher Education, 28(3), 222–249.
Fitzmaurice, G. M., Laird, N. M., & Ware, J. H. (2004). Applied longitudinal analysis. Wiley-Interscience.
Ford, M. (2008). Disciplinary authority and accountability in scientific practice and learning. Science Education, 92(3), 404–423.
Ford, M. J. (2012). A dialogic account of sense-making in scientific argumentation and reasoning. Cognition and Instruction, 30(3), 207–245.
Ford, M. J., & Forman, E. A. (2006). Chapter 1: Redefining disciplinary learning in classroom contexts. Review of Research in Education, 30(1), 1–32.
Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8410–8415.
Fulmer, G. W. (2018). Causal comparative research. In B. B. Frey (Ed.), The SAGE encyclopedia of educational research, measurement, and evaluation (pp. 252–254). Sage Publications.
Gabelica, C., & Fiore, S. M. (2013, September). What can training researchers gain from examination of methods for active-learning (PBL, TBL, and SBL). In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 57, pp. 462–466). Sage Publications.
Galbraith, D. (2009). Writing about what we know: Generating ideas in writing. In The SAGE handbook of writing development (pp. 48–64). SAGE Publications.
Gee, J. P. (2004). Language in the science classroom: Academic social languages as the heart of school-based literacy. In E. W. Saul (Ed.), Crossing borders in literacy and science instruction: Perspectives on theory and practice (pp. 10–32). International Reading Association.
Grooms, J., Sampson, V., & Golden, B. (2014). Comparing the effectiveness of verification and inquiry laboratories in supporting undergraduate science students in constructing arguments around socioscientific issues. International Journal of Science Education, 36(9), 1412–1433.
Grzyb, K., Snyder, W., & Field, K. G. (2018). Learning to write like a scientist: A writing-intensive course for microbiology/health science students. Journal of Microbiology and Biology Education, 19(1), 18.
Gunel, M., Hand, B., & McDermott, M. A. (2009). Writing for different audiences: Effects on high-school students' conceptual understanding of biology. Learning and Instruction, 19(4), 354–367.
Haack, S. (2004). Epistemology legalized: Or, truth, justice, and the American way. The American Journal of Jurisprudence, 49(1), 43–61.
Haas, C., & Flower, L. (1988). Rhetorical reading strategies and the recovery of meaning. College Composition and Communication, 39, 30–47.
Halliday, M. A. K., & Martin, J. R. (2003). Writing science: Literacy and discursive power. Routledge.
Hand, B., & Choi, A. (2010). Examining the impact of student use of multiple modal representations in constructing arguments in organic chemistry laboratory classes. Research in Science Education, 40(1), 29–44.
Hand, B., McDermott, M. A., & Prain, V. (Eds.). (2016). Using multimodal representations to support learning in the science classroom. Springer International Publishing.
Hofer, B. K. (2016). Epistemic cognition as a psychological construct: Advancements and challenges. In Handbook of epistemic cognition (pp. 31–50). Routledge.
Jiménez-Aleixandre, M. P., & Erduran, S. (2007). Argumentation in science education: An overview. In Argumentation in science education (pp. 3–27). Springer.
Keys, C. W., Hand, B., Prain, V., & Collins, S. (1999). Using the science writing heuristic as a tool for learning from laboratory investigations in secondary science. Journal of Research in Science Teaching, 36(10), 1065–1084.
Klein, P., Boscolo, P., Kirkpatrick, L., & Gelati, C. (Eds.). (2014). Writing as a learning activity. Brill.
Klein, P. D. (1999). Reopening inquiry into cognitive processes in writing-to-learn. Educational Psychology Review, 11(3), 203–270.
Klein, P. D. (2006). The challenges of scientific literacy: From the viewpoint of second-generation cognitive science. International Journal of Science Education, 28(2–3), 143–178.
Klein, P. D., & Boscolo, P. (2016). Trends in research on writing as a learning activity. Journal of Writing Research, 7(3), 311–350.
Klein, P. D., Boscolo, P., Gelati, C., & Kirkpatrick, L. C. (2014). New directions in writing as a learning activity. In Writing as a learning activity (pp. 1–14). Brill.
Klein, U. (2001). Introduction. In U. Klein (Ed.), Tools and modes of representation in the laboratory sciences. Kluwer Academic.
Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5–22.
Kuhn, D. (1993). Science as argument: Implications for teaching and learning scientific thinking. Science Education, 77, 319–337.
Kuhn, D., Hemberger, L., & Khait, V. (2016). Argue with me: Argument as a path to developing students' thinking and writing (2nd ed.). Routledge.
Lemke, J. (1998). Multiplying meaning: Visual and verbal semiotics in scientific text. In J. R. Martin & R. Veel (Eds.), Reading science: Critical and functional perspectives on discourses of science (pp. 87–113). Routledge.
Lemke, J. L. (1990). Talking science: Language, learning, and values. Ablex Publishing.
Lin, S. S. (2014). Science and non-science undergraduate students' critical thinking and argumentation performance in reading a science news report. International Journal of Science and Mathematics Education, 12(5), 1023–1046.
Maclure, M., & Willett, W. C. (1987). Misinterpretation and misuse of the kappa statistic. American Journal of Epidemiology, 126(2), 161–169.
Magnifico, A. M. (2010). Writing for whom? Cognition, motivation, and a writer's audience. Educational Psychologist, 45(3), 167–184.
Marrero, D. (2016). An epistemological theory of argumentation for adversarial legal proceedings. Informal Logic, 36(3), 288–308.
Mayer, R. E. (2009). Multimedia learning. Cambridge University Press.
McDermott, M. A. (2009). The impact of embedding multiple modes of representation on student construction of chemistry knowledge (PhD dissertation). University of Iowa.
McNeill, K. L., Lizotte, D. J., Krajcik, J., & Marx, R. W. (2006). Supporting students' construction of scientific explanations by fading scaffolds in instructional materials. The Journal of the Learning Sciences, 15(2), 153–191.
Moon, A., Stanford, C., Cole, R., & Towns, M. (2016). The nature of students' chemical reasoning employed in scientific argumentation in physical chemistry. Chemistry Education Research and Practice, 17(2), 353–364.
Muis, K. R., & Duffy, M. C. (2013). Epistemic climate and epistemic change: Instruction designed to change students' beliefs and learning strategies and improve achievement. Journal of Educational Psychology, 105(1), 213–225.
Muis, K. R., Trevors, G., & Chevrier, M. (2016). Epistemic climate for epistemic change. In J. A. Greene, W. A. Sandoval, & I. Bråten (Eds.), Handbook of epistemic cognition (pp. 331–359). Routledge.
National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. National Academies Press.
Neal, T. (2017). The impact of argument-based learning environments on early learners' multimodal representations. The University of Iowa.
NGSS Lead States. (2013). Next generation science standards: For states, by states. The National Academies Press.
Norris, S. P., & Phillips, L. M. (2003). How literacy in its fundamental sense is central to scientific literacy. Science Education, 87(2), 224–240.
Norton-Meier, L. (2008). Creating border convergence between science and language: A case for the Science Writing Heuristic. In Science inquiry, argument and language: The case for the Science Writing Heuristic (SWH) (pp. 13–24). Brill Sense.
Nussbaum, E. M., & Asterhan, C. S. (2016). The psychology of far transfer from classroom argumentation. In The psychology of argument: Cognitive approaches to argumentation and persuasion (pp. 407–423). College Publications.
Osborne, J. (2002). Science without literacy: A ship without a sail? Cambridge Journal of Education, 32(2), 203–218.
Osborne, J. (2010). Arguing to learn in science: The role of collaborative, critical discourse. Science, 328(5977), 463–466.
Osborne, J. F., Henderson, J. B., MacPherson, A., Szu, E., Wild, A., & Yao, S. Y. (2016). The development and validation of a learning progression for argumentation in science. Journal of Research in Science Teaching, 53(6), 821–846.
Ozdem, Y., Ertepinar, H., Cakiroglu, J., & Erduran, S. (2013). The nature of pre-service science teachers' argumentation in inquiry-oriented laboratory context. International Journal of Science Education, 35(15), 2559–2586.
Padilla, M., & Cooper, M. (2012). From the framework to the next generation science standards: What will it mean for STEM faculty? Journal of College Science Teaching, 41(3), 6.
Parreira, P., & Yao, E. (2018). Experimental design laboratories in introductory physics courses: Enhancing cognitive tasks and deep conceptual learning. Physics Education, 53(5), 055012.
Prain, V., & Hand, B. (2016). Coming to know more through and from writing. Educational Researcher, 45(7), 430–434.
Prain, V., & Waldrip, B. (2008). A study of teachers' perspectives about using multimodal representations of concepts to enhance science learning. Canadian Journal of Science, Mathematics and Technology Education, 8(1), 5–24.
Prichard, J. R. (2005). Writing to learn: An evaluation of the "calibrated peer review" program in two neuroscience courses. Journal of Undergraduate Neuroscience Education, 4(1), A34–A39.
Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231.
R Core Team. (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/
Rapanta, C., Garcia-Mila, M., & Gilabert, S. (2013). What is meant by argumentative competence? An integrative review of methods of analysis and assessment in education. Review of Educational Research, 83(4), 483–520.
Reynolds, J. A., Thaiss, C., Katkin, W., & Thompson, R. J., Jr. (2012). Writing-to-learn in undergraduate science education: A community-based, conceptually driven approach. CBE-Life Sciences Education, 11(1), 17–25.
Rivard, L. P., & Straw, S. B. (2000). The effect of talk and writing on learning science: An exploratory study. Science Education, 84(5), 566–593.
Sadler, T. D. (2006). Promoting discourse and argumentation in science teacher education. Journal of Science Teacher Education, 17(4), 323–346.
Sampson, V., & Clark, D. (2009). The impact of collaboration on the outcomes of scientific argumentation. Science Education, 93(3), 448–484.
Sampson, V., Enderle, P., & Grooms, J. (2013). Argumentation in science education. The Science Teacher, 80(5), 30.
Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18–36.
Sandoval, W. A., & Millwood, K. A. (2007). What can argumentation tell us about epistemology? In S. Erduran & M. P. Jimenez-Aleixandre (Eds.), Argumentation in science education: Perspectives from classroom-based research (pp. 71–88). Springer.
Shi, Y., Matos, F., & Kuhn, D. (2019). Dialog as a bridge to argumentative writing. Journal of Writing Research, 11(1), 107–129.
Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research, and Evaluation, 9(1), 4.
Tang, K. S. (2015). Reconceptualising science education practices from new literacies research. Science Education International, 26(3), 307–324.
Tang, K. S., & Moje, E. B. (2010). Relating multimodal representations to the literacies of science. Research in Science Education, 40(1), 81–85.
Taylor, C. E., & Drury, H. (2005). The effect of student prior experience, attitudes, and approaches on performance in an undergraduate science writing program. In G. Rijlaarsdam, H. van den Bergh, & M. Couzijn (Eds.), Effective learning and teaching of writing. Studies in writing (Vol. 14). Springer. https://doi.org/10.1007/978-1-4020-2739-0_38
Tynjälä, P., Mason, L., & Lonka, K. (2001). Writing as a learning tool: An introduction. In Writing as a learning tool (pp. 7–22). Springer.
van Belle, G., Fisher, L. D., Heagerty, P. J., & Lumley, T. (2004). Biostatistics: A methodology for the health sciences (Vol. 519). John Wiley & Sons.
Verkade, H., & Lim, S. H. (2016). Undergraduate science students' attitudes toward and approaches to scientific reading and writing. Journal of College Science Teaching, 45(4), 83–89.
von Aufschnaiter, C., Erduran, S., Osborne, J., & Simon, S. (2008). Arguing to learn and learning to argue: Case studies of how students' argumentation relates to their scientific knowledge. Journal of Research in Science Teaching, 45(1), 101–131.
Vygotsky, L. S. (1962). Thought and language. MIT Press.
Waldrip, B., Prain, V., & Carolan, J. (2010). Using multi-modal representations to improve learning in junior secondary science. Research in Science Education, 40(1), 65–80.
Walker, J. P., Sampson, V., Anderson, B., & Zimmerman, C. O. (2012). Argument-driven inquiry in undergraduate chemistry labs: The impact on students' conceptual understanding, argument skills, and attitudes toward science. Journal of College Science Teaching, 41(4), 82–89.
Walton, D. (1996). Argument structure: A pragmatic theory. University of Toronto Press.
Walton, D. (2016). Argument evaluation and evidence (Vol. 23). Springer.
Wellington, J., & Osborne, J. (2001). Language and literacy in science education. McGraw-Hill Education.
Winter, B. (2013). Linear models and linear mixed effects models in R with linguistic applications. arXiv:1308.5499. Retrieved from http://arxiv.org/pdf/1308.5499.pdf
Yaman, F. (2020). Pre-service science teachers' development and use of multiple levels of representation and written arguments in general chemistry laboratory courses. Research in Science Education, 50, 2331–2362.
Yore, L., Bisanz, G. L., & Hand, B. M. (2003). Examining the literacy component of science literacy: 25 years of language arts and science research. International Journal of Science Education, 25(6), 689–725.
Yore, L. D., & Treagust, D. F. (2006). Current realities and future possibilities: Language and science literacy - Empowering research and informing instruction. International Journal of Science Education, 28(2–3), 291–314.
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section
at the end of this article.
How to cite this article: Cikmaz, A., Fulmer, G., Yaman, F., & Hand, B. (2021). Examining the interdependence in the growth of students' language and argument competencies in replicative and generative learning environments. Journal of Research in Science Teaching, 1–32. https://doi.org/10.1002/tea.21715
... Augmentative writing (AW) is a literacy tool used to facilitate learning and develop students' critical thinking and scientific literacy (Cho & Jonassen, 2002;Cikmaz et al., 2021;Ferretti & Graham, 2019;Nussbaum et al., 2019). It provides the opportunity to identify problems, find potential solutions, construct arguments, and produce new knowledge (Finkenstaedt-Quinn et al., 2017). ...
... This hypothesis is based on previous studies that found students that engage in AW learning tasks will perform better in a test of conceptual understanding of chemistry than students that do not Bangert-Drowns et al., 2004;Chen et al., 2020;Gunel et al., 2007;Kingir et al., 2012;Yaman, 2018). It was considered necessary to test the learning effects in our context, because Bangert-Drowns et al. (2004) and Cikmaz et al., (2021) pointed out that the effect of writing tasks on learning might depend on scaffolding, population, and context. For example, we included online simulation and writing prompts that were not included in other AW studies for learning chemistry (e.g., Moon et al., 2019;Sampson et al., 2013;Walker et al., 2016;Yaman, 2021). ...
Article
Full-text available
Non-science majors often lack motivation to take science courses required for their graduation, because these courses are usually taught in a lecture format and are disconnected from their everyday life and needs related to future careers. This twophase action research, utilizing argumentative writing (AW) supported with online simulation, was conducted over three academic years in a college chemistry course designed for non-science majors. Phase 1, a quasi-experimental design (n = 134), examined the treatment effects of AW projects and determined the components of AW that contributed to student gains in conceptual performance. The results showed that students in the AW group scored significantly higher in conceptual performance than the control group. Five AW components predicted student gains in conceptual performance: accuracy of claim, relationship between claim and question, relationship between claim and evidence, use of multiple examples, and use of appropriate writing style. Phase 2, a single group design (n = 118), explored the inter-relationships between pre-/post-course knowledge, pre-/post-course motivation, and students’ performance on AW projects. The results showed that students’ motivation to learn chemistry at the beginning of the course is a significant predictor of their conceptual performance. Pathway analysis found that the performance of AW projects in low motivation students was affected by extrinsic motivation (grades, career). Their performance in AW projects did not affect their post-course knowledge. In contrast, the performance in AW projects by high motivation students was affected by intrinsic motivation and self-determination. They cumulatively built knowledge through AW projects that eventually affected post-course knowledge.
... We view producing high-quality arguments that consider multiple positions (Nussbaum & Schraw, 2007) through a cycle of construction and critique (Cikmaz et al., 2021;Kuhn, 1993;Yore et al., 2003) belongs to the create level in the revised Bloom's Taxonomy, while making judgments about arguments is at the evaluate level (Anderson & Krathwohl, 2001;Zhao et al., 2021). The E-SA in this study resonates with the checking sublevel within the evaluate level in Bloom's Taxonomy, as students need to evaluate each SA element based on the provided criteria by implicitly judging the connections between certain elements, however, they do not need to explicate and explain their judgment or to evaluate competing arguments on a controversial topic. ...
Article
Full-text available
Argumentation plays a significant role in science as a scientific practice in which knowledge is constructed, evaluated, and modified. Scientific argumentation (SA) is thus a promising activity for students to pursue in order to think and act like scientists and to enhance their understanding of science. Considering the impact that assessments usually have on teaching and their role in understanding students' ability, this study aims to explore the assessment of scientific argumentation competence (SAC) and Chinese high school students' performance and perceptions of SA. This study, therefore, proposes a three‐component framework for understanding SAC, including the competencies to identify, evaluate, and generate the SA elements outlined in Toulmin's argument model. Based on this framework, we hypothesize three levels of SAC and develop a test including 22 items to see whether these proposed SAC elements are valid and to what extent they represent students' levels of SAC. An iterative procedure is adopted to develop and validate the assessment instrument, validity evidence is collected from the analysis of teachers' interviews, students' think‐aloud and follow‐up interviews, and from confirmatory factor analysis and Rasch analysis of students' scores on the test. The results suggest that the measure, which includes three SAC components has acceptable reliability and validity, and the three‐level progression of SAC is aligned with and further expands previous learning progressions of SA. We also find that most students in our sample are at level 1 of SAC and they in general hold positive attitudes toward learning through argumentation, but their willingness in engaging with SA tends to be affected by their perception of the intrinsic and practical value of SA and their personality. Providing them with explanations of SA elements does not improve their test performance.
... Academic accommodations to support Eng+ students have been proposed but need further study, including providing students with more time to complete assessments, incorporating scaffolds and visual aids on assessment items, and low-stakes opportunities to develop language and argumentation skills (Abedi et al., 2004(Abedi et al., , 2020Cikmaz, Fulmer, Yaman, & Hand, 2021;Francis, Rivera, Leseaux, Kieffer, & Rivera, 2006;E. N. Lee & Orgill, 2021;Ryoo & Bedell, 2019;Siegel, 2007). ...
Preprint
Full-text available
Making decisions and constructing arguments with scientific evidence and reasoning are essential skills for all members of society, especially in a world facing complex socioscientific issues (climate change, global pandemics, etc.). Argumentation is a complex linguistic practice, but little is known about how students from diverse language backgrounds engage in argumentation. The goal of this study was to identify how students’ English language proficiency/history was associated with the reasoning demonstrated in their written arguments. We found that students with lower English proficiency and less English history produced fewer causal responses compared to students with higher English language proficiency and history. Follow-up interviews with fifteen participants revealed that students’ comfort communicating in English on assessments depended on a combination of general and academic language experiences. Findings suggest a need to identify what barriers students from diverse language backgrounds encounter during argumentation to ensure students from all language backgrounds have equitable opportunities to demonstrate their abilities.
Chapter
DESCRIPTION In recent years, argumentation, or the justification of knowledge claims with evidence and reasons, has emerged as a significant educational goal, advocated in international curricula and investigated through school-based research. Research on argumentation in science education has made connections to the cognitive, linguistic, social and epistemic aspects of argumentation. The particular context of physics as the domain underpinning argumentation has been relatively under-researched. The purpose of this paper is to outline how argumentation can be situated within physics education to serve different types of learning goals. Following a review of trends in the literature on physics education research in recent years, we focus on a set of themes to illustrate the nature of issues raised by research on argumentation in physics education. In particular, we trace themes related to subject knowledge, scientific methods and socio-scientific contexts, and subsequently turn to the role of visual tools in supporting the teaching and learning of argumentation in physics. The chapter thus raises questions about how physics education can be enhanced through argumentation. We identify a number of areas for future research and development in argumentation research in physics education.
Article
Making decisions, reasoning, and constructing arguments with scientific evidence are essential skills for all members of society, especially in a world facing complex socioscientific issues such as climate change and pandemics. Argumentation is a complex linguistic practice but little is known about how students from diverse language backgrounds engage in argumentation. The goal of this study was to identify how students’ English language proficiency and history was associated with the reasoning demonstrated in their written arguments. We found that students with lower English proficiency and less English history produced fewer causal responses compared to students with higher English language proficiency and history. Follow-up interviews with fifteen participants revealed that students’ comfort communicating in English on assessments was affected by a combination of general and academic language experiences. Findings suggest a need to identify the barriers encountered by students from diverse language backgrounds during argumentation to ensure students from all language backgrounds have equitable supports and opportunities to demonstrate their scientific abilities.
Article
This study investigated the development and utilization of argument and representation resources in pre-service science teachers’ (PSTs’) written and oral arguments over two semesters in an argument-based inquiry environment of General Chemistry Laboratory I and II courses. The study employed a form of mixed methods research that is known as ‘data-transformation variant of convergent design’ which allows quantification of qualitative data. Data sources included PSTs’ 180 laboratory reports and 20 video recordings. A Friedman test and a Spearman-Brown correlation were conducted for statistical analysis. The results revealed that the quality of argument and representation were intertwined in both written and oral argumentation. While the PSTs’ quality of written argument and representation significantly increased from the first-time phase to the following time phases, in oral argumentation the quality remained stable after the second time phase. There was also a positive correlation amongst the PSTs’ quality of written and oral argument and representation. The PSTs’ representational competency increased over time and they connected more representations in written arguments. The results suggest that students should be provided with opportunities to engage in sustained talking, writing, and reading practices both publicly and privately in order to critique and construct arguments, develop representational competency, and integrate ideas.
Article
Full-text available
This research develops a web‐based model, entitled the “intuitive claim, peer‐assessment, discussion, and elaborate claim argumentation training” (IPadE) model, and embeds with a Web‐based Interactive Argumentation System to enhance undergraduate students' socioscientific argumentation abilities. This research adopts a quasi‐experimental research design; the sample comprised 131 undergraduate students from two classes (69 in the experimental group and 62 in the control group). The socioscientific issue discussed were related to global health. This study collected and analysed quantitative and qualitative data, including the pretest and posttest of students' knowledge test scores and argumentation abilities questionnaire. The results generally confirmed the effectiveness of the IPadE model. First, in a comparison of the content knowledge and argumentation skills, the experimental group have statistically significantly improved than the control group. Second, regarding the number of reasoning modes proposed, the experimental group could propose multiple reasoning modes and reasoning levels on rebuttals increased after training. Lay Description What is already known about this topic • In the traditional classroom setting, it is difficult to obtain a fair expression of opinions because dominant students lead the debate. With internet and communication technology (ICT) rapid development, that has been emerging as a useful supplement to traditional methods. Web‐based interactive learning environments have recently become a popular and effective tool in the education field worldwide. • Argumentation skills are the critical reasoning abilities that a modern citizen should possess. Regrettably, the researchers have consistently demonstrated that adolescents' and young adults' insufficient competencies regarding argumentation, constructing weaknesses of two‐sided arguments, and difficulty justifying evidence in support of their claims (Brem & Rips, 2000; Driver, Newton, & Osborne, 2000; Kuhn, 1991, 2003; Naylor, Keogh, & Downing, 2007; Voss & Means, 1991). Students' inadequate knowledge of science and judgment capacity can cripple their competency to make satisfactory decisions and be responsible citizens in society (Fowler, Zeidler, & Sadler, 2009). • The integration of argumentation activities into science‐related curriculum is currently upheld as a core factor for a successful science program. In addition, HIV/AIDS is a prominent global health problem in Taiwan. Research based on global health‐related content knowledge and argumentation is rarely. • Most of the literature has focused on the learning of elementary and middle school students (Fowler et al., 2009; Jimènez‐Aleixandre, Rodriguez, & Duschl, 2000; Keselman, Kaufman, Kramer, & Patel, 2007; Lin & Mintzes, 2010; Wu & Tsai, 2007; Zohar & Nemet, 2002) and has rarely investigated pre‐service and in‐service teachers and undergraduate students. (Osborne, Erduran, & Simon, 2004; Chang & Chiu, 2008). What this paper adds • Research based on global health‐related content knowledge and argumentation is rarely. We develop the IPadE argumentation model with Web‐based Interactive Argumentation System to enhance argumentation abilities of undergraduate students toward globe health problem. • We develop three research instruments, including HIV/AIDS knowledge test (HKT), Open‐ended SSI argumentation ability questionnaire (SAAQ), and Argumentation ability checklist (AAC). 
• We added a metacognitive support to the interface design: the "context viewer" function of WIAS. On the left is the user's initial argumentation context, in the middle is the Lakatos argumentation model, and on the right is the peer-assessment score. Students can view their own initial argumentation context and reflect upon it after interacting with their peers and group discussion, arriving at a sounder argument in the argumentation stage (Figure 5).
Implications for practice
• Development of HIV/AIDS knowledge: The results confirm that experimental group students in the semester program using the IPadE model with WIAS performed better and significantly improved their HIV/AIDS knowledge.
• Development of argumentation skills: The results confirm that undergraduate students using the IPadE model with WIAS performed significantly better than those in the control group.
• Changes in students' use of argument modes: The majority of the reasons proposed by participants were science-and-technology-oriented arguments, followed by law-oriented and social-oriented arguments in the PH. Additionally, the major argument mode students proposed in the NH and in rebuttals was law-oriented; however, the number of students in the experimental group who could propose multiple argument modes in the PH, NH, and rebuttals increased after training.
• Design of online discussion boards: The students used the discussions to assess peer learning by looking at peers' viewpoints and ideas. Compared with a face-to-face classroom, WIAS provided students with opportunities to record their thoughts anytime and anywhere through an asynchronous discussion board and then engage in peer learning. This study suggests that a well-designed web-based argumentation system is a critical part of the higher education experience when the goal is to improve students' argumentation ability.
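The abstract above reports significant pretest-posttest gains for the experimental group relative to the control group but does not name the statistical model. One conventional choice for such quasi-experimental control-group designs is an ANCOVA on posttest scores with the pretest as covariate; the sketch below illustrates that choice with entirely hypothetical data (the group sizes are taken from the abstract, but the scores, effect size, and variable names are assumptions).

```python
# Minimal ANCOVA-style sketch (hypothetical data): posttest knowledge
# scores regressed on group membership, controlling for pretest scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_exp, n_ctl = 69, 62  # group sizes reported in the abstract

df = pd.DataFrame({
    "group": ["experimental"] * n_exp + ["control"] * n_ctl,
    "pre": rng.normal(60, 10, size=n_exp + n_ctl),
})
# Hypothetical +8-point treatment effect for the experimental group.
effect = np.where(df["group"] == "experimental", 8.0, 0.0)
df["post"] = df["pre"] + effect + rng.normal(0, 5, size=len(df))

model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.params)  # C(group)[T.experimental] estimates the group effect
```

Conditioning on the pretest in this way adjusts for baseline differences between intact classes, which is the usual concern when groups are not randomly assigned.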
Article
Full-text available
The purpose of this study was to examine pre-service science teachers' (PSTs') development of multiple levels of representation and written arguments using an immersive approach to argument, the science writing heuristic, in General Chemistry Laboratory I and II courses. Fifty PSTs participated in the study, 20 experiments were performed, and 976 samples were collected over two semesters. A case study design was used. The data were evaluated in three ways: the first was to examine the total number of representations used and the connectedness of these representations, the second was to use analytical and holistic frameworks to evaluate the PSTs' written arguments, and the third was to use a Wilcoxon signed-rank test to compare the scores gathered from the representations and the argumentation. The results showed that the PSTs' holistic argument and multiple levels of representation increased over time, that they showed parallel patterns, and that the increasing quality of argument and use of representations were intertwined. The results also indicated that the PSTs predominantly used the symbolic level of representation and used it as a mediator between the macroscopic, microscopic, and algebraic levels. The argument components of evidence and reflection appear to be critical areas in which the PSTs used more connected levels of representation, and the PSTs were selective in using representations. The results of this study suggest that students should be provided with opportunities to use, generate, interpret, and reflect on these representational levels through writing and negotiation activities as part of being involved in an argument-based laboratory environment.
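The Wilcoxon signed-rank test mentioned above compares paired scores from the same individuals without assuming normality. A minimal sketch follows, assuming paired per-PST scores at two time points; the data and variable names are hypothetical, not the study's.

```python
# Minimal sketch (hypothetical data): Wilcoxon signed-rank test on
# paired scores from two semesters for the same 50 PSTs.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
semester1 = rng.integers(2, 8, size=50).astype(float)
semester2 = semester1 + rng.integers(0, 3, size=50)  # scores tend to rise

stat, p = wilcoxon(semester1, semester2)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```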
Article
Full-text available
Learning the tools and conventions of expert communication in the sciences provides multiple benefits to bioscience students, yet often these skills are not formally taught. To address this need, we designed a writing-intensive microbiology course on emerging infectious diseases to provide upper-division students with science-specific writing skills along with disciplinary course content. The course followed the guidelines of our university’s Writing Intensive Curriculum (WIC) program. Students wrote a press release, a case study, a controversy/position paper, and a grant prospectus, and revised drafts after feedback. To assess the course, in 2015 and 2016 we administered pre-post surveys and collected writing samples for analysis. Students reported on their experience, training, skills, and knowledge before taking the course. They then rated the extent to which the assignments, lectures, in-class activities, and writing activities contributed to their attainment of the learning outcomes of the course. Students entering the class were inexperienced in tools of science writing and the specific genres covered by the class. Their confidence levels rose in both skills and knowledge. Feedback from instructors was cited as most helpful in the majority of the areas where students reported the most gains. The survey provided evidence that discipline-specific knowledge had been acquired through writing activities. Teaching science writing by allowing the students to write “fiction” (e.g., a case report about a fictional patient) was effective in maintaining a high level of interest, both in learning the conventions of the genre and in seeking out detailed information about emerging infectious diseases. Both the course structure and the specific assignments would be useful at other institutions to teach science writing.
Article
We describe a dialogic approach to developing argumentive writing whose key components are deep engagement with the topic and extended discourse with peers that provides the activity with both an audience and a purpose. In a dialogic intervention extended over an entire school year, pairs of sixth graders engaged in electronic discourse with peers on a sequence of topics, as well as wrote individual final essays on each topic. In their essays, they showed achievements relative to a non-participating group in coordinating evidence with claims, in particular in drawing on evidence to weaken claims as well as to support them. They also showed some meta-level enhancement in understanding of the role of evidence in argument. A recall task ruled out the possibility that this enhancement was due to superior recall of the specific evidence available to them, rather than broader meta-level understanding. A case is made for fostering development in argumentive writing both dialogically and in the context of topics that students engage with deeply.
Article
Formative assessment, bilingualism, and argumentation when combined can enrich bilingual scientific literacy. However, argumentation receives little attention in the practice of bilingual science education. This article describes the effect of a formative assessment-based pedagogical strategy in promoting university students’ argumentation. It examines the written and oral arguments produced by 54 undergraduates (28 females and 26 males, 16–21 years old) in Colombia during a university bilingual (Spanish-English) science course. The data used in this analysis was derived from students’ written responses, and audio and video recordings. The first goal of this study was to determine how this teaching strategy could help students increase the use of English as a means of communication in argumentation in science. The second goal was to establish the potential of the strategy to engage students in argumentative classroom interactions as an essential part of formative assessment. The findings show that the strategy provided participants with opportunities to write their argumentation in Spanish, in English and in a hybrid version using code-switching. Educational implications for higher education are discussed.
Article
Being able to contextualise and solve complex problems is a highly valued skill in STEM graduates - a skill which we strive to nurture in our students. Since its introduction into undergraduate teaching, laboratory teaching has been used to consolidate students' conceptual understanding, develop their practical skills, and inculcate an evidence-based problem-solving approach. Much work has been done to achieve these goals with varying degrees of success. Here we present an alternative to the regular introductory-level physics laboratory experiments which enhances student learning by focusing on problem solving rather than simply following detailed instructions. Working in small groups, students were able to achieve the aims of the experiment through self- and peer-instruction. Similar experiments can be easily and cost-effectively implemented in any standard secondary school and undergraduate teaching laboratory. These can be adjusted to target the development of a wide range of specific skill sets as well as deepen students' understanding of different physics principles and concepts. Our approach will enable the teaching laboratory to truly fulfil the function for which it was originally conceived.
Book
constitutive of reference in laboratory sciences as cultural sign systems and their manipulation and superposition, collectively shared classifications and associated conceptual frameworks, and various forms of collective action and social institutions. This raises the question of how much modes of representation, and the specific types of sign systems mobilized to construct them, contribute to reference. Semioticians have argued that sign systems are not merely passive media for expressing preconceived ideas but actively contribute to meaning. Sign systems are culturally loaded with meaning stemming from previous practical applications and social traditions of application. In new local contexts of application they not only transfer stabilized meaning but can also be used as active resources to add new significance and modify previous meaning. This view is supported by several analyses presented in this volume. Sign systems can be implemented like tools that are manipulated and superposed with other types of signs to forge new representations. The mode of representation, made possible by applying and manipulating specific types of representational tools, such as diagrammatic rather than mathematical representations, or Berzelian formulas rather than verbal language, contributes to meaning and forges fine-grained differentiations between scientists' concepts. Taken together, the essays contained in this volume give us a multifaceted picture of the broad variety of modes of representation in nineteenth-century and twentieth-century laboratory sciences, of the way scientists juxtaposed and integrated various representations, and of their pragmatic use as tools in scientific and industrial practice.