Conference Paper

An examination of the relationship between argumentation quality and students’ growth trajectories.

Abstract

In order to sustain cognitive developmental growth over time, it is necessary to ensure that students build robustly connected knowledge networks. The quality of our knowledge networks is reflected in explanation, application, and transfer. In explanations, high quality argumentation can be taken as evidence of robustly connected knowledge networks. If this is true, the rate of cognitive developmental growth should be predicted by the overall quality of students’ argumentation. In this paper, we show that there are important (even staggering) differences in the average developmental trajectories of students in different kinds of schools, and show that depth of understanding, as captured in a set of argumentation scales, explains about 35% of the variance in developmental trajectories.
Presented at the NERA conference in Trumbull, CT, October 18, 2017
Theo L. Dawson, Ph.D., Lectica, Inc.
Aiden M. A. Thornton, ABD, University of Western Australia & Lectica, Inc.
An examination of the relationship
between argumentation quality and
students’ growth trajectories
©2017, Lectica, Inc. All rights reserved.
Road map
Brief rationale for this work
The cognitive developmental learning model that guides this
research and approach to assessment
Approaches to measuring cognitive developmental growth
and argumentation quality
Analysis 1: Cognitive developmental growth trajectories and
their relation to SES and instructional practices
Analysis 2: The relation between quality of argumentation and
cognitive developmental growth trajectories
Rationale in brief
In order to sustain cognitive developmental growth over time,
it is necessary to ensure that students build robustly
connected knowledge networks.[7,10]
The quality of our knowledge networks is reflected in
explanation, application, and transfer.[14]
In explanations, high quality argumentation can be taken as
evidence of robustly connected knowledge networks.[7,10,14]
If this is true, the rate of cognitive developmental growth
should be predicted by the overall quality of students’
argumentation.
Learning model
Hierarchical Complexity [10,11]
In the cognitive-developmental tradition, learning is viewed as the construction of increasingly sophisticated understandings—of
the physical and social world, and of ourselves. Each new level integrates and builds upon the knowledge of the preceding level,
resulting in levels of increasing hierarchical complexity.
Hierarchical Complexity [10,11]
There are 13 levels (0–12). Levels 9 and 10 are most common between grades 4 and 12.
This model leads cognitive developmentalists to reject the notion of learning as the simple acquisition of correct information,
whether this information is in the form of facts, definitions, procedures, vocabulary, or rules.
Instead, they think of knowledge as a network that is constructed over time, one in which new knowledge builds upon existing
knowledge, to create increasingly sophisticated models of the world. Correctness, on its own, is not adequate evidence of this
kind of growth.
Each level of understanding can be thought of as the foundation for the next level of understanding. If a given level is poorly
constructed, the quality of the next level is endangered. The kind of learning that builds foundations that can effectively support
successive levels of development is referred to here as “robust knowledge” or “deep understanding.”
Deep learning (a.k.a., robust learning)
The best way to build robust knowledge is to ensure that
students…
-are learning material that can readily be integrated into their
existing knowledge structures, and
-have ample opportunity to network this knowledge robustly.[14]
This is best accomplished by ensuring that educators
support virtuous cycles of learning (VCoLs)…[5,7]
-set appropriate learning goals,
-supply high quality information,
-provide opportunities for real-world application, and
-ensure that students have an opportunity to reflect upon the
outcomes of these applications.
The best way to build robust knowledge is to ensure that students are learning material that can readily be integrated into their existing knowledge structures, and that they have ample opportunity to network this knowledge robustly.
This is best accomplished by ensuring that we support virtuous cycles of learning (VCoLs), in which we set appropriate learning goals for individual students, supply high quality information, provide opportunities for real-world application, and ensure that students have an opportunity to reflect upon the outcomes of each application and set the next learning goal.
Real-world applications are important because they ensure that new knowledge is integrated robustly enough to be usable outside of the classroom—robust enough to be sticky and to be built upon over time.
Evidence of robust learning
Evidence of robust learning cannot reliably be found in correct
answers.
The first place we find evidence of understanding is in students’ ability
to apply their knowledge effectively in real-world contexts.
The second place we find evidence of understanding is in transfer—
the extent of a student’s ability to make the connections required to
use new knowledge in real-world contexts.
The third place we find evidence of understanding is in students’ explanations. More complex, nuanced, clear, and coherent explanations signal deeper understanding.
-hierarchical complexity (Lectical Level)
-quality of argumentation
Evidence of robust learning cannot reliably be found in correct answers. Correct performance supplies neither necessary nor sufficient evidence of robust learning. An answer can be wrong due to an error that has little to do with how deeply an individual understands something, and answers can be correct even when students have very little understanding of a construct.
The first place we find evidence of understanding is in students’ ability to apply their knowledge. For example, consider the application of geometry to the design of a structure, or the application of the scientific method to the design of an experiment.
The second place we find evidence of understanding is in transfer—the extent of a student’s ability to make the connections required to use new knowledge in real-world contexts.
The third place we find evidence of understanding is in students’ explanations. More complex, nuanced, and clear explanations signal deeper understanding.
At Lectica, we look for evidence of understanding by measuring two aspects of explanatory arguments. First, we determine their hierarchical complexity, or Lectical Level, by examining the complexity of their structure and the level of abstraction of their elements. Then we examine the quality of their argumentation by examining different aspects of coherence.
Argumentation & development
Level of argumentation is the degree to which learners are
able to coherently explain or justify their judgments.
The coherence of arguments/explanations is generally taken
as an indication of quality of understanding.[7,14]
The practice of argumentation has been shown to support
learning and development.[7,13,14]
Consequently, the quality of students’ argumentation skills is
likely to predict the slope of their developmental trajectories.
Assessment & metrics
Lectical Assessments
Lectical Assessments are used to measure two dimensions:
-hierarchical complexity (Lectical Level)
-argumentation
They are open response assessments that feature ill-structured
questions, and probes designed to elicit rich explanations.
We build low-stakes, embeddable, and formative standardized assessments called Lectical Assessments or DiscoTests.
Most Lectical Assessments measure two dimensions:
-hierarchical complexity (complexity and level of abstraction)
-argumentation (mechanics and coherence)
To measure these dimensions, we have developed open response assessments that feature ill-structured questions and probes designed to elicit rich explanations.
LRJA (form 1)
The LRJA examines students’ reasoning about inquiry and evidence,
the quality of information and evidence, and the nature of
knowledge. It includes questions like the following:
1. Some scientists think that violent TV shows are bad for children. Others
think some violent TV shows are okay. Which group of scientists do you
think is right? Please explain.
2. How would you decide which group of scientists was right? Please
explain.
3. If you were one of the scientists who thought violent TV was bad for
children, what could you do to convince the other group of scientists that
you were right? Please explain.
4. How is it possible that the two groups of scientists have such different
ideas? Please explain.
5. Is it possible to know for sure if violent television is bad for children? Please
explain.
[Illustration: two groups of scientists with opposing claims: "Violent TV is bad for children" and "Some violent TV is okay for children."]
For example: our LRJA (Lectical Reflective Judgment Assessment).
This assessment elicits students’ reasoning about inquiry and evidence, the quality of information and evidence, and the
nature of knowledge. It includes open ended questions like:
Some scientists think that violent TV shows are bad for children. Others think some violent TV shows are okay. Which
group of scientists do you think is right? Please explain.
None of these questions has a “correct” answer. Students’ judgments are not scored. Instead, we score the hierarchical
complexity of their justifications and the quality of their argumentation.
Question 1
Some scientists think that violent TV shows are bad
for children. Others think some violent TV shows are
okay. Which group of scientists do you think is
right? Why?
“Both groups are correct because the kids that get
too much violent TV might be more violent (by
influence) than those who don’t get as much violent
TV. Being more violent is rather dangerous for the
community, and the person him/herself.”
Students provide written responses in which they explain their answers.
Phase: In order to make a decision, you should…
8d: think (think hard) or ask parents (teachers, etc.).
9a: think about what you have learned from your parents (or in school), or think about what has happened to you (or someone you know).
9b: think about your own opinion (or what you like, think, believe, have seen) or the opinions of others.
9c: use your thinking skills or trust your thoughts, find reasons, think about why people think or feel the way they do, or think about the facts.
9d: think about what makes sense, possible outcomes (consequences), or what you already know from life experience.
10a: analyze the evidence you have collected, understand the reasoning behind a claim, use common sense or logic, or understand (get) both perspectives.
10b: make an educated guess or a rough estimate; think about your own values; put yourself in the other person's shoes, compare perspectives or evidence, or look for relationships.
10c: look for conflicting evidence, consider similarities and differences or the pros and cons of each position, or avoid personal bias.
10d: try to remain impartial or objective, consider multiple factors (as causes), weigh the results, or look at the big picture.
Lectical Level is determined with the Lectical Assessment System, low-inference rubrics, or CLAS, our new electronic scoring system.
Psychometrics—Lectical Assessment System (LAS)
The psychometric properties of the LAS have been reported
elsewhere.[4]
The LAS has been shown to measure the same underlying dimension
as several other longitudinally validated developmental scoring
systems.
The LAS measures a dimension that is not captured to a great extent
by conventional assessments. Correlations between Lectical Level and
the scores of conventional assessments range from .25 to .55.
Rasch person separation reliability (reproducibility of relative measure
location) ranges from .91 to .97.*
Human raters are required to maintain an agreement rate of 85%
within .20 of a level (continuously monitored).
*Analysis conducted with Winsteps.
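To make the agreement criterion concrete, here is a minimal sketch of how an agreement rate within a tolerance can be computed. The function name and the ratings are our own illustration, not Lectica's monitoring code:

```python
import numpy as np

def agreement_within_tolerance(scores_a, scores_b, tol=0.20):
    """Fraction of double-scored performances on which two raters
    agree within `tol` of a Lectical level."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    return float(np.mean(np.abs(a - b) <= tol))

# Hypothetical ratings for five performances, in Lectical levels.
rater_1 = [9.50, 10.10, 9.85, 10.40, 9.20]
rater_2 = [9.55, 10.35, 9.80, 10.45, 9.10]
print(agreement_within_tolerance(rater_1, rater_2))  # 0.8, below the 85% bar
```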
Psychometrics—LRJA rubrics
The psychometric properties* of the LRJA rubrics have
been reported elsewhere.[4]
In brief: on a sample of 3,754 rubric-scored LRJAs…
-Rasch person separation reliability (reproducibility of relative
measure location) = .91 (estimated Alpha = .94, assuming
complete data)
-Item separation reliability = .98
Human raters are required to maintain an agreement rate
of 85% within .20 of a level (continuously monitored).
*Analysis conducted with Winsteps.
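For reference, person separation reliability can be estimated from person measures and their standard errors as the ratio of true (error-corrected) variance to observed variance. The sketch below uses this standard Rasch formulation with invented numbers; the reported values come from Winsteps, not from this code:

```python
import numpy as np

def person_separation_reliability(measures, std_errors):
    """Rasch person separation reliability: (observed variance - mean
    error variance) / observed variance, i.e., the reproducibility of
    relative measure location."""
    measures = np.asarray(measures, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    observed_var = np.var(measures, ddof=1)
    error_var = float(np.mean(std_errors ** 2))
    return (observed_var - error_var) / observed_var

# Invented person estimates and standard errors, for illustration only.
measures = np.array([42.0, 55.0, 61.0, 48.0, 70.0, 38.0, 66.0])
std_errors = np.array([1.6, 1.5, 1.7, 1.6, 1.8, 1.6, 1.5])
print(round(person_separation_reliability(measures, std_errors), 2))
```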
Scoring—CLAS
Another presentation at this conference is about CLAS, its
calibration, and its performance.
In short, CLAS scores currently agree with LRJA rubric
scores from 85% to 91% of the time within .20 of a level.
(Well above the 85% minimum set for human inter-rater
agreement.)
CLAS is required to maintain an agreement rate of 85%
within .20 of a level (continuously monitored).
The range of scores in a typical classroom is generally about
1.5 levels.
Argumentation
To evaluate argumentation quality, trained raters use a standard set of rating scales.
I will be presenting more information about the argumentation scale a bit later.
BTW: The vocabulary rubric is used to rate the extent to which students use vocabulary meaningfully. Many students—especially
those attending inner city schools—use words in a way that suggests they do not comprehend their meaning.
Psychometrics—Argumentation scales
The psychometric properties* of the Argumentation scales have
been reported elsewhere.[6]
Conducted on 7,647 LRJAs coded with the Argumentation
scales*…
-Person estimates range = 28–80 (Thurstone thresholds)
-Rasch person separation reliability (reproducibility of relative measure
location) = .90
-Standard error of person estimates: M = 1.6
-Item separation reliability = .99
-Distribution of item and person estimates was satisfactory
- No disordered thresholds for items
*Analysis conducted with Winsteps.
Analysis 1: Growth trajectories, SES, and pedagogy
Before examining the relation between argumentation and Lectical growth, it’s worth taking a look at the kinds of differences we
have found in growth trajectories in different types of schools.
Sample & method
All LRJA assessments in our database for which we have
information about…
-average SES of students and
-the degree to which the instructional focus is on correctness (low
VCoLing) vs. deep understanding (high, diverse VCoLing)
N = 15,177
Convenience sample!
Only two schools in the sample pre-screened students.
Assessments were scored with the human version of the Lectical Assessment System (n = 7,187), the LRJA rubrics (n = 4,304), or CLAS (n = 3,257), our computerized scoring system.
[Figure: Mean Lectical score by grade (4-12), with 67% CI error bars, for five school types: low SES public (low VCoL), mid SES public (some VCoL), mid SES private (some VCoL), mid SES private (high VCoL), and high SES (high VCoL).]
Students in different types of schools develop at different rates, as can be seen in this figure, which shows average scores by grade for scored K-12 LRJAs in our database.
Five types of schools are represented here, from low SES students in inner city schools in which there is very little VCoLing to high SES, high IQ students in a private international school with a high level of VCoLing.
[Figure: the same chart, with the high SES international school students highlighted.]
Two groups of students will be excluded from further examination. The first is students in a high SES school who performed at levels not usually seen before the college years. These students, unlike those in the rest of our sample, were tested prior to admission to the school, so were likely high achievers from the outset.
[Figure: the same chart, with inner city students accepted into college programs highlighted (n = 1,370; error bars: 95% CI).]
The second is the performances of low SES students who had been accepted into pre-college or college programs. These students cannot be considered a representative sample of inner city students.
[Figure: the same chart, restricted to the four remaining school types, with linear regression lines added.]
I’ve added linear regression lines here so the trajectories are clearer. Although we know that growth is not linear, the linear
regression was a good enough fit to these data for my purposes here.
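A minimal sketch of this kind of per-group linear fit, on synthetic data (the group labels, slopes, and noise level below are invented, not the study's values):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-in for the LRJA data set: one row per scored assessment.
grade = rng.integers(4, 13, size=n)  # grades 4-12
school = rng.choice(["low_ses_public", "mid_ses_public"], size=n)
slope = np.where(school == "mid_ses_public", 12.0, 7.0)  # invented growth rates
score = 950 + slope * (grade - 4) + rng.normal(0, 15, size=n)
df = pd.DataFrame({"score": score, "grade": grade, "school": school})

# One linear fit per school type, mirroring the regression lines in the figures.
for name, group in df.groupby("school"):
    fit = smf.ols("score ~ grade", data=group).fit()
    print(f"{name}: slope = {fit.params['grade']:.1f}, R^2 = {fit.rsquared:.2f}")
```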
[Figure: regression line for inner city low SES schools (n = 8,715; R2 = .227, F = 2552.50, p < .01).]
[Figure: regression line for mid SES public schools (n = 1,716; R2 = .593, F = 2499.17, p < .01; error bars: 95% CI).]
[Figure: regression line for mid SES private schools (n = 219; R2 = .308, F = 97.25, p < .01; error bars: 95% CI).]
[Figure: regression line for the best performing school (n = 147; R2 = .361, F = 82.50, p < .01; error bars: 95% CI).]
[Figure: projected (extrapolated) regression lines for the four school types.]
Here I’ve taken some license and projected the linear regression lines, so we can make some comparisons among the four types of schools.
[Figure: projected regression lines; the projected average grade 12 score of students in inner city schools equals the average grade 8 score of students in mid SES public, progressive schools.]
The projected average grade 12 score of students in inner city schools equals the average grade 8 score of students in mid SES public, progressive schools. If we assume that these students continue developing at the same rate, by grade 12 they are projected to be about 3.5 years behind students in our mid SES public schools. In fact, some of our models show very little growth after grade ten (for the average inner city student), so the problem may be worse than it looks here.
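The grade-equivalent comparison can be made explicit with a little algebra: project the slower group's score to a given grade, then solve for the grade at which the faster group's line reaches that score. The coefficients below are hypothetical, not fitted values from these data:

```python
def grade_equivalent_lag(grade, low_fit, high_fit):
    """How many grades behind the low group is at `grade`, where each fit
    is (intercept, slope) for score = intercept + slope * grade."""
    a_low, b_low = low_fit
    a_high, b_high = high_fit
    projected_low = a_low + b_low * grade                 # low group's projected score
    equivalent_grade = (projected_low - a_high) / b_high  # grade at which high group hits it
    return grade - equivalent_grade

# Hypothetical intercepts and slopes (Lectical points per grade).
print(grade_equivalent_lag(12, low_fit=(960, 5.0), high_fit=(955, 8.0)))  # ~3.9
```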
[Figure: projected regression lines; the projected grade 12 score of students in private progressive schools equals the average grade 10 score of students in our best performing school.]
The huge disadvantage faced by low SES children is the cloud. There are two silver linings.
First, it is possible to change the slope of development. The differences between middle class schools serving similar students show this. We believe students in the best school are doing better because of this school’s curriculum, which is the most VCoL-rich of any school we’ve worked with. This VCoLing helps students learn robustly—building solid foundations for future growth.
Second, it’s not all about money. The best performing school in our database is a participatory community school that serves middle class families. Students are not pre-selected for high ability, and its tuition is low relative to most private schools.
Can we attribute some of these differences in slope to depth of understanding, as represented in the clarity and coherence of students’ explanations?
Analysis 2: The relation between argumentation & development
Sample
276 students
-105 students in grade 4 at time 1
-171 students in grade 6 at time 1
All students were attending inner city schools on the Eastern
Seaboard and took the assessment as part of a large
longitudinal study.
Students were tested in the fall of 2011, the winter of
2012-2013, and the spring of 2014 (2.5 years between time 1
and time 3).
Procedures
Assessment was untimed, and was usually completed within
one school period.
There were no stakes attached to taking the assessments
(no credit, no grade).
Assessments were scored for Lectical Level with LRJA
rubrics, then rated for quality of argumentation by the same
rater.
Scorer assignment was randomized.
Inter-rater agreement was monitored with blind random
second scoring. Differences greater than .20 of a level were
reconciled through discussion.
Lectical growth in 2 cohorts
[Figure: Lectical growth in cohort 1 (n = 105) and cohort 2 (n = 171).]
Lectical growth was reasonably linear across the 2 cohorts.
[Figure: the same growth chart, with the slope for the larger inner city sample (n = 8,715) shown for comparison.]
Growth in this sample was somewhat accelerated relative to the average growth in inner city schools in our larger sample. This may be because this sample of students was somewhat atypical: many students did not qualify for inclusion because they did not complete all three assessments, produced unscorable performances on one or more test occasions, or simply “blew off” the assessment on one or more occasions.
Data quality
Data distributions met the assumptions for regression
modeling.
Argumentation and Lectical growth
Correlational and regression analyses were run to examine the possible impacts of demographic variables, including ethnicity, sex, and SES, on Lectical growth (a minimal sketch of this kind of screen appears below).
-Only identifying as Latino correlated with Lectical growth (r = .137, p < .05).
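The sketch below screens demographic predictors against growth with simple correlations; the variable names and data are invented for illustration, not the study's:

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 276

# Invented stand-ins for the demographic screen.
df = pd.DataFrame({
    "growth": rng.normal(0.5, 0.24, n),  # Lectical growth, time 1 to time 3
    "latino": rng.integers(0, 2, n),     # 1 = identifies as Latino
    "female": rng.integers(0, 2, n),
    "ses": rng.normal(0.0, 1.0, n),
})

for var in ["latino", "female", "ses"]:
    r, p = pearsonr(df[var], df["growth"])
    print(f"{var}: r = {r:.3f}, p = {p:.3f}")
```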
Lectical growth is explained by…
Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .432a   0.187      0.184               0.21388
2       .468b   0.219      0.213               0.20999
3       .591c   0.350      0.342               0.19200

a. Predictors: (Constant), Time 1 Lectical score
b. Predictors: (Constant), Time 1 Lectical score, Time 1 argumentation score
c. Predictors: (Constant), Time 1 Lectical score, Time 1 argumentation score, Arg time 3 - Arg time 1
After controlling for Lectical Level at time 1, both time 1 argumentation and growth in argumentation predicted Lectical growth.
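A sketch of this kind of hierarchical regression in statsmodels: the data below are synthetic and the generating coefficients are invented, so only the structure (three nested models, entered in the same order as the table) mirrors the analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 276

# Synthetic stand-ins for the study variables (names are ours).
lectical_t1 = rng.normal(10.0, 0.4, n)   # time 1 Lectical score
arg_t1 = rng.normal(50.0, 8.0, n)        # time 1 argumentation score
arg_gain = rng.normal(5.0, 4.0, n)       # argumentation: time 3 - time 1
growth = (4.7 - 0.45 * lectical_t1 + 0.012 * arg_t1 + 0.011 * arg_gain
          + rng.normal(0, 0.20, n))      # invented generating model
df = pd.DataFrame({"growth": growth, "lectical_t1": lectical_t1,
                   "arg_t1": arg_t1, "arg_gain": arg_gain})

# Three nested models, predictors entered hierarchically.
for formula in ("growth ~ lectical_t1",
                "growth ~ lectical_t1 + arg_t1",
                "growth ~ lectical_t1 + arg_t1 + arg_gain"):
    fit = smf.ols(formula, data=df).fit()
    print(f"{formula}: R2 = {fit.rsquared:.3f}, adj R2 = {fit.rsquared_adj:.3f}")
```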
Coefficients

Model   Predictor                     B        Std. Error   Beta     t         p
1       (Constant)                    3.776    0.437                 8.635     .000
        Time 1 Lectical score        -0.364    0.046       -0.432   -7.936     .000
2       (Constant)                    4.651    0.502                 9.257     .000
        Time 1 Lectical score        -0.480    0.057       -0.570   -8.456     .000
        Time 1 argumentation score    0.039    0.012        0.226    3.354     .001
3       (Constant)                    4.692    0.459                10.214     .000
        Time 1 Lectical score        -0.533    0.052       -0.633  -10.176     .000
        Time 1 argumentation score    0.109    0.014        0.625    7.628     .000
        Arg time 3 - Arg time 1       0.099    0.013        0.513    7.387     .000
Discussion
Recap of rationale
In order to sustain cognitive developmental growth over time,
it is necessary to ensure that students build robustly
connected knowledge networks.
The quality of our knowledge networks is reflected in
explanation, application, and transfer.
In explanations, high quality argumentation can be taken as
evidence of robustly connected knowledge networks.
If this is true, the rate of cognitive developmental growth
should be predicted by the overall quality of students’
argumentation and change in argumentation over time.
Conclusions
Developmental trajectories of students in different schools vary dramatically.
-Some of the difference is explained by socioeconomic factors.
-But, as shown here, students in schools that employ more practices that foster deep understanding develop faster than students in schools that employ fewer.
Both initial argumentation quality and growth in argumentation quality predicted the rate of developmental growth.
Teaching practices that support deep understanding, as manifested in argumentation quality, can produce steeper learning trajectories.
Limitations & future research
Sampling:
-Our samples were convenience samples.
-The data employed in the argumentation analysis were part of a larger, well-designed study, but we have not completed argumentation ratings on the entire available sample.
-Our examination of Lectical growth would have been stronger if we had a
larger, more diverse sample.
Methods:
-More sophisticated growth modeling procedures may have yielded deeper
insights.
Next:
-More argumentation rating with more diverse samples
-Further comparisons of the properties of Lectical Scores with other test scores.
References
1. Afflerbach, P. (2005). High stakes testing and reading assessment. National Reading
Conference Policy Brief. Retrieved September 30, from http://journals.sagepub.com/doi/pdf/10.1207/s15548430jlr3702_2.
2. Amrein, A. L., & Berliner, D. C. (2003). The effects of high-stakes testing on student
motivation and learning. Educational Leadership, 60(5), 32-38.
3. Baggio, H. C., Segura, B., Junque, C., de Reus, M. A., Sala-Llonch, R., & Van den Heuvel,
M. P. (2015). Rich club organization and cognitive performance in healthy older participants.
Journal of Cognitive Neuroscience, 27, 1801-1810.
4. Dawson, T. (2014). A confirmatory Rasch analysis of the RFJ001. International Objective
Measurement Workshop.
5. Dawson, T. L., & Seneviratna, G. (2015, July). New evidence that well-integrated neural
networks catalyze development. Proceedings from ITC, Sonoma, CA.
6. Dawson, T. L., & Stein, Z. (2008). Cycles of research and application in education: Learning
pathways for energy concepts. Mind, Brain, & Education, 2(2), 90-103.
7. Dawson, T. L., & Stein, Z. (2011, June). Virtuous cycles of learning: redesigning testing
during the digital revolution. Proceedings from The International School on Mind, Brain, and
Education, Erice (Sicily), Italy.
References, cont.
8. Dawson, T. L., & Stein, Z. (2012, October 17). Measuring the growth of reflective judgment
with cognitive developmental assessments. Proceedings from Annual meeting of the
Northeastern Educational Research Association, Rocky Hill, CT.
9. Firestone, W. A., Frances, L., & Schorr, R. Y. (Eds.). (2004). The ambiguity of teaching to the
test: standards, assessment, and educational reform. Mahwah, NJ: Erlbaum Associates.
10. Fischer, K. W. (1980). A theory of cognitive development: The control and construction of
hierarchies of skills. Psychological Review, 87, 477-531.
11. Fischer, K. W., & Bidell, T. R. (2006). Dynamic development of action, thought, and emotion.
In W. Damon & R. M. Lerner (Eds.), Handbook of child psychology: Theoretical models of
human development (6 ed., Vol. 1, pp. 313-399). New York: Wiley.
12. Hursh, D. (2008). High-stakes testing and the decline of teaching and learning. New York:
Rowman & Littlefield.
13. Newton, P., Driver, R., & Osborne, J. (1999). The place of argumentation in the pedagogy of
school science. International Journal of Science Education, 21, 553-576.
14. Nickerson, R. S. (1985). Understanding understanding. American Journal of Education, 93,
201-239.
Acknowledgements
The work reported in this paper/presentation would not have been possible
without the contributions of the entire Catalyzing Comprehension for Discussion
and Debate research team, the collaborating districts and school personnel, and
the willingness of teachers and students to participate in assessments, classroom
observations and recordings, and other data collection procedures. The research
reported here was supported by the Institute of Education Sciences, U.S.
Department of Education, through Grant R305F100026 to the Strategic
Educational Research Partnership Institute as part of the Reading for
Understanding Research Initiative. The opinions expressed are those of the
authors and do not represent views of the Institute or the U.S. Department of
Education.