Statistical Modeling to Promote Students' Aggregate View of Data
in the Context of Informal Statistical Inference
Keren Aridor and Dani Ben-Zvi
LINKS I-CORE, University of Haifa, Israel
Abstract
Helping students develop an aggregate view of data is a key challenge in statistics education.
It has been suggested that modeling pedagogy can address this challenge (Lehrer & Schauble, 2004).
In this paper we present a case study, part of a UK-Israel research project, that aims to examine
how students' reasoning about modeling of a real phenomenon can support the emergence of an
aggregate view of data, in the context of making informal statistical inferences. We focus on the
emergent reasoning of two fifth-graders (aged 10) involved in statistical data analysis and modeling
activities using TinkerPlots2. We describe the students' articulations of an aggregate view of data as
they: 1) explore a small sample; 2) plan and construct a model that represents the investigated
phenomenon and make predictions about 'some wider universe'; and 3) generate random samples from this
model to examine its representativeness. This paper aims to contribute to the study of models that
young students can understand and use to develop their aggregate view of data.
Keywords: Exploratory data analysis, informal statistical inference, aggregate view of data,
statistical model, statistical modeling.
Introduction
One of the core aspects of statistical reasoning is handling data from an aggregate point of
view (Hancock, Kaput, & Goldsmith, 1992), namely, viewing data as an entity with emergent
properties, such as shape, center and spread (Konold, Higgins, Russell, & Khalil, 2014).
Young students tend to see data as individual cases, and measurement values as inseparable
from the object or person measured. Students who cannot develop a notion of an organizing
structure with which they can see the whole instead of just the elements miss the essential
point of doing statistics, which is predicting properties of aggregates (Bakker & Hoffmann,
2005). Developing students' aggregate view of data is therefore a key challenge in statistics
education (Bakker, Biehler, & Konold, 2004). It has been suggested that placing statistical
modeling at the heart of statistics learning can address this challenge by supporting students
in searching for patterns in data and accounting for variation in these patterns (Pfannkuch &
Wild, 2004). In this paper, we closely study this assertion. This case study is part of a UK-Israel
research project* (Ainley, Aridor, Ben-Zvi, Manor, & Pratt, 2013) that demonstrates how
fifth-graders' modeling of an authentic phenomenon using TinkerPlots2 (TP2, Konold &
Miller, 2011) can support the emergence of an aggregate view of data. We focus on the ways
they shifted between local and aggregate views of data while reasoning about models and
their context when making informal statistical inferences.
Literature review
Informal Statistical Inference (ISI) is a relatively new theoretical construct and pedagogical
approach aiming at deepening learners' understanding of statistical inference in relation to
other key statistical ideas (Garfield & Ben-Zvi, 2008). ISI is based on making generalizations
that go beyond the given data, expressing uncertainty in probabilistic language, and using data
as evidence for those generalizations. The main goal of teaching ISI is to deepen students'
understanding of the purpose of data and of what can be gained from data and their
interpretation (Makar & Rubin, 2009). The reasoning process leading to making ISIs is
Informal Inferential Reasoning (IIR, Ben-Zvi, Gil, & Apel, 2007; Makar, Bakker, & Ben-Zvi,
2011). IIR is the cognitive activity involved in formulating generalizations (e.g., conclusions,
predictions) from random samples of data using various statistical tools, while considering
and articulating evidence and uncertainty. IIR involves reasoning with several key statistical
ideas, such as sample size, sampling variability, controlling for bias, uncertainty, and
properties of data aggregates (Rubin, Hammerman, & Konold, 2006).
Aggregate view of data. Statistical thinking develops from a partial or local view of data
toward a global view of data (Konold et al., 2014), and toward the ability to flexibly shift between
these views (Ben-Zvi & Arcavi, 2001a). Such reasoning is called an aggregate view of data, or
aggregate reasoning. Konold et al. (2014) defined a hierarchy of three other perspectives on data
that students take and that are encapsulated by aggregate reasoning: 1) data as pointers to the
context of the source of the data, without reference to the data themselves (there is no fundamental
unit); data cases serve as reminders of the larger event from which they came (e.g., referring to
events that happened during data collection and are not necessarily visible in the data); 2) data as
case values that provide information about the value of some attribute for each individual case;
individual cases are perceived as the fundamental unit of analysis and the focus is on their
characteristics (e.g., focusing on extreme values); 3) data as classifiers, which give information
about the frequency of cases with a particular attribute value; such cases are perceived as a unit
with similar properties (e.g., the mode of the data). The way data are viewed depends on the purpose
of the data collection, the context of the problem, and the questions that are asked, and it
influences the way the data are handled, e.g., the research questions, data representations,
interpretations of data, and inferences (Konold et al., 2014).
When viewing data as an aggregate, a data set is considered as an entity, or as a group,
with emergent properties that are different from the properties of the individual cases
themselves (Friel, 2007). The notion of distribution as an organizing conceptual structure is
supported by aggregate reasoning (Bakker & Gravemeijer, 2004), which allows concentration on
the distribution's emergent features, such as its general shape, how spread out the cases are,
and where the cases tend to be concentrated within the distribution (Konold et al., 2014).
With categorical data one might describe frequencies using percentages or quantitative
descriptors (e.g., "most", "majority"); with numeric data, one might relate to properties
such as measures of center (i.e., mean, median), of shape (e.g., symmetry, skewness), of
density (actual or relative frequency, majority, quartiles), and of spread (e.g., outliers, range,
interquartile range, standard deviation) (Friel, 2007; Cobb, 1999). Two important aggregate
properties are: 1) distinctions between signal and noise; and 2) recognition and diagnosis of
various types and sources of variability (e.g., variability due to measurement error, natural
variability, sampling variability) (Rubin et al., 2006). A pedagogical approach that places
modeling at the center of data exploration can support the emergence of an aggregate view of
data (Lehrer & Schauble, 2004).
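To make these emergent properties concrete, the following minimal sketch (ours, not part of the paper's materials, and written in Python rather than TP2) computes aggregate summaries of a single numeric attribute, using the five Dalmatian heights from Fig. 1.

```python
# A minimal sketch (not from the paper): summarizing a numeric attribute as an
# aggregate, using the kinds of emergent properties listed above (center, spread,
# frequency). The heights are the five Dalmatian heights from Fig. 1.
import statistics

heights = [41, 37, 26, 30, 30]  # cm

center = {"mean": statistics.mean(heights), "median": statistics.median(heights)}
spread = {"range": max(heights) - min(heights), "stdev": statistics.stdev(heights)}
# A crude density/shape description: frequency of each value (the "classifier" view)
freq = {h: heights.count(h) for h in sorted(set(heights))}

print(center, spread, freq)
```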
Model and modeling. Models are analogies in which objects and relations in the model
system are used as stand-ins for those in the real world by means of representations, laws, and
structures of reasoning (Lehrer & Schauble, 2010). Modeling is the process of forming a model
on the basis of key theoretical aspects and data in a particular discipline, and of evaluating and
improving it to include theoretical ideas or new findings (Lesh, Carmona, & Post, 2002).
Modeling is considered a form of explanation that is characteristic, even defining, of
science. Model-based reasoning entails deliberately turning attention away from the
investigated phenomenon in order to construct a model (Lehrer & Schauble, 2010). A modeling
approach puts the modeling process (along with learning about the nature and purposes of
models) at the center of the learning process (Schwartz & White, 2005).
Mathematical models are abstract constructs that focus on structural characteristics or
on a general pattern that is common to several systems (Lesh & Harel, 2003). Mathematical
models are used in statistics to represent a general pattern in the data (Moore, 1990). Models
and modeling are essential components of statistical reasoning (Wild & Pfannkuch, 1999).
The practice of statistics is a form of modeling, as the development of models of data,
variability, and chance paves the way of the statistical investigation (Wild & Pfannkuch,
1999; Lehrer, Kim, Ayers, & Wilson, 2014). Statistical models have an important role in the
foundations of statistical thinking, and reasoning with models is considered both a general and
a specifically statistical type of thinking. The former relates, for example, to statistical
conceptions of the situation that influence how we collect data about the system and analyze
them; the latter relates, for example, to measuring and modeling variability for the purpose of
prediction, explanation, or control (Wild & Pfannkuch, 1999; Garfield & Ben-Zvi, 2008).
The main uses of statistical models are: 1) selecting, designing, and using a suitable
model to simulate data that will address a research question, for example by using a tool that
generates random data (e.g., dice), or by simulating a population distribution, based on a
sample of real data, from which inferences can be made while examining statistical concepts,
such as the representativeness of the sample (as in this case study); and 2) fitting a
statistical model to data in order to explain or describe the variation in the investigated
population, for example, fitting a linear model to data that describe a relationship
between variables (Garfield & Ben-Zvi, 2008).
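As a hedged illustration of usage (1), the sketch below (ours; the height values and percentages are assumed for illustration, not taken from the study) simulates a simple population model of dog heights and draws random samples of increasing size to examine how well they represent the model.

```python
# A sketch of usage (1): build a discrete model of a population from a small sample
# and draw random samples from it to examine representativeness. The weights below
# are illustrative assumptions, not the students' actual model.
import random

height_values  = [26, 28, 30, 32, 34, 37, 41]   # dog heights (cm)
height_weights = [ 5, 10, 25, 25, 15, 12,  8]   # assumed percentages

def draw_sample(n):
    """Generate n simulated dogs from the model."""
    return random.choices(height_values, weights=height_weights, k=n)

for n in (10, 100, 1000):
    sample = draw_sample(n)
    print(n, round(sum(sample) / n, 1), sorted(set(sample)))
    # Small samples often miss some modeled values; larger ones look more like the model.
```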
A modeling pedagogical approach that views data as a model of a situation in the real
world (Hancock et al., 1992) can serve as a bridge between data and probability (Konold &
Kazak, 2008) by providing multiple affordances to learn about random samples and sampling
from an investigated population, to consider key statistical ideas emanating from the study of a
hypothetical model of this population, and to examine the connections between these elements
(Manor, Ben-Zvi, & Aridor, 2013). For example, modeling random behavior (as in the
randomization test) might provide an opportunity to experience and reflect upon probabilistic
situations. It allows one to mimic such behavior in a real-world system, to answer questions about
that system, and to predict future outcomes. Modeling random behavior underpins the
quantification of uncertainty using statistical inference techniques such as confidence
intervals and significance testing (Arnold, Budgett, & Pfannkuch, 2013). A modeling
pedagogical approach can support learners in coordinating their understanding of particular
cases with an evolving notion of data as an aggregate of cases (Lehrer & Schauble, 2004),
among other things, through the need to summarize data in multiple ways depending on their nature
(Pfannkuch & Wild, 2004).
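The randomization-test idea mentioned above can be illustrated with the following sketch (our own illustration with made-up measurements, not part of the study's task): group labels are reshuffled many times to estimate how often a difference in means at least as large as the observed one arises by chance.

```python
# A minimal randomization-test sketch: shuffle group labels repeatedly to see how
# often a difference in group means as large as the observed one occurs by chance.
import random

group_a = [41, 37, 34]          # hypothetical measurements
group_b = [30, 30, 26, 28]
observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

pooled = group_a + group_b
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)                     # re-randomize group membership
    a, b = pooled[:len(group_a)], pooled[len(group_a):]
    diff = sum(a) / len(a) - sum(b) / len(b)
    if diff >= observed:
        count += 1

print("estimated chance of a difference this large:", count / trials)
```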
In this study we consider a model to be an analogy that simplifies a real
phenomenon and describes some of the connections and relations among its components. A
model can emerge through observation of the real phenomenon, by selecting and focusing on
the features that are relevant to the specific purpose for which it is constructed. A
model might be abstract (conceptual) or concrete (e.g., a graph, table, dice, or TP2 sampler). An
abstract model can represent a real-world system and the conjectures about it in order to
describe, explain, predict, and elaborate on its behavior (Wild & Pfannkuch, 1999; Lehrer &
Schauble, 2010). A concrete model can serve as a tool that represents a process, such as the
production of the population and its key components or properties through prediction or
sampling, or as a tool that supports the emergence of informal ideas (Garfield & Ben-Zvi,
2008).
We assume that each step of a statistical investigation entails a process of
emergence, development, refinement, or verification of a conceptual or concrete model,
according to a certain need or purpose. This process is related to an emerging ability to view
a real phenomenon in the context world globally. This view might entail conflicts between
context and data, which can support the development of an aggregate view of data. In this case
study, a conceptual model, followed by concrete models, was developed by a pair of
students in an attempt to describe a real phenomenon, make predictions about it, and
"produce" its population. We focus on the emergence of these models in relation to the students'
views of data.
Method
The research question. In this paper we focus on the question: How did the modeling of an
investigated phenomenon play a part in promoting (or hindering) the emergence of students'
aggregate views of data? To address this question, we use data from a pair of fifth-grade
students (aged 10) as they participated in the Dalmatians Task, an authentic inquiry involving
exploration, prediction, and explanation through statistical modeling in the context of ISI. This case
study is part of a UK-Israel collaboration (2012-2014) aimed at developing and studying a
modeling approach for teaching and learning statistics by integrating the benefits of
Exploratory Data Analysis (EDA) and Active Graphing (AG) (Ainley & Pratt, 2014; Ainley
et al., 2013).
The setting and participants. The participants are Iddo and Yael, a pair of academically
successful and articulate ten-year-olds (grade 5) from two Israeli public schools. The students
had no previous formal experience with statistics or TinkerPlots. Both students had learned earlier
that year in school how to calculate the arithmetic mean. The students spent three hours on
the Dalmatians Task.
Data collection and analysis. Two researchers introduced the task and the tools and
frequently asked the students to clarify their reasoning. The students' investigations were
fully videotaped using Camtasia and an additional video camera to capture their
computer screen as well as their discussions and actions. The videos were carefully observed,
transcribed, translated from Hebrew to English, and annotated for further analysis of the relationship
between modeling and the development of the students' aggregate view of data. We used the
interpretative microgenetic method (Siegler, 2006) to analyze the data: a detailed qualitative
analysis of the transcripts that takes into account verbal, gestural, and symbolic
actions within the situations in which they occurred. Interpretations were discussed by the UK
and Israeli researchers until a consensus was reached. Episodes were selected to illustrate the
students' development of an aggregate view of data through modeling in the context of ISI.
Differences between Hebrew and English connotations of words were discussed extensively.
The Dalmatians Task. The children were asked to plan a model that would produce
realistic Dalmatians of different sizes in order to create a theme park based on the 101 Dalmatians
movie. The learning trajectory (Table 1) was designed to encourage the students to reason
with key statistical ideas (such as models, distribution, center and variability (signal and
noise), and sample and sampling), to express uncertainty, and to develop an aggregate view of data.
Table 1. The Dalmatians Task learning trajectory.

a) Introduce and discuss the task (5 min.)
Content: Learn about the task and make conjectures about the dog population.
Ideas/concepts: Natural variability, reality vs. simulation.

b) Collect data (18 min.)
Content: Measure two real Labradors (we had no Dalmatians at hand) and discuss their properties and the relations between them.
Ideas/concepts: Natural variability, relations between attributes.

c) Discuss and analyse realistic data of five Dalmatians (48 min.)
Content: We provided data on five Dalmatians' spot color, height, tail length, body length, and leg length (Fig. 1). The students were asked to make conjectures, test them, and search for relations between the attributes using TP2. The quantitative variables' values were approximately simulated according to the relations between the body measures of real Dalmatians: body length is similar to height at the shoulder, leg length is between half and two thirds of the height at the shoulder, and tail length is a bit more than half the body length (a simulation sketch of these relations follows the table).
Ideas/concepts: Variability, uncertainty, relations within and between attributes.

Figure 1: Realistic data of five Dalmatians in a TP2 table.

  spots   height   tail_length   body_length   leg_length
  brown     41         23            40            22
  black     37         23            37            18
  black     26         13            27            14
  black     30         19            30            16
  black     30         15            31            17

d) Build a model (a 'machine') (30 min.)
Content: Plan and build a model in TP2 (a 'machine') to produce realistic Dalmatians.
Ideas/concepts: Distribution, range, center, variability, frequency, chance, reasonable data.

e) Draw random samples from the model (run the 'machine') (89 min.)
Content: Draw random sample graphs and compare them to the realistic data graph.
Ideas/concepts: Randomness, spread, chance, variability, population and sample, signal and noise.

f) Evaluate the model and improve it (10 min.)
Content: Evaluate and improve the model according to the realistic data and to expectations raised by the context world.
Ideas/concepts: Uncertainty, randomness, spread, chance, variability, population and sample, dependent and independent variables, signal and noise.
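The following sketch (ours, not part of the task materials; the ranges, weights, and noise terms are assumptions) shows one way "realistic" Dalmatian measurements could be simulated from the body-measure relations stated in stage (c).

```python
# A sketch of simulating realistic Dalmatians from the stated relations:
# body length ~ height, leg length between half and two thirds of height,
# tail length a bit more than half of body length. Ranges/weights are assumed.
import random

def make_dalmatian():
    height = random.randint(26, 41)                        # cm, roughly the Fig. 1 range
    body_length = height + random.randint(-1, 1)           # similar to height at shoulder
    leg_length = round(height * random.uniform(0.5, 0.67))
    tail_length = round(body_length * random.uniform(0.52, 0.62))
    spots = random.choices(["black", "brown"], weights=[4, 1])[0]
    return {"spots": spots, "height": height, "tail_length": tail_length,
            "body_length": body_length, "leg_length": leg_length}

for dog in (make_dalmatian() for _ in range(5)):
    print(dog)
```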
Summary of findings
The following description is provided to serve as a background for the viewing and
discussion of the video segments at the conference.
a) Introduce and discuss the task. The researcher (the first author) introduced the task goal to
the students and asked them: how could we generate realistic dogs that would be different
from each other? The students began to reason about the population and its characteristics,
considering variability in dogs' dimensions and temperament.
b) Collect data. The students first discussed how to measure the real Labradors and then
measured them accordingly. Yael's preliminary conjecture was that dogs' body
measurements are related to age, but to their surprise, the older dog (two years old) was
smaller in all her measurements than the younger dog (9 months). When they analyzed the
collected data, they compared the two dogs' measurements with each other, as well as
the measurements within each dog. They found that there was variability between the
dogs (one was bigger than the other) and within each dog (in the proportions between attributes).
They declared that "dogs are very different from each other".
c) Discuss and analyze the realistic data of five Dalmatians. After a short introduction to the
software, the students started analyzing the data (Fig. 1). They examined one variable at a
time in stacked dot plots, and relations between attributes in a scatterplot. Iddo saw a clear
relation between height and leg length. They examined their conjecture (Fig. 2a) and looked
mostly locally at the data, considering data as case values by focusing on extreme values.
Figure 2a (Left): Relation between leg length and height. Figure 2b (Right): Relation between
tail length and height.
Although they noticed a pattern in the data that strengthened their conjecture, the
students were bothered by three cases: two, four, and five (Figs. 1, 2a). Two of them (cases
four and five) had the same height (30 cm) and a similar leg length (16 and 17
cm), and the irregular case (case two) had a similar leg length (18 cm) but a bigger height (37
cm). In an attempt to make sense of this irregularity, the students searched for explanations in
other attributes. They concentrated on comparing the table's columns and rows and on looking at
graphs. For the rows, they found similarity between the values of the height and the body length
for each case. The focus on cases four and five led the students to isolate the tail length
attribute, arguing that this was the only attribute that distinguished between these cases, and to
discover another interesting pair of cases, cases one and two (Figs. 1, 2b), that had the same
tail length, as the students noticed in the graph, but differed in the other attributes, as the
students saw in the table. While moving back and forth between the table and the graph, the students
revealed relations between attributes mostly by searching for similar values of two attributes
of each case. Iddo generalized these relations in a way that took variability into account and
said that a dog that is biggest in one attribute is relatively big in the other attributes.
Yael refined her method for looking at the table and suggested another generalization
by referring to the differences between attributes of the same dog. She noticed that the
differences between attribute values for each specific dog were smaller than the differences
between attribute values of different dogs. Iddo used the TP2 pen to draw a trend line and pairs
of parallel lines from some cases to the axes, to emphasize an approximate y=x linear relation
between height and body length (Fig. 3).
Yael had an idea for generating more dogs, but she didn't express it clearly. She
decided to use paper to describe a new discovery (Fig. 4): a method to assess the strength
of a relation between attributes. She divided the four numerical attributes into two categories,
such that the differences between the values inside each category are small, while the differences
between values from different categories are big. She referred to two attributes in the same
category as 'closed' (e.g., height and body length) and to two attributes from different
categories as 'open' (e.g., body length and leg length).
Figure 3 (left): A trend line to emphasize the relationship between height and body
length. Figure 4 (middle): Yael’s discovery: types of relationships between attributes of a
phenomenon. Figure 5 (right): A model of the attribute height among Dalmatians in TP2.
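The kind of comparison the students carried out by eye in the table can be reconstructed roughly as follows (our sketch, using the Fig. 1 data; it is not the students' actual procedure): for each pair of numeric attributes, the average within-dog difference is computed, so that small averages correspond to Yael's 'closed' attributes and large ones to 'open' attributes.

```python
# A rough reconstruction of the within-dog comparison between pairs of attributes.
from itertools import combinations

dogs = [  # the Fig. 1 data: height, tail_length, body_length, leg_length (cm)
    {"height": 41, "tail_length": 23, "body_length": 40, "leg_length": 22},
    {"height": 37, "tail_length": 23, "body_length": 37, "leg_length": 18},
    {"height": 26, "tail_length": 13, "body_length": 27, "leg_length": 14},
    {"height": 30, "tail_length": 19, "body_length": 30, "leg_length": 16},
    {"height": 30, "tail_length": 15, "body_length": 31, "leg_length": 17},
]

for a, b in combinations(["height", "tail_length", "body_length", "leg_length"], 2):
    avg_diff = sum(abs(d[a] - d[b]) for d in dogs) / len(dogs)
    print(f"{a} vs {b}: average within-dog difference = {avg_diff:.1f} cm")
# height vs body_length comes out smallest ("closed");
# body_length vs leg_length comes out much larger ("open").
```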
d) Build a model (a 'machine') for one attribute. The students decided to model the relation
between two 'closed' attributes: height and body length. They suggested possible values for
the heights while getting familiar with the various TP2 modeling devices, which were introduced to
them by the researcher. Yael referred to the height range in the table and offered to slightly
increase it in the machine. They referred to the range and center of the height distribution,
considering the mean and the likelihood of a value being close to the center.
When the students used the curve device, a conflict arose between them: Yael insisted
on drawing an approximately normal curve, while Iddo tried to draw a bimodal curve. Iddo
explained his opinion by the different preferences people have for dogs' heights, or by the
frequency of the heights as he perceived it. Yael explained her motivation by the need to set
the heights according to the likelihood and chances of their occurrence. Although this
argument might suggest initial signs of an aggregate view of data, the students did not look at the
distribution of the heights as a whole, and tried to set the frequency of each height value in the
model. They tried each of the TP2 devices (mixer, stacks, curve, pie, and bars) in search of the
device that would allow them to do that easily, and decided to use the bars, which allowed them
to set the percentage of each height easily by dragging the cursor (Fig. 5).
e) Generate random samples from the model of height. The students took a random sample of
10 cases and tried to make sense of it. Iddo explained that the sampler chose values according
to the percentages given to it. They both said that if they sampled more dogs the "picture"
would be different, and they attributed the absence of a certain value they had set in the model
to the sample size.
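A hedged sketch of the 'machine' in stages (d) and (e) follows (ours; the height values and percentages are illustrative assumptions, not the ones the students drew in TP2): a bars-like device assigns a percentage to each height value, and a random sample of 10 is drawn from it, which often misses some of the values set in the model.

```python
# A "bars"-like device for height: each value gets a percentage, then a random
# sample of 10 is drawn from the resulting discrete distribution.
import random
from collections import Counter

height_percent = {26: 5, 28: 10, 30: 30, 32: 25, 34: 15, 37: 10, 41: 5}  # assumed

def run_machine(n):
    values = list(height_percent)
    weights = list(height_percent.values())
    return random.choices(values, weights=weights, k=n)

sample = run_machine(10)
print(Counter(sample))
missing = [v for v in height_percent if v not in sample]
print("values set in the model but absent from this sample of 10:", missing)
```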
f) Evaluate the model and improve it. The students added another bars device for the body
length. They set its range to be the same as the height's range after examining the table, and
Yael stated that its "arrangement" did not have to be the same as the one they had set for the
height. Once again they tried to model a normal distribution (Fig. 6). The children drew a
random sample of 10 from the model and were surprised not to get the linear relation they
expected. They tried to handle the noise in the data by editing the model, but neither changing
the frequencies of a certain range of the body length nor reducing its range helped. At this
point the researcher showed the students how to design a dependency between two
attributes. They divided the body length into five equal intervals and set a uniform
distribution for each of them, explaining that they would change it later (Fig. 7). They were
more satisfied with the random sample generated from the new model, as it produced data more
similar to the original data table and a clear signal of the relationship between height and
body length. But this clear signal raised a new problem that the students acknowledged: the
lack of noise.
Figure 6 (left): A model of the relation between height and body length among Dalmatians in
TP2. Figure 7 (Right): An improvement of the model of the relation between height and body
length among Dalmatians in TP2.
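The idea behind the improved, dependent model can be illustrated with the following much-simplified sketch (ours, not the students' actual TP2 sampler, which used five body-length intervals): body length is made to track the sampled height (the signal), with an optional small noise term of the kind the students' first dependent model lacked.

```python
# A simplified dependency between attributes: body length tracks height (signal),
# plus a small noise term representing natural variability around the signal.
import random

height_values  = [26, 28, 30, 32, 34, 37, 41]   # assumed, as in the earlier sketch
height_weights = [ 5, 10, 30, 25, 15, 10,  5]

def sample_dog(noise=True):
    height = random.choices(height_values, weights=height_weights, k=1)[0]
    body = height                                    # the dependency (clear signal)
    if noise:
        body += random.choice([-2, -1, 0, 1, 2])     # noise the students' model lacked
    return height, body

for h, b in (sample_dog() for _ in range(10)):
    print(f"height={h} body_length={b}")
```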
A bit of a discussion. After measuring the two Labradors, the students began to explore the
Dalmatians data with a sense of the large variability in the population. The strong authentic
context encouraged them to search for trends and patterns in the data. It seems that the need
to model the investigated phenomenon encouraged the students to invent various types of
models in order to make sense of the data and to produce dogs. While searching for similarity
in the data and for explanations of irregularity, they developed methods to compare
cases and attributes, first locally and then globally, using the table. They verified their discoveries
in the data and refined them constantly. Their initial view of data was local, considering
data as case values. We suggest that the students' focus on clusters of three cases elicited a
discussion about rules, which might be an expression of a rudimentary aggregate view. An
emergence of this initial reasoning was also seen when Yael suggested two categories to
assess the strength of a relation between attributes.
Building a concrete model in TP2 seems to have raised the need to take into
account the range, center, and shape of distributions. This sense was tested when a conflict
arose between the students about the attribute's distribution. While Yael felt the need to
describe a smooth, normal, and perhaps theoretical distribution, Iddo searched for a sense of
irregularity in the model and tried to describe it in the distribution. The need to model a
dependency between attributes and to examine random samples involved a refinement of the
model, along with reasoning about statistical ideas such as signal and noise, chance, sample
size, variability, and uncertainty.
Expected Contributions
We hope that the results of this case study will contribute to the discussion about the aggregate
view of data in relation to modeling approaches in IIR, as well as provide grounds for
further research that will expand the existing knowledge about these issues.
Our selected video segments for SRTL9 are expected to provide fertile grounds for
discussing the role of statistical modeling in promoting (or hindering) the emergence of students'
aggregate view of data:
1. How do students' articulations of an aggregate view emerge while they explore data in
an attempt to model a real phenomenon?
2. What might be the relations between the development of ideas and concepts about
statistical models and modeling and the development of an aggregate view of data?
3. How can reasoning about modeling and aggregate views of data in the context of ISI
be further developed at the primary level?
4. What was the role of TP2 in shaping the students' aggregate views of data and
models?
References
Ainley, J., Aridor, K., Ben-Zvi, D., Manor, H., & Pratt, D. (2013). Children's expressions of
uncertainty in statistical modelling. In J. Garfield (Ed.), Proceedings of the Eighth
International Research Forum on Statistical Reasoning, Thinking, and Literacy (SRTL-8)
(CD). Minneapolis, MN, USA: University of Minnesota.
Ainley, J., & Pratt, D. (2014). Expressions of uncertainty when variation is partially-
determined. In K. Makar, B. de Sousa, and R. Gould (Eds.), Sustainability in statistics
education (Proceedings of the Ninth International Conference on Teaching Statistics,
ICOTS9, July 2014). Voorburg, The Netherlands: International Association for Statistical
Education and International Statistical Institute.
Arnold, P., Budgett, S., & Pfannkuch, M. (2013). Experiment-to-causation inference:
The emergence of new considerations regarding uncertainty. In J. Garfield (Ed.),
Proceedings of the Eighth International Research Forum on Statistical Reasoning,
Thinking, and Literacy (SRTL-8) (CD). Minneapolis, MN, USA: University of Minnesota.
Bakker, A., Biehler, R., & Konold, C. (2004). Should young students learn about Boxplots?
In G. Burrill & M. Camden (Eds.), Curricular development in statistics education, IASE
2004 Roundtable on Curricular Issues in Statistics Education, Lund Sweden. Voorburg,
the Netherlands: International Statistics Institute.
Bakker, A., & Hoffmann, M. (2005). Diagrammatic reasoning as the basis for developing
concepts: A semiotic analysis of students' learning about statistical distribution.
Educational Studies in Mathematics, 60, 333-358.
Bakker, A., & Gravemeijer, K.P.E. (2004). Learning to reason about distributions. In D. Ben-
Zvi & J. Garfield (Eds.), The Challenge of Developing Statistical Literacy, Reasoning, and
Thinking (pp. 147-168). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Ben-Zvi, D., & Arcavi, A. (2001a). Junior high school students' construction of global views
of data and data representations. Educational Studies in Mathematics, 45, 35-65.
Ben-Zvi, D., Gil, E., & Apel, N. (2007). What is hidden beyond the data? Helping young
students to reason and argue about some wider universe. In D. Pratt & J. Ainley (Eds.),
Reasoning about Informal Inferential Statistical Reasoning: A collection of current
research studies. Proceedings of the Fifth International Research Forum on Statistical
Reasoning, Thinking, and Literacy (SRTL-5), University of Warwick, UK, August, 2007.
Cobb, P. (1999). Individual and collective mathematical development: The case of statistical
data analysis. Mathematical Thinking and Learning, 1(1), 5-43.
Friel, S. (2007). The research frontier: Where technology interacts with the teaching and
learning of data analysis and statistics. In G.W. Blume & M.K. Heid (Eds.), Research on
technology and the teaching and learning of mathematics: Cases and perspectives, 2 (pp.
279-331). Greenwich, CT: Information Age Publishing, Inc.
Garfield, J., & Ben-Zvi, D. (2008). Developing Students' Statistical Reasoning: Connecting
Research and Teaching Practice. Springer.

Hancock, C., Kaput, J. J., & Goldsmith, L. T. (1992). Authentic enquiry with data: Critical
barriers to classroom implementation. Educational Psychologist, 27(3), 337-364.
Konold, C., Higgins, T., Russell, S. J., & Khalil, K. (2014). Data seen through different
lenses. Educational Studies in Mathematics. DOI 10.1007/s10649-013-9529-8.
Konold, C., & Kazak, S. (2008). Reconnecting data and chance. Technology Innovations in
Statistics Education, 2(1), Article 1.
Konold, C., & Miller, C. (2011). TinkerPlots (Version 2.0) [Computer software]. Key
Curriculum Press. Online: http://www.keypress.com/tinkerplots.
Lehrer, R., Kim, M-J., Ayers, E., & Wilson, M. (2014). Toward establishing a learning
progression to support the development of statistical reasoning. In J. Confrey and A.
Maloney (Eds.), Learning over time: Learning trajectories in mathematics education.
Charlotte, NC: Information Age Publishers.
Lehrer, R., & Schauble, L. (2004). Modeling natural variation through distribution. American
Educational Research Journal, 41(3), 635-679.
Lehrer, R., & Schauble, L. (2010). What kind of explanation is a model? In M.K. Stein (Ed.),
Instructional Explanations in the Disciplines (pp. 9-22). New York: Springer.
Lesh, R., Carmona, G., & Post, T. (2002). Models and modeling. In D. Mewborn, P. Sztajn,
D. White, H. Wiegel, R. Bryant, et al. (Eds.), Proceedings of the 24th annual meeting of
the North American Chapter of the International Group for the Psychology of
Mathematics Education (Vol. 1, pp. 89-98) Columbus, OH: ERIC Clearinghouse.
Lesh, R., & Harel, G. (2003). Problem solving, modeling, and local conceptual development.
International Journal of Mathematics Thinking and Learning, 5, 157-189.
Makar, K., Bakker, A., & Ben-Zvi, D. (2011). The reasoning behind informal statistical
inference. Mathematical Thinking and Learning, 13(1), 152-173.
Makar, K., & Rubin, A. (2009). A framework for thinking about informal statistical
inference. Statistics Education Research Journal, 8(1), 82-105.
Manor, H., Ben-Zvi, D., & Aridor, K. (2013). Students' emergent reasoning about
uncertainty while building informal confidence intervals in an "integrated approach". In J.
Garfield (Ed.), Proceedings of the Eighth International Research Forum on Statistical
Reasoning, Thinking, and Literacy (SRTL-8). Minneapolis, MN, USA:
University of Minnesota.
Moore, D. S. (1990). Uncertainty. In L. A. Steen (Ed.), On the shoulders of giants: A new
approach to numeracy (pp. 95-137). Washington, DC: National Academy of Sciences.
Pfannkuch, M., & Wild, C. (2004). Towards an understanding of statistical thinking. In D.
Ben-Zvi, & J. Garfield, (Eds.), The challenge of developing statistical literacy, reasoning,
and thinking (pp. 17-46). Dordrecht, Netherlands: Kluwer Academic Publishers.
Rubin, A., Hammerman, J. K. L., & Konold, C. (2006). Exploring informal inference with
interactive visualization software. In Proceedings of the Seventh International Conference
on Teaching Statistics. Salvador, Brazil.
Siegler, R. S. (2006). Microgenetic analyses of learning. In W. Damon & R.M. Lerner (Series
Eds.) & D. Kuhn & R.S. Siegler (Vol. Eds.), Handbook of child psychology: Volume 2:
Cognition, perception, and language (6th ed., pp. 464-510). Hoboken, NJ: Wiley.
Schwartz, C., & White, B. (2005). Meta-modeling knowledge: Developing students'
understanding of scientific modeling. Cognition and Instruction, 23(2), 165-205.
Wild, C. J., & Pfannkuch, M. (1999). Statistical thinking in empirical enquiry (with
discussion). International Statistical Review, 67, 223-265.
* This study was supported by the British Academy Small Research Grant Scheme (SG112288). The views expressed in this
paper do not necessarily reflect the views or policy of the British Academy.
This paper discusses the thought processes involved in statistical problem solving in the broad sense from problem formulation to conclusions. It draws on the literature and in-depth interviews with statistics students and practising statisticians aimed at uncovering their statistical reasoning processes. From these interviews, a four-dimensional framework has been identified for statistical thinking in empirical enquiry. It includes an investigative cycle, an interrogative cycle, types of thinking and dispositions. We have begun to characterise these processes through models that can be used as a basis for thinking tools or frameworks for the enhancement of problem-solving. Tools of this form would complement the mathematical models used in analysis and address areas of the process of statistical investigation that the mathematical models do not, particularly areas requiring the synthesis of problem-contextual and statistical understanding. The central element of published definitions of statistical thinking is “variation”. We further discuss the role of variation in the statistical conception of real-world problems, including the search for causes.