Same Stats, Different Graphs:
Generating Datasets with Varied Appearance and
Identical Statistics through Simulated Annealing
Justin Matejka and George Fitzmaurice
Autodesk Research, Toronto Ontario Canada
{first.last}@autodesk.com
Figure 1. A collection of data sets produced by our technique. While different in appearance, each has the same summary statistics (mean, std. deviation, and Pearson's corr.) to 2 decimal places (x̄ = 54.02, ȳ = 48.09, sdx = 14.52, sdy = 24.79, Pearson's r = +0.32).
ABSTRACT
Datasets which are identical over a number of statistical
properties, yet produce dissimilar graphs, are frequently used
to illustrate the importance of graphical representations when
exploring data. is paper presents a novel method for
generating such datasets, along with several examples. Our
technique varies from previous approaches in that new
datasets are iteratively generated from a seed dataset through
random perturbations of individual data points, and can be
directed towards a desired outcome through a simulated
annealing optimization strategy. Our method has the benefit
of being agnostic to the particular statistical properties that
are to remain constant between the datasets, and allows for
control over the graphical appearance of resulting output.
INTRODUCTION
Anscome’s Quartet [1] is a set of four distinct datasets each
consisting of 11 (x,y) pairs where each dataset produces the
same summary statistics (mean, standard deviation, and
correlation) while producing vastly different plots (Figure
2A). is dataset is frequently used to illustrate the
importance of graphical representations when exploring
data. e effectiveness of Anscombe’s Quartet is not due to
simply having four different data sets which generate the
same statistical properties, it is that four clearly dierent and
identifiably distinct datasets are producing the same
statistical properties. Dataset I appears to follow a somewhat
noisy linear model, while Dataset II is following a parabolic
distribution. Dataset III appears to be strongly linear, except
for a single outlier, while Dataset IV forms a vertical line
with the regression thrown off by a single outlier. In contrast,
Figure 2B shows a series of datasets also sharing the same
summary statistics as Anscombe's Quartet; however, without
any obvious underlying structure to the individual datasets,
this quartet is not nearly as effective at demonstrating the
importance of graphical representations.
While very popular and effective for illustrating the
importance of visualizations, it is not known how Anscombe
came up with his datasets [5]. Our work presents a novel
method for creating datasets which are identical over a range
of statistical properties, yet produce dissimilar graphics. Our
method differs from previous approaches by being agnostic to the
particular statistical properties that are to remain constant
between the datasets, while allowing for control over the
graphical appearance of resulting output.
Figure 2. (A) Anscombe's Quartet, with each dataset having the same mean, standard deviation, and correlation. (B) Four unstructured datasets, each also having the same statistical properties as those in Anscombe's Quartet.
RELATED WORK
As alluded to above, producing multiple datasets with similar
statistics and dissimilar graphics was introduced by
Anscombe in 1973 [1]. “Graphs in Statistical Analysis” starts
by listing three notions prevalent about graphs at the time:
(1) Numerical calculations are exact, but graphs are
rough;
(2) For any particular kind of statistical data there
is just one set of calculations constituting a
correct statistical analysis;
(3) Performing intricate calculations is virtuous,
whereas actually looking at the data is cheating.
While one cannot argue that there is currently as much
resistance towards graphical methods as when Anscombe's
paper was originally published, the datasets described in the
work (Figure 2A) are still effective and frequently used for
introducing or reinforcing the importance of visual methods.
Unfortunately, Anscombe does not report how the datasets
were created, nor suggest any method to create new ones.
The first attempt at producing a generalized method for creating such datasets was published in 2007 by Chatterjee and Firat [5]. They proposed a genetic algorithm based
approach where 1,000 random datasets were created with
identical summary statistics, then combined and mutated
with an objective function to maximize the “graphical
dissimilarity” between the initial and final scatter plots.
While the datasets produced were graphically dissimilar to
the input datasets, they did not have any discernable structure
in their composition. Our technique differs by providing a
mechanism to direct the solutions towards a specific shape,
as well as allowing for variety in the statistical measures
which are to remain constant between the solutions.
Govindaraju and Haslett developed a method for regressing
datasets towards their sample means while maintaining the
same linear regression formula [7]. In 2009, the same authors
extended their procedure to creating “cloned” datasets [8]. In
addition to maintaining the same linear regression as the seed
dataset, their cloned datasets also maintained the same means
(but not the same standard deviations). While Chatterjee and
Firat [5] wanted to create datasets as graphically dissimilar
as possible, Govindaraju and Haslett’s cloned datasets were
designed to be visually similar, with a proposed application
of confidentializing sensitive data for publication purposes.
While our technique is primarily aimed at creating visually
distinct datasets, by choosing appropriate statistical tests to
remain constant through the iterations (such as a
Kolmogorov-Smirnov test) our technique can produce
datasets with similar graphical characteristics as well.
In the area of generating synthetic datasets, GraphCuisine [2]
allows users to direct an evolutionary algorithm to create
network graphs matching user-specified parameters. While
this work looks at a similar problem, it differs in that it is
focused on network graphs, is an interactive system, and
allows for directly specifying characteristics of the output,
while our technique looks at 1D or 2D distributions of data,
is non-interactive, and perturbs the data such that the initial
statistical properties are maintained throughout the process.
Finally, on the topic of using scatter plots to encode graphics,
Residual (Sur)Realism [11] produces datasets with hidden
images which are only revealed when appropriate statistical
measures are performed. Conversely, our technique encodes
graphical appearance into the data directly.
METHOD
The key insight behind our approach is that while generating a dataset from scratch to have particular statistical properties is relatively difficult, it is relatively easy to take an existing
dataset, modify it slightly, and maintain (nearly) the same
statistical properties. With repetition, this process creates a
dataset with a different visual appearance from the original,
while maintaining the same statistical properties. Further, if
the modifications to the dataset are biased to move the points
towards a particular goal, the resulting graph can be directed
towards a particular visual appearance.
The pseudocode for the high-level algorithm is listed below. INITIAL_DS is the seed dataset from which the statistical values we wish to maintain are calculated. The PERTURB function is called at each iteration of the algorithm to modify the latest version of the dataset (CURRENT_DS) by moving one or more points by a small amount, in a random direction. The "small amount" is chosen from a normal distribution and is calibrated such that >95% of movements result in the statistical properties of the overall dataset remaining unchanged (to two decimal places).
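As an illustration of this step, a minimal Python sketch of a perturbation function is given below. The function name, the use of NumPy, and the step size sigma are our own assumptions; the paper only specifies that the movement is small, normally distributed, and in a random direction.

import numpy as np

def perturb_point(xs, ys, sigma=0.1, rng=None):
    """Move one randomly chosen point by a small, normally distributed
    amount in a random direction (sigma is an assumed step size that
    would be calibrated so that >95% of moves leave the rounded
    statistics unchanged)."""
    rng = rng or np.random.default_rng()
    xs, ys = xs.copy(), ys.copy()
    i = rng.integers(len(xs))            # pick one point at random
    xs[i] += rng.normal(0.0, sigma)      # small nudge in x
    ys[i] += rng.normal(0.0, sigma)      # small nudge in y
    return xs, ys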
Once the individual points have been moved, the FIT function is used to check if perturbing the points has increased the overall fitness of the dataset. The fitness can be calculated in a variety of ways, but for conditions where we want to coerce the dataset into a shape, fitness is calculated as the average distance of all points to the nearest point on the target shape.
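One plausible way to implement such a fitness function in Python, when the target shape is given as a list of line segments (as in Example 1 below), is sketched here. The sign convention (negating the mean distance so that a larger value is a better fit, matching FIT(test) > FIT(ds) in the pseudocode) and all names are our assumptions.

import numpy as np

def point_segment_distance(p, a, b):
    """Distance from 2D point p to the line segment from a to b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def fitness(xs, ys, segments):
    """Negated average distance from each point to its nearest target
    segment, so that larger values indicate a better fit."""
    pts = np.column_stack([xs, ys])
    dists = [min(point_segment_distance(p, a, b) for a, b in segments)
             for p in pts]
    return -float(np.mean(dists))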
The naïve approach of accepting only datasets with an improved fitness value can result in getting stuck in locally optimal solutions when other, more globally optimal solutions are possible. To mitigate this possibility, we employ a simulated annealing technique [9]:

current_ds ← initial_ds
for x iterations, do:
    test_ds ← PERTURB(current_ds, temp)
    if ISERROROK(test_ds, initial_ds):
        current_ds ← test_ds

function PERTURB(ds, temp):
    loop:
        test ← MOVERANDOMPOINTS(ds)
        if FIT(test) > FIT(ds) or temp > RANDOM():
            return test

With the possible solutions generated in each iteration, simulated annealing works by always accepting solutions which improve the fitness; if the fitness is not improved, the solution may still be accepted based on the "temperature" of the simulated annealing algorithm. If the current temperature is greater than a random number between 0 and 1, the solution is accepted even though the fitness is worsened. We found that using a quadratically-smoothed monotonic cooling schedule starting with a temperature of 0.4 and finishing with a temperature of 0.01 worked well for the sample datasets.
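A possible rendering of the cooling schedule and acceptance rule in Python is shown below. The exact quadratic smoothing is not specified in the paper, so the easing curve here is an assumption; the start and end temperatures are the values reported above.

def temperature(iteration, n_iters, t_start=0.4, t_end=0.01):
    """Monotonically decreasing, quadratically eased temperature from
    t_start down to t_end (the precise smoothing is an assumption)."""
    frac = iteration / max(n_iters - 1, 1)
    return t_end + (t_start - t_end) * (1.0 - frac) ** 2

def accept(fit_new, fit_old, temp, rng):
    """Always accept improvements; otherwise accept with probability
    equal to the current temperature (temp > random number in [0, 1))."""
    return fit_new > fit_old or temp > rng.random()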
Once the perturbed dataset has been accepted, either through
an improved fitness value or from the simulated annealing
process, the perturbed dataset is compared to the initial
dataset for statistical equivalence. For the examples in this
paper we consider properties to be “the same” if they are
equal to two decimal places. The ISERROROK function
compares the statistics between the datasets, and if they are
equal (to the specified number of decimal places), the result
from the current iteration becomes the new current state.
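For the "standard" summary statistics used in Example 1, this check could be sketched in Python as follows; the use of sample standard deviations (ddof=1) and the function names are our assumptions.

import numpy as np

def is_error_ok(test_xy, initial_xy, decimals=2):
    """True when the statistics being held fixed (x/y means, x/y sample
    standard deviations, and Pearson's r) match the seed dataset after
    rounding to the given number of decimal places."""
    def stats(xy):
        xs, ys = xy
        return (np.mean(xs), np.mean(ys),
                np.std(xs, ddof=1), np.std(ys, ddof=1),
                np.corrcoef(xs, ys)[0, 1])
    return all(round(a, decimals) == round(b, decimals)
               for a, b in zip(stats(test_xy), stats(initial_xy)))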
Example Generated Datasets
Example 1: Coercion Towards Target Shapes
In this first example (Figure 1), each dataset contains 182 points and is equal (to two decimal places) for the "standard" summary statistics (x/y mean, x/y standard deviation, and Pearson's correlation). Each dataset was seeded with the plot in the top left. The target shapes are
specified as a series of line segments, and the shapes used in
this example are shown in Figure 3.
Figure 3. e initial data set (top-left), and line segment
collections used for directing the output towards specific
shapes. e results are seen in Figure 1.
With this example dataset, the algorithm ran for 200,000
iterations to achieve the final results. On a laptop computer
this process took ~10 minutes. Figure 4 shows the
progression of one of the datasets towards the target shape.
Figure 4. Progression of the algorithm towards a target
shape over the course of the cooling schedule.
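Tying the sketches above together, an overall generation loop along the lines of the pseudocode might look like the following; all function names refer to the assumed sketches from the previous section, not the authors' implementation, and the inputs are assumed to be NumPy arrays.

import numpy as np

def generate(initial_xs, initial_ys, segments, n_iters=200_000, seed=0):
    """Iteratively perturb the seed dataset towards the target segments,
    keeping only statistically equivalent datasets (a sketch built from
    the functions defined earlier, not the authors' code)."""
    rng = np.random.default_rng(seed)
    cur_xs, cur_ys = initial_xs.copy(), initial_ys.copy()
    for it in range(n_iters):
        temp = temperature(it, n_iters)
        xs, ys = perturb_point(cur_xs, cur_ys, rng=rng)
        better = accept(fitness(xs, ys, segments),
                        fitness(cur_xs, cur_ys, segments), temp, rng)
        if better and is_error_ok((xs, ys), (initial_xs, initial_ys)):
            cur_xs, cur_ys = xs, ys
    return cur_xs, cur_ys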
Example 2: Alternate Statistical Measures
One benefit of our approach over previous methods is that
the iterative process is agnostic to the particular statistical
properties which remain constant between the datasets. In
this example (Figure 5) the datasets are derived from the
same initial dataset as in Example 1, but rather than being
equal on the parametric properties, the datasets are equal in
the non-parametric measures of x/y median, x/y interquartile
range (IQR), and Spearman's rank correlation coefficient.
Figure 5. Example datasets are equal in the non-parametric
statistics of x/y median (53.73, 46.21), x/y IQR (19.17, 37.92),
and Spearman’s rank correlation coecient (+0.31).
Example 3: Specific Initial Dataset
The previous two examples used a rather "generic" dataset of
a slightly positively correlated point cloud as the starting
point of the optimization. Alternately, it is possible to begin
with a very specific dataset to seed the optimization.
Figure 6. Creating a collection of datasets based on the "Datasaurus" dataset. Each dataset has the same summary statistics to two decimal places: (x̄ = 54.26, ȳ = 47.83, sdx = 16.76, sdy = 26.93, Pearson's r = -0.06).
Alberto Cairo produced a dataset called the "Datasaurus" [4]. Like Anscombe's Quartet, it serves as a reminder of the importance of visualizing your data, since, although the dataset produces "normal" summary statistics, the resulting plot is a picture of a dinosaur. In this example we use the Datasaurus as the initial dataset, and create other datasets with the same summary statistics (Figure 6).
Example 4: Simpson’s Paradox
Another instrument for demonstrating the importance of
visualizing your data is Simpson's Paradox [3, 10]. This
paradox occurs with data sets where a trend appears when
looking at individual groups in the data, but disappears or
reverses when the groups are combined.
To create a dataset exhibiting Simpson’s Paradox, we start
with a strongly positively correlated dataset (Figure 7A), and
then perturb and direct that dataset towards a series of
negatively sloping lines (Figure 7B). The resulting dataset
(Figure 7C) has the same positive correlation as the initial
dataset when looked at as a whole, while the individual
groups each have a strong negative correlation.
Figure 7. Demonstration of Simpson's Paradox. Both
datasets (A and C) have the same overall Pearson's
correlation of +0.81, however after coercing the data
towards the pattern of sloping lines (B), each subset of data
in (C) has an individually negative correlation.
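One way to specify such a target in the segment format assumed by the earlier fitness sketch is to build a set of short, negatively sloping segments whose centres step up and to the right; the counts, slope, and coordinate ranges below are illustrative assumptions, not the paper's settings.

import numpy as np

def simpson_targets(n_groups=5, seg_width=12.0, local_slope=-1.0,
                    x_range=(10.0, 90.0), y_range=(10.0, 90.0)):
    """Short, negatively sloping target segments whose centres follow a
    positive diagonal, so the overall trend stays positive while each
    group trends negative (all numeric values are assumptions)."""
    cx = np.linspace(x_range[0], x_range[1], n_groups)   # centre x values
    cy = np.linspace(y_range[0], y_range[1], n_groups)   # centre y values
    half = seg_width / 2.0
    return [((x - half, y - half * local_slope),
             (x + half, y + half * local_slope))
            for x, y in zip(cx, cy)]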
Example 5: Cloned Dataset with Similar Appearance
As discussed by Govindaraju and Haslett [8] another use for
datasets with the same statistical properties is the creation of
“cloned” datasets to anonymize sensitive data [6]. In this
case, it is important that individual data points are changed
while the overall structure of the data remains similar. This can be accomplished by performing a Kolmogorov-Smirnov test within the ISERROROK function for both x and y. By only accepting solutions where both the x and y K-S statistics are <0.05, we ensure that the result will have a similar shape to the original (Figure 8). This approach has the benefit of maintaining the x/y means and correlation as accomplished in previous work [8], and additionally the x/y standard deviations as well. This could also be useful for "graphical
inference” [12] to create a collection of variant plots
following the same null hypothesis.
Figure 8. Example of creating a “mirror” dataset as in [8].
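A sketch of such a shape-preservation check in Python, using SciPy's two-sample Kolmogorov-Smirnov test and intended to be applied within ISERROROK alongside the usual statistical checks, is given below; the function name is an assumption.

from scipy import stats

def is_shape_preserved(test_xy, initial_xy, threshold=0.05):
    """Accept only solutions whose x and y marginal distributions remain
    close to the seed dataset: the two-sample K-S statistic must stay
    below the threshold for both coordinates."""
    (tx, ty), (ix, iy) = test_xy, initial_xy
    ks_x = stats.ks_2samp(tx, ix).statistic
    ks_y = stats.ks_2samp(ty, iy).statistic
    return ks_x < threshold and ks_y < threshold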
Example 6: 1D Boxplots
To demonstrate the applicability of our approach to non-2D-scatterplot data, this example uses a 1D distribution of data as represented by a boxplot. The most common variety of boxplot, the "Tukey Boxplot", presents the 1st quartile, median, and 3rd quartile values on the "box", with the "whiskers" showing the location of the furthest datapoints within 1.5 interquartile ranges (IQR) of the 1st and 3rd quartiles. Starting with the data in a normal distribution
(Figure 9A) and perturbing the data to the left (B), right (C),
edges (D, E), and arbitrary points along the range (F) while
ensuring that the boxplot statistics remain constant produces
the results shown in Figure 9.
Figure 9. Six data distributions, each with the same 1st
quartile, median, and 3rd quartile values, as well as equal
locations for points 1.5 IQR from the 1st and 3rd quartiles.
Each dataset produces an identical boxplot.
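A Python sketch of the corresponding equivalence check for this 1D case follows; computing the quartiles with NumPy's default percentile interpolation is an assumption, as are the function names.

import numpy as np

def boxplot_stats(values):
    """Tukey boxplot statistics: Q1, median, Q3, and the whisker ends
    (the furthest data points within 1.5 IQR of the 1st/3rd quartiles)."""
    v = np.asarray(values, dtype=float)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lower_whisker = v[v >= q1 - 1.5 * iqr].min()
    upper_whisker = v[v <= q3 + 1.5 * iqr].max()
    return q1, med, q3, lower_whisker, upper_whisker

def same_boxplot(test_values, initial_values, decimals=2):
    """True when both 1D samples produce identical Tukey boxplots
    (all five statistics equal to the given number of decimal places)."""
    return all(round(a, decimals) == round(b, decimals)
               for a, b in zip(boxplot_stats(test_values),
                               boxplot_stats(initial_values)))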
LIMITATIONS AND FUTURE WORK
When the source dataset and the target shape are vastly
different, the produced output might not be desirable. An example is shown in Figure 10, where the data set from Figure 7A is coerced into a star. This problem can be
mitigated by coercing the data towards “simpler” patterns
with more coverage of the coordinate space such as lines
spanning the grid, or pre-scaling and positioning the target
shape to better align with the initial dataset.
Figure 10. Undesirable outcome (C) when coercing a
strongly positively correlated dataset (A) into a star (B).
The currently implemented fitness function looks only at the position of individual points in relation to the target shape, which can result in "clumping" of data points and sparse areas on the target shape. A future improvement could consider an additional goal to "separate" the points to encourage better coverage of the target shape in the output.
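One possible form of such an extension, reusing the fitness sketch from the METHOD section and adding a term that rewards a larger mean nearest-neighbour distance between points, is sketched below; the weighting is an arbitrary, untuned assumption.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def fitness_with_spread(xs, ys, segments, spread_weight=0.5):
    """Combine the shape term (negated mean distance to the nearest
    target segment, from the earlier fitness sketch) with a spread term
    (mean nearest-neighbour distance between points) to discourage
    clumping.  The weight is an assumed, untuned value."""
    shape_term = fitness(xs, ys, segments)        # sketch defined earlier
    dmat = squareform(pdist(np.column_stack([xs, ys])))
    np.fill_diagonal(dmat, np.inf)                # ignore self-distances
    spread_term = float(np.mean(dmat.min(axis=1)))
    return shape_term + spread_weight * spread_term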
The parameters chosen for the algorithm (95% success rate,
quadratic cooling scheme, start/end temperatures, etc.) were
found to work well, but should not be considered “optimal”.
Such optimization is left as future work.
The code and datasets presented in this work are available at
www.autodeskresearch.com/publications/samestats.
CONCLUSION
We presented a technique for creating visually dissimilar
datasets which are equal over a range of statistical properties.
The outputs from our method can be used to demonstrate the
importance of visualizing your data, and may serve as a
starting point for new data anonymization techniques.
REFERENCES
1. Anscombe, F.J. (1973). Graphs in Statistical Analysis.
The American Statistician 27, 1, 17–21.
2. Bach, B., Spritzer, A., Lutton, E., and Fekete, J.-D.
(2012). Interactive Random Graph Generation with
Evolutionary Algorithms. SpringerLink, 541–552.
3. Blyth, C.R. (1972). On Simpson’s Paradox and the
Sure-ing Principle. Journal of the American
Statistical Association 67, 338, 364–366.
4. Cairo, A. Download the Datasaurus: Never trust
summary statistics alone; always visualize your data.
http://www.thefunctionalart.com/2016/08/download-
datasaurus-never-trust-summary.html.
5. Chatterjee, S. and Firat, A. (2007). Generating Data
with Identical Statistics but Dissimilar Graphics. The
American Statistician 61, 3, 248–254.
6. Fung, B.C.M., Wang, K., Chen, R., and Yu, P.S.
(2010). Privacy-preserving Data Publishing: A Survey
of Recent Developments. ACM Comput. Surv. 42, 4,
14:1–14:53.
7. Govindaraju, K. and Haslett, S.J. (2008). Illustration of
regression towards the means. International Journal of
Mathematical Education in Science and Technology
39, 4, 544–550.
8. Haslett, S.J. and Govindaraju, K. (2009). Cloning
Data: Generating Datasets with Exactly the Same
Multiple Linear Regression Fit. Australian & New
Zealand Journal of Statistics 51, 4, 499–503.
9. Hwang, C.-R. Simulated annealing: Theory and
applications. Acta Applicandae Mathematica 12, 1,
108–111.
10. Simpson, E.H. (1951). The Interpretation of Interaction
in Contingency Tables. Journal of the Royal Statistical
Society. Series B (Methodological) 13, 2, 238–241.
11. Stefanski, L.A. (2007). Residual (Sur)Realism. The American Statistician.
12. Wickham, H., Cook, D., Hofmann, H., and Buja, A.
(2010). Graphical inference for infovis. IEEE
Transactions on Visualization and Computer Graphics
16, 6, 973–979.