Data Comics for Reporting Controlled User Studies in
Human-Computer Interaction
Zezhong Wang, Jacob Ritchie*, Jingtao Zhou*, Fanny Chevalier, Benjamin Bach
Abstract
—Inspired by data comics, this paper introduces a novel format for reporting controlled studies in the domain of human-
computer interaction (HCI). While many studies in HCI follow similar steps in explaining hypotheses, laying out a study design, and
reporting results, many of these decisions are buried in blocks of dense scientific text. We propose leveraging data comics as study
reports to provide an open and glanceable view of studies by tightly integrating text and images, illustrating design decisions and key
insights visually, resulting in visual narratives that can be compelling to non-scientists and researchers alike. Use cases of data comics
study reports range from illustrations for non-scientific audiences to graphical abstracts, study summaries, technical talks, textbooks,
teaching, blogs, supplementary submission material, and inclusion in scientific articles. This paper provides examples of data comics
study reports alongside a graphical repertoire of examples, embedded in a framework of guidelines for creating comics reports which
was iterated upon and evaluated through a series of collaborative design sessions.
Index Terms—Statistical communication, comics, scientific reports
1 INTRODUCTION
Effective communication in academic research is crucial, as it allows
readers to assess the rigor of scientific methods and to build trust in
scientific results. In particular, controlled experiments require detailed
descriptions of key information related to procedures, hypotheses, con-
ditions, data, and experimental setups, as well as careful reporting of
data analysis and discussion of the results. Accessible presentation of
this information is key not only for peer-review and replication, but for
scientific dissemination, the training of students, and communication
of research methodology and results to the general public. However,
despite guidelines and conventions for technical writing, study reports
often result in compressed descriptions that use domain-specific jargon
and complex visualizations; important information and methodological
decisions risk being buried, complex relations may not be sufficiently
explained, or readers might simply overlook information that is crucial
for interpretation, discussion, and application of the results [21], adding
to research debt [60] and creating a need for interpretive labour [38].
To make research findings more accessible to both experts and non-
experts, a range of formats have emerged to complement traditional jour-
nal and conference articles. Examples include graphical abstracts [44],
pictorials, posters, videos, interactive explanations [75], and academic
blogs [3, 5], which have become key factors in initiating and increasing
public engagement and effective communication [31, 47].
These formats, often created by professionals [44], make compelling
use of graphical representations, animation, interaction, and provide
more space for detailed and understandable explanations, appropri-
ate for a specific audience. However, this often comes at the cost of
over-simplification, which can cause important details about methods,
conditions, decisions, and context to be sacrificed in order to create
compelling narration and content that is more easily understood.
In seeking to integrate and allow for both detailed presentation of
studies and easier access by readers—experts and non-experts—this
Zezhong Wang and Benjamin Bach are with The University of Edinburgh.
E-mail: {zezhong.wang, bbach}@ed.ac.uk.
Jacob Ritchie is with Stanford University. E-mail: jritchie@stanford.edu.
Jingtao Zhou is with The University of Edinburgh and Tianjin University.
E-mail: i@tzingtao.com
Fanny Chevalier is with University of Toronto.
E-mail: fanny@cs.toronto.edu.
*Jacob Ritchie and Jingtao Zhou contributed equally to the project.
Manuscript received xx xxx. 201x; accepted xx xxx. 201x. Date of Publication
xx xxx. 201x; date of current version xx xxx. 201x. For information on
obtaining reprints of this article, please send e-mail to: reprints@ieee.org.
Digital Object Identifier: xx.xxxx/TVCG.201x.xxxxxxx
paper explores the use of data comics [11, 84] to report controlled
user studies in Human-Computer Interaction (HCI) and visualization.
Our goal is to provide a framework for researchers and science com-
municators, guiding the dissemination process and designing visual
narratives for scientific study reports (Figure 1). We opted for comics
because of their great flexibility and numerous characteristics beneficial
to communication, including (i) tight integration of visual and textual
explanations, (ii) sequential presentation of information, (iii) visual
content for recall and quick navigation [79], and (iv) different levels of
reading, i.e., quick overview as well as access to details, from skimming
to close reading. As a static medium, comics are ideal for storyboarding
and ideation [77]; there are minimal technical barriers to production,
and they can be shared in many forms such as scientific papers, con-
ference posters, slide shows, grant proposals and blogs. Moreover,
comics are different from cartoons, as the basic principles of comics
can fit different visual styles to make communication appropriate for a
diverse set of audiences and media, e.g., ranging from hand sketches
through vector graphics to computer-generated visualizations [10, 14, 50].
Based on five design objectives we explore the use of comics and data
comics design patterns [13] to explain study setups and data analysis for
controlled experiments, through a guided design discussion and a series
of design iterations focused on previously published studies (Section 3).
We then elaborate a 10-stage framework to guide the storyboarding of
data comics for controlled user studies (Section 4), including stages
such as context, hypotheses, data transformation, result presentation,
etc. For each stage, we describe what information should be shown
and discuss design solutions that we found in our exploration and
working with seven participants in remote collaborative design sessions
(Section 5). The collaborative design process and guidelines engaged
participants in storyboarding comics for their own studies. We report
on visual solutions for common tasks such as sketching hypotheses,
reporting randomization, and detailed analysis of results. Feedback
from participants highlights the need for improved reporting methods
as well as the potential of data comics.
We do not aim to standardize reporting or to provide a visual gram-
mar for using data comics. Nor do we intend to replace other forms of
reporting studies and science. Rather, through our framework, which
describes critical information that should be retained when simplifying
study reporting for visual presentation, we hope to provide guidance and
best practices to encourage and inspire researchers and science commu-
nicators to engage in both rigorous reporting and efficient explanation of
their studies at different levels of detail according to the respective audi-
ence. Application scenarios include scientific and public talks, scientific
posters, teaching materials and text books. Beyond HCI and visualiza-
tion, our approach can generalize to other domains, given additional
practice and exploration. A comic gallery, workshop material, and stage
descriptions can be found at https://statscomics.github.io.
2 BACKGROUND
2.1 Scientific Reporting
To support writers, reviewers, and readers, numerous international
groups have published guidelines to standardize the information that
should be provided in publications on the design, conduct, and anal-
ysis of the experiments [4]. Guidelines exist for clinical trials [68],
animal research [49], observational studies in epidemiology [72], and
communication of empirical results in HCI [32]—including checklists
of information to include when reporting a randomised trial [40] or
in-vivo experiment [49]. Some guides prescribe estimation approaches
over dichotomous testing procedures for increased interpretability [32].
Other guidelines are established more implicitly through community
norms, examples in highly-read papers, and good scholarly training.
Still, technical reports may not always result in a clear and compre-
hensive account of the research [59]. Moreover, as publication migrates
online, it has become increasingly easy to append supplemental materi-
als to research articles. Open data repositories, project websites, and
data analysis scripts are now commonly used as a transparent demon-
stration of materials and study procedures to promote replication [42].
While this is extremely valuable, it adds to the complexity of infor-
mation, and given the increasing pace of publication in many fields,
even trained scientists can experience problems with understanding
reported information and interpreting the output of statistical tools.
Misconceptions and fallacies [32] as well as an increasing demand
for the piecing together of information from multiple sources can lead
to misguided or incorrect conclusions [18, 51], and well-documented
problems such as publication bias and the file drawer effect inflate the
number of incorrect claims in the literature [17, 64,66].
In our work, we go beyond the traditional article format, exploring
ways to report experiments—in line with scientific standards—that
promote access to scientific knowledge while preserving transparency.
2.2 Presentation and Explanation
In addition to prose, graphical content is key to reporting scientific
methods and results [55, 61]. Figures in the form of graphs, diagrams,
pictures, illustrations, and other data visualizations show experimen-
tal procedures [48], study setups [46], theoretical models [16], and
research hypotheses [35, 41]. Besides diagrams and illustrations, data
visualizations are a key means of conveying results graphically by
showing mean values, distributions, confidence intervals, outliers and
other sources of uncertainty. Symbolic representations, e.g., for confi-
dence intervals or quartiles in boxplots, can provide hurdles for novice
readers [30] and should be explained [77].
There have been a number of suggestions to change or augment
scientific reporting for better explanation. For example, Document
Cards [71] provide for automatic summaries of scientific papers, includ-
ing keywords and figures to facilitate browsing large document collec-
tions for relevance. However, our focus is not on concise summaries but
on understandable explanations. In this vein, fluid documents [82, 83]
allow readers to pull up supplemental information in-context, whereas
elastic documents [15] support linking of textual and tabular content
to automatically generated visualizations. Other new formats harness
the ability of papers in HTML and PDF to include animation and in-
teractivity to convey dynamic behaviours in user interfaces [39] or let
readers explore variability in result outcomes across multiple analysis
settings [33]. Similarly, the need to understand complicated concepts
in machine learning and data science paired with an increasing (public,
scientific, juristic, etc.) demand for transparency has led to an increase
in interactive explorable explanations. Pioneered by Bret Victor [75,76]
and now widely disseminated by platforms such as Distill.pub [1],
explorable explanations leverage interactive visualizations to explain
complex concepts [6]. In general, while there exist many other sources
to explain scientific methods (t-tests, ANOVA, etc.) our work focuses
on the analysis pipeline—its stages and the decisions made for the
conditions, factors, and setup, as well as the reporting of results—rather
than explaining the functioning of a specific aspect of, e.g., a statistical
model.
Our work is closer to graphical abstracts, a particular and increas-
ingly common format to illustrate the core idea of a research paper
through a single and concise graphic that can show concepts, research
methods, experimental setups, and results [44]. Comics, in comparison,
provide for more space and narrative elements to deal with complex
information and to enable a more in-depth understanding of a study’s
protocol and results.
2.3 Comics for Explaining Science and Data
Comics provide a unique medium for the communication of complex
content. Their expressive and compelling power for visual storytelling
comes from their unique combination of text and pictures [29, 36,56]
and their linear reading order, able to break down complexity into sequen-
tial steps that a reader can consume at their own pace. By providing
several panels on the same page, comics can also support overview
and detail, allowing for simultaneous explanation and exploration. Em-
pirical studies have shown the effectiveness of comics for presenting
scientific phenomena [36] and data visualizations [77].
In a scientific and educational context, comics have been valued for
their ability to promote public engagement [58], e.g., in stem cell re-
search [8], nanotechnology [53], and many other domains [36]. Comics
have also been used for classroom instruction of STEM topics, showing
promising increases in student engagement [43, 70]. Comics have been
used to teach concepts in statistics [37, 62, 74] such as means, medians,
and regression, as well as concepts in data visualization [78].
The approach we present in this paper differs from such uses of
comics to teach statistics or visualization, since the focus is not on
educating or explaining particular statistical methods, but on reporting
how such methods are used in a specific scientific analysis, to allow
a reader to understand the purpose they serve. This is closer to the
concept of a data comic [11, 84], a genre that focuses on effective
narration and explanation with data visualizations. Data comics offer a
large design space comprising many data comic design patterns [13]
and explored in previous workshops [77].
Thus, rather than understanding comics as sketchy ‘cartoons’ or as
just an educational resource for public outreach, we explore comics as
an effective means of communicating key information and decisions
for reporting controlled, quantitative empirical studies in human com-
puter interaction (HCI) and visualization. There is a low barrier to
creating publication-ready comics, as well as universal shareability
through paper publications, conference posters, textbooks, slideshows
and websites. To the best of our knowledge, no examples, structured
design explorations, or studies exist on the topic of using comics to
report on scientific studies.
3 DATA COMICS FOR REPORTING STUDIES
3.1 Design Objectives
In exploring data comics for controlled user studies in HCI and visual-
ization, we aim to investigate challenges and design solutions for the
following design objectives O1-O5. Each objective marks an individual
aspect of our approach to addressing study reporting.
O1: Clarify decisions and important information: We aim to clar-
ify key steps and decisions across methods, protocols, and results.
This includes information about study setup, number and demo-
graphics of participants, conditions and datasets used in trials,
user interfaces, or data transformations such as outlier removal.
Each of these decisions has implications for interpreting the re-
sults. Our goal is to present and provide for visual explanation
(O2) of these decisions to facilitate the interpretation of results.
O2: Explain information visually: Our exploration focuses on ef-
fective visual explanation of the information in a study report.
Where appropriate, we want to find visual depictions to explain
hypotheses, data samples, study protocols, task conditions and
data transformations and to explain results through data visualiza-
tion. The repertoire of visual depictions may range from diagrams
and schemata, to graphic pictograms and annotations, screenshots,
illustrations, and data visualizations. Textual or other narrative
structures present in comics are outside our scope.
O3: Support recognition and recall with a visual vocabulary: Besides using visual explanations (O2) for understanding key infor-
mation (O1) and providing for overview (O4), visual information
can afford quick reference through recall and recognition. For
example, an effect discussed in the results section of a study re-
port may be present only for a specific condition, which can be
reviewed by glancing back to the relevant section of the comic. A
visual vocabulary made up of recognizable symbols for each con-
dition might facilitate this process. While earlier work has found
benefits of data comics for memorizing content [79], investigating
memorability of comics is beyond this paper’s scope.
O4: Provide visual overview and structure to facilitate information retrieval: For example, individual stages of a study or anal-
ysis could inform the glanceable higher level structure of the
comic. At the same time, we aim to provide space for sufficiently
detailed, specific information, e.g., through dedicated panels of
different sizes. Reading the comic at different levels of detail
could help make comics-style study reports compelling to differ-
ent audiences, including both experts and non-experts.
O5: Use aesthetic appeal to create engagement: Finally, besides
the rather functional objectives O1-O4, we posit that reporting
studies through comics can provide for a fresh view of science
through compelling visual narratives and clear visual styles (e.g.,
hand-drawn illustrations and characters). We believe comics—
provided the right visual style is used—can engage non-experts
and larger audiences, further promoting science outreach.
3.2 Methodology
Our methodology was driven by the need to create comics for studies,
due to the lack of existing examples. Following a method used in prior
research [10, 13], we began by creating comics for studies reported
in existing academic papers and eventually created 15 comic sketches
for 13 papers, 4 of which we polished and which can be found on our
website. Our detailed methodology was as follows.
1. Design Exploration: We started our exploration by creating
comics of our own papers that involved empirical studies, since
we were most familiar with the intricate details of these experi-
ments: [7, 12, 26, 63, 79]. As our research lies within visualization
and HCI, our papers inherently contained a number of helpful visual
resources that were used as the basis of the visual vocabulary. We
constructed at least one complete example comic for each study,
starting with sketching on paper, and later translating to sketching on
drawing boards, which allowed for quick iteration and exploration
of ideas, layouts, and visual styles. During this iterative process
which involved discussions among all authors, we were aided by
data comic design patterns [13] and existing guidelines for visually
explaining data visualization techniques [78].
2. Design Discussion: In parallel to our own exploration, we conducted a 1.5-hour design discussion with three participants (two
graduate students specializing in HCI and one undergraduate), who
were asked to create data comics for a research paper that they were
familiar with, with an objective of using visual explanations to sup-
port understanding of the analysis. The discussion was introduced
using a presentation, during which participants were shown exam-
ples of data comics, facilitated by one of the authors. Participants
first created written lists of all steps of the analysis, which they then
turned into comic sketches with pen and paper.
Resulting data comics showed integrated use of visual and textual
explanation, with a clear reading order indicated by comic panels
(details are discussed in Sect. 6). These comics provide visual so-
lutions for illustrating data collection, using annotated illustrations
(e.g., interaction between a user and a smart watch), task procedures
described using a simple flowchart, and interpretations of study
results (e.g., a bar chart showing an overall trend, with annotated de-
tails beside one of the bars). All three participants included narrator
characters to explain study details. In the discussion following the
workshop, participants commented that data comics could reveal
concrete information such as the effects of outlier removal or en-
able critical reasoning about whether ‘outliers’ were really outliers.
However, participants reported that they struggled to decide which
parts of the analysis should be included into the comic. One reason
mentioned was the lack of familiarity with studies: “my level of
comfort with stats is too low to do the translation to comic manu-
ally”. Such comments motivated us to identify information to be
included before conducting any further evaluation (Sect. 4).
3. Design by stages and examples: To promote transparency and the
reporting of key information, we consulted existing checklists and
guidelines for scientific reporting (Sect. 2.1). We analysed examples
produced during our exploration and design discussion, and distilled
an initial list of stages and design solutions reported in Sect. 4. To
evaluate whether our stages could effectively serve as a prescriptive
guide to creating comic-style study reports, we asked a computer
science student with design skills who was unaware of our research
to create more data comics (he is now a co-author of this paper).
The generated examples clearly presented the structure of the stages
and effectively applied design solutions.
To further explore the usefulness of data comics in presenting re-
search, we extended our practice to HCI studies about music [69]
and speech language processing [65]. The inclusion of a particular
paper was based on two principles: i) it included a controlled user
study, and ii) we were familiar with the topic so that we could maxi-
mize our effectiveness in explaining the hypotheses, procedures and
analyses. During this process we kept developing and refining our
list of stages and associated design solutions (Sect. 4).
4. Collaborative design: Eventually, to further evaluate our stages
and the idea of using data comics for reporting, to source more
design solutions, as well as to find problems and/or solutions not
yet covered by our own explorations, we conducted another set of
collaborative design sessions with seven researchers (five doctoral
and two post-graduate) (Sect. 5).
3.3 Example Comic: Weighted Graph Comparison
To better explain what information our comics aim to encompass, and
how this information is visually presented with respect to our design
objectives O1-O5 (Sect. 3.1), this subsection discusses an example
data comic for a published study. Fig. 1 shows a comic illustrating a
study on comparing weighted networks as conducted by Alper et al. [7].
The paper presents a classic example of a controlled user study in
visualization; in a lab setting, it compares two visualization techniques
and tests four hypotheses on task accuracy and task-completion time.
The comic is meant for a blog, project website, or conference poster.
Structure: The comic in Fig. 1 is subdivided into six parts I-VI by the author to highlight important stages of the study report (O1). Panel numbers 1-36 are given in the bottom left corner of each panel of the comic. The comic begins (I) by introducing context and study motivation, including key concepts (e.g., weighted graphs in panels 1-4), the purpose of the study, and the research question (panel 4). It also introduces the subject of study, i.e., the six visualizations of interest (panel 6). Both concepts and conditions (I) are introduced with illustrations which are reused throughout the remainder of the comic, providing visual consistency and quick visual reference. Then (II), the comic introduces the tasks and conditions examined in the study. It does so by introducing formal notions for the two conditions and visual explanations for each condition (panels 9-10), using the anatomy cheat sheet described by Wang et al. [78]. Each task (panels 12-14) is explained through a vertical sequence of images explaining the task's goal, what participants had to look for, how they entered their answers, and the set of possible answers. The last panel in part II introduces the different data sets and which parameters were used to generate or select them.

Part III of this comic describes each of the four hypotheses (panels 17-20) for the study by repeating visual depictions of the relevant conditions involved in each hypothesis (task, datasets). In addition, each hypothesis illustrates the expected results in terms of the dependent variable, should that hypothesis hold, using annotated bar chart sketches. Part IV details the study setup, including participants, setup, conditions, and a reconstructed screenshot of the interface presented to participants (panels 21-22). Panels 25-28 show that the order in which data sets were shown goes from simple to difficult. The last two panels in IV illustrate that task completion time was log-transformed (panel 29) and that participants were asked for a subjective ranking on a Likert scale (panel 30).

Fig. 1. Example of a data comic illustrating a study that compared two visualization techniques for weighted networks. Created by one of the authors through an iterative process, based on information provided in the original paper [7]. Each panel is labeled with a number in the bottom left corner, which we reference in Sect. 3.3.
The results section of the comic (V) displays results for each of the independent variables, highlights significant findings, and ‘declares’ the winner technique for each independent variable. This part uses a parallel layout (panels 31-33) to repeat and stress that “adjacency matrices outperform node-link diagrams”, a conclusion drawn from the experiment and also emphasized as a corresponding core finding in the original publication. Finally (VI), the comic evaluates the hypotheses and reports which ones are supported and which are unsupported.
Design decisions: The comic makes a set of design decisions to communicate study design and results. As a form of sequential art, comics do not lend themselves naturally to organizing hierarchies, so extra care has been taken to ensure clear section structures and semantic nesting. The heading for each part, for example, uses large spacing and typography to preserve proper contrast in the masonry layout, aiming for visual clarity and structure (O4). To demarcate the canvas regions and bounds of tasks and conditions, the author uses characteristic panels to group each of them (panels 9-10, 12-14, 15, and 16). The primary condition “techniques”, i.e., node-link diagrams and adjacency matrices, is given different identity colors to be reused (panel 8) (O3). A sketch of the user interface in the experiment is placed in the study setup section to give the audience a clearer understanding of the tasks (panel 22) (O2). The comic also employs a set of data comic design patterns [13], such as flashback (panel 4), space-annotation (panel 20), temporal sequence (panels 6, 25-28), question-and-answer, and multiple facets (panels 31-33).

The visual style (O5) of the comic aims to achieve a balance between clarity (sufficient spacing, a reduced and harmonic color palette, consistent visual emphasis, a clean, grid-like, and non-overlapping layout, abstraction of concepts), objectivity (few and very small narrators), and artistic demands to create a compelling and unique visual experience (hand-drawn style, pictograms).
4 STAGES AND DESIGN SOLUTIONS
This section describes the 10 stages which informed the design of our
data comics study reports. In our framework, a stage summarizes the
information a data comic can show about a specific aspect of the report-
ing process, and provides visual solutions for common communication
needs. A stage can be seen as a short set of guidelines about what
information to report in each part of a study and how to represent this
information through graphics, text, and a combination thereof. We
grouped information into stages based on common reporting structures
in HCI and visualization papers, our experience about what information
can effectively be shown in context (e.g., tasks, independent variables),
and existing guidelines (Section 2.1).
Each stage is illustrated with small comic strips selected from exam-
ple comics created by the authors of this paper and works co-designed
with participants (Section 5). The selected examples demonstrate pos-
sible design solutions, and we extract commonly used abstract design
patterns from the examples if they exist. For each stage we provide
a brief definition, explain the presentation objective and discuss chal-
lenges and design considerations. Table 1 summarizes the information
presented at each stage and our visual solutions.
Stage #1—Context and Motivation
Purpose: This stage aims at a general introduction to the problem studied, the motivation for the study, and the research question, and explains any important jargon occurring in the remainder of the study. Depending on the intended audience and their knowledge, additional information about the topic and domain might be necessary.
Design: This stage also introduces the visual morphology and terminology for the entire comic, e.g., by explaining concepts visually. For example, Fig. 2 and Fig. 1 (panels 1-4) both introduce basic terms (i.e., ‘transition’ between ‘initial visual state’ and ‘final visual state’; ‘weighted graph’) through visual explanations using sequences, arrows, and textual annotations. Keywords are highlighted with color to visually link them to the corresponding visual explanations (small inline pictograms echo the keywords in Fig. 2 and Fig. 1). The key problems a study addresses can be explicitly stated, as in Fig. 2.
Table 1. Overview of the 10 framework stages, including what information is reported (Explains) and the Design Solutions we found.

Stage | Explains | Design Solutions and Examples
1 Context | Problem, motivation, jargon & concepts, domain, background | Explaining domain jargon, sketching context (Fig. 2)
2 Conditions | Techniques, devices | Condition symbols (Fig. 3(b), Fig. 3(c))
3 Hypotheses | Hypotheses, expected results | Sketched hypotheses (Fig. 4(a), Fig. 4(b))
4 Tasks | Tasks, instructions, answer modalities, collected data | Illustrated tasks (Fig. 5(a))
5 Stimuli and Materials | Data sets, user interface | Explaining the conditions in stimuli (Fig. 5(b))
6 Participants | Power analysis, eligibility, demographics, numbers | Randomization plot (Fig. 6)
7 Study Setup | Study environment, steps (training, tasks), study design | Flowchart or timeline (Fig. 7(a), Fig. 7(b))
8 Data Transformations | Assumption checks (e.g., normality), transformations (e.g., z-scoring, logarithmic), outlier removal | Showing transformations (Fig. 8(b), Fig. 8(c)), zoom (Fig. 8(c))
9 Result Presentation | Important messages of a chart, explanation of error bars | Difference heatmap (Fig. 9(a)), tips for interpreting the charts (Fig. 9(b))
10 Hypotheses Evaluation | Which hypotheses are supported by results | Comparing the sketch from the hypotheses with the chart from the results (Fig. 1, panels 34-36)
Fig. 2. Example for Stage 1: The comic highlights and explains key
concepts (‘transition’, ‘initial visual state’, ‘final visual state’, ‘animation’)
[26] with illustrations of simple examples.
Stage #2—Conditions
Fig. 3. Examples for Stage 2: Descriptions and iconic condition symbols for (a) visualization types [79], (b) visual encodings [63], and (c) two visual narratives compared to a baseline [52].
Purpose:
This stage introduces the conditions in a study, i.e., the
values chosen for independent study variables. In HCI and visual-
ization this usually includes comparing visualization and interaction
techniques, devices, or subsets of a design space. This stage should
explain how each condition looks and works, and which ones are the
baseline techniques, explaining and illustrating each technique.
Design:
Conditions can be illustrated through schematic representations
(Figures 3(a), 3(b), 3(c)) or, if more iconicity is required, screenshots,
photos and other realistic representations. Techniques should be ex-
plained well, and for visualization techniques the author should explain
how to read each visualization, e.g., through anatomy cheat sheets [78].
Fig. 3(b) shows two tested conditions side-by-side, highlighting the
differences in visual encoding (truncated vs. non-truncated y-axis).
Fig. 3(a) shows how different techniques relate to a (Cartesian) design
space. Symbols can be defined here and re-used to indicate each condition for the remainder of the comic, e.g., a small pictogram standing for the ‘control condition’ or for the ‘Empathy Design’.
Stage #3—Hypotheses
Purpose:
This stage illustrates and describes the hypotheses tested in
the study. A hypothesis can involve one or more conditions, at least
one dependent variable, as well as expected outcomes stated in terms
of effect size.
Design:
Illustrating hypotheses can be challenging due
to their hypothetical nature. Illustrating the conditions involved can
be straightforward, e.g., by repeating figures introduced for conditions
(and potentially also for tasks, Fig. 4(b) and Fig. 4(a)). To illustrate
hypothetical values for dependent variables, we can use data visual-
izations with the same encoding as those used to report results, as this
makes evaluating hypotheses easier. For a single quantitative dependent
variable, we tried bar charts. If a study author pre-registers a hypothesis
and a minimum expected effect size, this is straightforward to visu-
alize. However, this is not a well-adopted research practice, and less
well-defined hypotheses such as ‘larger than’, ‘no difference’ or ‘range’
are fuzzy and hard to convey in a single bar chart. Annotations could highlight concepts such as differences, averages, values and associated ranges, as in Fig. 4(a), which uses an annotation to indicate ‘less’. Visualizing
such concepts is related to visualizing uncertainty, for which visualiza-
tion techniques have been summarized elsewhere [45]. One solution
is to omit axis tick marks along axes (Figures 4(b) and 4(a)). Similar
presentation is found in research papers [41], for example, Endicott [35]
uses schematic diagrams to illustrate hypotheses about the origin of the
modern human.
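Such a hypothesis sketch can also be produced with standard charting tools rather than drawn by hand. The following minimal sketch is our own illustration (the condition names and relative values are hypothetical, not taken from any study discussed here); it renders an expected-results bar chart with the y-axis ticks omitted, as suggested above:

import matplotlib.pyplot as plt

conditions = ["Node-link", "Matrix"]  # hypothetical condition names
expected = [2.0, 1.0]                 # relative heights only, not measurements

fig, ax = plt.subplots(figsize=(3, 2))
ax.bar(conditions, expected)
ax.set_ylabel("Expected completion time")
ax.set_yticks([])                     # omit tick marks: the values are hypothetical
ax.annotate("expected: lower", xy=(1, expected[1] + 0.1), ha="center")
fig.savefig("hypothesis_sketch.png", bbox_inches="tight")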
Stage #4—Tasks and Dependent Variables
Purpose:
This stage illustrates the individual tasks participants perform
during the study. This can include instructions given to participants,
the way instructions were presented, what participants saw in each task,
what they had to do, how they entered an answer to a question and
completed a trial, which answers were valid, what data were collected
for each dependent variable and how these data were collected.
Design: Tasks can be shown through a sequence of actions (Fig. 5(a)), by showing example stimuli and what users needed to pay attention to or count (Fig. 1, panels 12-14), and eventually by showing the interaction used to finish a trial and submit an answer (e.g., Fig. 5(a) shows a clicking action, while Fig. 1 (bottom of panels 12-14) shows a multiple choice menu).
Fig. 4. Examples for Stage 3: Descriptions and sketches of hypotheses for (a) a visualization study [52], (b) a musical interface evaluation [69].

Fig. 5. Examples for Stage 4 and Stage 5: (a) how participants in a perceptual study [26] select targets in two conditions; (b) how the stimuli are generated for these tasks.
Stage #5—Stimuli and Materials
Purpose:
This stage should introduce the individual stimuli presented
to participants during the study. This is related to the previous stage
on Tasks and Dependent Variables but it aims to give examples of
stimuli and explains the factors involved in their design. For example,
in visualization, this should include the example data sets in the setup
participants saw and details like their size and complexity. If data
sets are generated, illustrations could show parameters and values.
Design:
Factors used to generate data sets can be shown by examples, such as Fig. 1, panels 15-16, which show examples of small and large, dense and sparse networks, or Fig. 5 of [12], showing example stimuli with different parameters (number of clusters) as well as the variation of data sets with the same parameters ((c), (d)). The example in Fig. 5(b)
shows an example of a data set with randomly generated point positions.
Multiple small panels are used to display detailed views of different
conditions.
Stage #6—Participants
Purpose:
This stage should include information about participants
themselves (e.g., number, gender balance, eligibility criteria), as well
as results of any power analysis used to estimate the number of participants required to detect an effect.

Fig. 6. Examples for Stage 6: (a) participant demographics isotype chart [79], (b) Latin Square randomization, (c) completely randomized design and (d) balanced incomplete block design from [81].

Design: Fig. 6(a) shows participant
demographic data using an isotype visualization. A randomization plot can be employed to show various randomization schemes and to demonstrate the allocation of participants to stimuli: Latin Square randomization (Fig. 6(b)) [81], a completely randomized design (Fig. 6(c)) [2], or a balanced incomplete block design (Fig. 6(d)) [81]. If power analysis is used to determine sample size, a power curve for the study design [34] can be shown to display the power estimate and its margin of error.
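As an illustration of how such an allocation can be generated before plotting it, here is a minimal sketch of ours (the condition names are hypothetical, and this is not code from the cited studies) that builds a cyclic Latin square in which every condition appears once per row and once per ordinal position, as in Fig. 6(b):

import random

def latin_square(conditions):
    # Cyclic construction: row i presents the conditions shifted by i positions.
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

conditions = ["NodeLink", "Matrix", "Hybrid", "Baseline"]  # hypothetical conditions
rows = latin_square(conditions)
random.shuffle(rows)  # randomly assign condition orders (rows) to participants
for pid, order in enumerate(rows, start=1):
    print(f"P{pid}: {' -> '.join(order)}")

Note that this simple cyclic construction balances only ordinal position; designs such as the balanced incomplete block design in Fig. 6(d) address further constraints.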
Stage #7—Study Setup
Purpose:
This stage helps the reader get an overview of the study
procedure. This can include the study environment and apparatus, the
sequence of steps in a study (e.g., training, trials) as well as detailed
information associated with each step (e.g., duration, setup) and the
instructions participants were provided with at each step.
Design: The sequence of steps can be shown as a pipeline (Fig. 7(a)) or a flowchart (Fig. 7(b)). Annotations provide both an overview of the pipeline and
detailed information about participants, materials and duration. Study
setup can be complex, and may be explained over several sections in a
paper, so the author may be selective about the information reported in
this stage.
Stage #8—Data Transformations and Checks
Purpose:
This stage describes transformations (e.g., log-transform) and
checks performed on the data before any significance tests or regression
analyses. This stage can include outlier removal and winsorizing, or
checking for normality before running statistical tests.
Design: Outlier removal can be indicated by thresholds, e.g., at the extremes of a distribution as in Fig. 8(b), and placed alongside descriptions of the nature of the outliers (e.g., as explanations in speech bubbles). Fig. 8(c) shows a distribution before and after transformation; possible visualizations include histograms or Q-Q plots.
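To make this stage concrete, here is a minimal sketch of ours (the thresholds and file name are hypothetical, loosely echoing Fig. 8(b) and Fig. 8(c)) that removes out-of-range trials and then log-transforms skewed completion times, checking the result for normality:

import numpy as np
from scipy import stats

times = np.loadtxt("completion_times.csv")  # hypothetical one-column input file

# Remove implausible trials (accidental clicks, technical problems).
kept = times[(times >= 1.0) & (times <= 60.0)]
print(f"removed {times.size - kept.size} outliers")

# Completion times are typically right-skewed: log-transform and re-check.
print("skewness before:", stats.skew(kept))
log_times = np.log(kept)
print("Shapiro-Wilk on log-transformed data:", stats.shapiro(log_times))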
Stage #9—Result Presentation
Purpose:
Compared to the previous stages, this stage may require far
greater consideration, given the importance of figures and visualizations
for analysis and presentation of results. The information presented can
include anything important for understanding the results: distributions,
effect sizes, descriptive statistics, error bars or measures of uncertainty.
Data visualizations are key in this stage and require appropriate de-
scriptions as to how they should be read [78] as well as annotations
to highlight key messages.
Design: A number of different techniques may be necessary for this stage: for example, bars in a result bar chart can be reordered to facilitate pair-wise comparison of distributions. Such a reordering can be made explicit through a before-after sequence and, e.g., an arrow highlighting the change (Fig. 9(d)). Fig. 9(c) uses a cut-out panel to reveal a bimodal distribution with a violin plot. Other statistical results might require showing trends, extreme values, distributions or outliers, which can be facilitated by annotations. In case of complex charts or insights, multiple repetitions of a chart can be presented, each one focusing on a specific message (Fig. 9(b)). To show pair-wise differences between more than two independent variables, Fig. 9(a) uses a “difference heat-map”, showing pair-wise differences at different significance levels (.01, .001).
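The pairwise comparisons that such a difference heat-map summarizes can be computed as follows. This is a minimal sketch of ours on synthetic data (it is not the analysis of any study discussed here), running Mann-Whitney U tests over all condition pairs with a Holm step-down correction:

from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {  # synthetic per-condition responses
    "Control": rng.normal(1.6, 0.5, 40),
    "Mixed": rng.normal(2.1, 0.5, 40),
    "Deceptive": rng.normal(2.8, 0.5, 40),
}

pairs = list(combinations(groups, 2))
pvals = [stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided").pvalue
         for a, b in pairs]

# Holm correction: compare sorted p-values against alpha / (m - rank) and
# stop at the first non-significant result (step-down procedure).
alpha, m = 0.05, len(pvals)
reject = [False] * m
for rank, idx in enumerate(np.argsort(pvals)):
    if pvals[idx] > alpha / (m - rank):
        break
    reject[idx] = True

for (a, b), p, r in zip(pairs, pvals, reject):
    print(f"{a} vs {b}: p={p:.4f} {'significant' if r else 'n.s.'}")

Each pairwise result can then be mapped to a cell color in the heat-map, one cell per condition pair.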
Stage #10—Hypothesis Evaluation
Purpose:
This last stage contrasts the initial hypotheses with the find-
ings from the results in #9. It should show which hypotheses can be ac-
cepted and which ones are not supported by the results.
Design: Fig. 1, for example, shows both the figure drawn for the initial hypotheses (#3) and the actual result figure. Supported and unsupported hypotheses can be highlighted with check marks and crosses.

Fig. 7. Examples for Stage 7: (a) A pipeline describes the sequence of study steps and how procedures differed between conditions across two experiments [52]. (b) A flowchart describes the different phases of a study in chronological order [79].
5 COLLABORATIVE DESIGN SESSIONS
We conducted remote collaborative design sessions to evaluate our stages and design solutions, obtain feedback, explore further solutions, and understand which challenges people might encounter in creating data comics for a wider range of studies.
Participants: Participants were recruited through email lists at
computer science research labs at three universities (two in Europe and
one in Canada). We motivated participation by providing guidance on
how to create data comics for the students’ own studies, co-designed
and polished by two of the authors. We gave the comic in Fig. 1 as an
example. We did not constrain the scope to controlled studies in HCI
and visualization. The seven participants (three females) include five
Ph.D. students and two recently graduated master's students specialising
in usable security and privacy (SecPriv), computational social psy-
chology (CompPsy), geoinformatics (GeoInf ), probabilistic machine
learning (ML), 3D interactive visualization (3DVis), public health and
health system (Health), and systems design engineering (SysEng) re-
spectively. Their respective studies involved a controlled study in data
visualization (GeoInf, [52]), an analysis of open online human data (SecPriv, [73]; CompPsy, [25]), observational animal data and modeling (ML, [20]), 3D interactive system design and evaluation (3DVis, work-in-progress), a cognitive behavior study with a game (Health, [24]), and a study of gestural interactions with VR environments (SysEng, work-in-progress). This selection gave us the chance to explore the potential and limits of our design solutions and receive feedback from diverse perspectives. None of the participants had any experience with creating (data) comics.
Fig. 8. Examples for Stage 8: (a) Visualizing the process of outlier removal reveals that the goodness of fit in a regression model was highly dependent on outlier removal. (b) Authors can justify criteria for outlier removal using annotated density plots [12]. (c) Data comics can demonstrate the effect of important distribution checks and transformations that are often relegated to a textual description.
1. What motivated you in participating in this workshop?
2. Which issues do you currently see in reporting studies (as a reader/author)?
3. Did our stages-framework help? Which challenges did you experience in doing your storyboards / what was the most difficult part, e.g., stages, illustrating specific issues?
4. Creating storyboards, did that change the way you think about reporting studies?
5. How are you going to use your final comic?
6. What did you value about using data comics for reporting studies; what do you think works about using comics for reporting studies?
Table 2. Questions asked of the participants in the co-design process.
Procedure: The entire design session was conducted individually
for each participant, giving participants and facilitators a flexible sched-
ule to work on comics asynchronously offline and communicate about
solutions and design iterations.
1. For the first session, participants were asked to attend with a write-up
of their study and to give a brief presentation in 5-10 minutes. Three
participants presented slides. This gave facilitators and participants
an overview and a review of the important stages of the study.
2. Then, we gave a very brief introduction to data comics and showed
examples from https://statscomics.github.io/gallery.html.
3. Participants were then asked to create drafts (“storyboards”) for their comics offline before the next session. Participants were free to choose drawing techniques (e.g., hand sketches, vector graphics, Photoshop, screenshots, etc.). To guide participants and to see if our framework would help them in their process, we provided textual guidelines for each stage (https://statscomics.github.io/guidelines.html) as well as the three example comics.
4. After 1-4 days, instructors met again with each participant, discussing storyboards and any challenges encountered.
5. Based on feedback from participants, the instructors iterated upon
their comics, aided by slides, written reports and other records.
6. Eventually, participants were asked summative questions (Table 2).
Results: Drafts from participants presented a visual and structured layout of their studies. The structure included our stages, e.g., motivation, methods, conditions, stimuli, and results. Design solutions included a) a Venn diagram to represent the relationship between data and samples (SecPriv, Health); b) a timeline to
illustrate tasks (GeoInf, CompPsy, 3DVis); c) simple pictographs to demonstrate differences between technical terms (GeoInf, CompPsy, Health, 3DVis, SysEng) (e.g., ‘mood’ and ‘emotion’); and d) illustration and interpretation of the conditions and materials used in the study (GeoInf, SysEng).

Fig. 9. Examples for Stage 9: (a) A complex result [63] can be presented using a sequence of secondary visualizations, (b) or by using multiple explanations of a repeated chart to focus on specific messages. (c) A Zoom pattern [13] can be used to reveal distribution details, (d) and panel sequences can illustrate operations like variable re-ordering.
All participants presented annotated result charts
interpreting features such as significant differences (GeoInf ), error bars
(ML) and the presence of ‘less fluctuation’ and ‘more fluctuation’ in a
Gaussian process regression (CompPsy).
Qualitative feedback: All participants in the collaborative design session expressed very positive comments about the experience and the results of the session, describing data comics as, e.g., a “visual interface into a more written format” (GeoInf). Participants were motivated to participate in the collaborative design session as it was an “interesting problem”, to “expand [the] visibility of [their] research” and to enable “more people get the message” (CompPsy). Participants found data comics a “very good opportunity to [create] a visual, easy to consume format for a blog post” (GeoInf).
About current ways of reporting, participants found the complexity of the information a challenge: “it is often very time consuming to collect all the valuable/required information about a study reported in a classical paper format [...] one has to ‘close read’ the paper and investigate additional material to find this information, and [expand] the effort to find the right information.” (GeoInf). From an author's perspective, the drawbacks of current reporting include that it is difficult to “convey/communicate the complexity of study designs and/or stimuli in a compelling, easy to understand format.” (GeoInf). Moreover, there is little support for grasping the information in a paper: “the current expected paper format/structure does not encourage authors to provide a (standardised) high-level yet detailed summary of such information in a ‘front-matter’ of a paper (besides the abstract, where detail is limited; or teaser-images).” (GeoInf).
For the eventual use of their comic, participants agreed on general uses of “outreach, publicity” (GeoInf). More concrete scenarios included the use of comics as a “script for [a] conference presentation” (ML) and as suitable material for blog posts, posters (“instead of posters I can also reuse data comics.” (SecPriv)), conference presentations and online project pages to “present what we did[,] in a 5 minute-read” (SecPriv). For future, wider use, one participant wished that data comics “would become an accepted format [...] for publishing research” and mentioned potential extensions through “interactive formats like scroll[ing] stories” (GeoInf). This participant also acknowledged the need for guidance and standardization “similar to pre-registration templates or the stages-framework used in the [co-design workshop].”
With respect to the instructional material provided from our side, participants found our motivating examples “well-illustrated” and highlighted that the feedback from instructors was very helpful “since this challenges my perception of other people's understanding” (CompPsy). Our stages helped participants “[c]om[e] up with an idea of what I want” (SecPriv), to “distill the required information” and create “a structure to follow” (GeoInf). However, two participants found it easy to follow the stages since they had already written up their study reports. Stages were also found to be helpful in guiding the inclusion or exclusion of information, since they helped participants to focus on the process of textual and visual explanation, as well as “to think about all the other possibilities comics have to communicate a narrative.” (GeoInf).
Creating comics involved a set of challenges. Despite positive feed-
back on our stages, three participants reported issues with deciding
what information should be included and identifying the target audience
to get “the message across” (CompPsy). This is a common issue in
storytelling, which goes beyond the specific domain of visualization.
Drawing with pen and paper was reported as an issue by one partic-
ipant. With respect to individual stages, we found most difficulties arose with stage #5 (Stimuli and Materials, “explain[ing] multiple datasets/methods”), stage #7 (Study Setup, “complexity of study designs and the complexity of stimuli”) (GeoInf), and stage #8 (Data Transformations and Checks, “explaining transformations/re-binning of collected information”). Although the stages were given as guidance,
one participant found difficulties in deciding “what sequence was the
best, the amount of text to include and [the way of] making it easy-to-
understand” (SysEng). A specific challenge arose from the need to
explain specific machine learning models (ML), algorithms (CompPsy)
and formulas (SysEng). While outside the scope of this paper, this will
require significant attention in future work.
Asked whether creating storyboards changed participants' thinking about reporting studies, the answers indicated that participants found “storytelling is important” (Health) and data comics a “very suitable format” (GeoInf) that is “on top of my list of methods/techniques to report findings or study methods.” (SysEng), and that they would suggest such comics “as a service for researchers” (SecPriv) since they “capture the idea of my study in a very short period of time and the idea stays in their mind” (ML).
6 DISCUSSION AND FUTURE WORK
Designing Comics for Studies: Our collaborative design ses-
sions showed that researchers unfamiliar with general visualization
practices clearly see the need for better communication and the poten-
tial of techniques such as data comics. Moreover, our stages discussed
in Sect. 4, alongside examples, helped participants understand this
novel medium and guide them in expressing their own ideas visually as
storyboards. As crafting sophisticated visual content for was found to
be an intimidating task by some participants, our collaborative sessions
employed a 2-step iterative co-design process, comprising of clear in-
structions and guidelines about what was expected from a participant
(stages, storyboard) and the involvement of a professional designer
polishing the storyboard and helping with design solutions. We be-
lieve this process can be refined for future, including more extensive
workshops for illustrating studies and for data comics in general [77].
Further workshops and teaching practice will help refine our method-
ologies and build a formal set of guidelines, instructions, and design
templates. An open question is to what extent we can guide decisions
on level-of-detail and better design data comics for specific audiences.
Design Objectives: Our collection of comics represents a series of points in the solution space for visually explaining procedures and findings in comic-style study reports. We use this section to reflect on how these relate to our initial design objectives (O1-O5).
Our first design goals were aimed at finding ways to visually explain (O2) key concepts (O1). These concepts were represented through different types of graphics; for example, iconic illustrations were used to display contextual messages related to the goals of the study. Schematics were used to present study methodologies or task workflows. Screenshots were used to present realistic representations of prototype UIs and interactions performed by study participants. Where iconic representations were not possible, e.g., due to the visual space required (e.g., graph sizes in Fig. 1, panels 15-16) or the abstract nature of the concept (e.g., a technique), the designer used more symbolic representations to create a visual vocabulary (O3). Data visualizations were naturally used to present results, but also to show demographics and data transformations. Some figures explain visualizations and charts, e.g., error bars, chart axes, or user interface components (GeoInf).
We found some solutions were able to serve as patterns or templates, similar to data comics design patterns [13], supporting transparency and understanding in visual explanations. Examples include: step-by-step explanations of tasks (e.g., Fig. 5(a)); making evidence for key claims visually apparent, e.g., facilitating comparison by reordering the bars in a bar chart (Fig. 9(d)); and evaluating hypotheses by comparing schematic visualizations of hypotheses with the actual results (Fig. 1).
Our criterion of visual overview (O4) refers to the ability to provide information at different levels of detail through overview and details, for example, by structuring a comic layout into larger and smaller panels, headlines and subsections, and nested subpanels, and by placing panels in a clear sequence (Fig. 1, panels 25-28) to show a temporal relation, or in parallel (Fig. 1, panels 31-33) to show information of parallel meaning. For complex layouts, arrows indicate reading direction and avoid “backlocks” [28].
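To make the idea of such layouts concrete, they can be roughed out programmatically before any drawing happens. The following is a minimal sketch under our own assumptions, using matplotlib's GridSpec purely as a prototyping convenience (the paper prescribes no tool): a large overview panel, a sequential row of detail panels, and a wide takeaway panel.

    import matplotlib.pyplot as plt
    from matplotlib.gridspec import GridSpec

    fig = plt.figure(figsize=(8, 6))
    gs = GridSpec(3, 3, figure=fig)

    # One large overview panel spanning the top row ...
    overview = fig.add_subplot(gs[0, :])
    overview.set_title("Overview: study goal and conditions")

    # ... followed by smaller detail panels read left to right in sequence.
    for i, label in enumerate(["Task", "Procedure", "Results"]):
        fig.add_subplot(gs[1, i]).set_title(label, fontsize=9)

    # A wide bottom panel closes the sequence with the takeaway.
    fig.add_subplot(gs[2, :]).set_title("Takeaway")

    for ax in fig.axes:  # panels act as empty frames at this stage
        ax.set_xticks([])
        ax.set_yticks([])
    fig.tight_layout()
    fig.savefig("layout_sketch.png")

A grid of this kind only approximates the overview-and-detail structure discussed above; nested subpanels and reading-direction arrows would still be added by hand.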
We seek visual consistency in illustration style; colors used for highlighting and indicating different conditions; and visual lexicons for individual objects or actions in a comic (O3). This visual vocabulary, which is gradually introduced to the audience in the early stages of the narrative framework, allows readers to build associations between different experimental concepts and to quickly refer back.
It should be noted that using a conventional comic-style presentation does not indicate poor quality (O5). This aesthetic is chosen with the goal of complementing the textual annotations and reducing visuals that are irrelevant to a viewer’s understanding of the study report. While the style of data comics was not the focus of our work, we believe there are many open research questions about the impact of visual style on understanding, as well as on attractiveness and credibility [80].
Limitations and Open Questions: The research presented in this paper is clearly limited by the number of examples we could produce and the number of interactions we could have with the participants in our design discussion and collaborative design sessions. Unlike for infographics [19, 27, 54], data videos [9], or data comics [14], there is no corpus that we could curate and learn from to understand best practices, or the effectiveness of existing data comics for reporting empirical studies. Our research has created an initial set of such examples, developed through design exploration and iterative collaborative design, which we make available on our website: https://statscomics.github.io. Our results are only the beginning of a much larger exploration of this vast design space, which is yet to be structured and formally evaluated through, e.g., controlled user studies to assess which of our solutions work best and to investigate the effect of various factors on effective scientific communication for various audiences with respect to comprehension, memorability, engagement, and educational value.
Providing better support for authoring data comics is another item to be added to the research agenda. Creating comics involves many skills [77], and authoring support for specialized comics such as data comics [50] is still in its infancy. Possibly, authoring tools could be integrated with data exploration and analysis platforms such as Python notebooks. One challenge with creating such tools is to find the right balance between expressive power and usability, as well as between guidance and creative freedom [67]. This is further complicated by the wide range of elements and concepts in study reports, and the rich visual vocabulary afforded by data comics.
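As a thought experiment for what such notebook integration might look like, consider a small helper that strings figures from an ongoing analysis into a captioned strip. This is a hypothetical sketch of ours, not an existing tool or API; the comic_strip function and its interface are assumptions for illustration only.

    import matplotlib.pyplot as plt

    def comic_strip(panels, path="strip.png"):
        """Render (draw_function, caption) pairs as one captioned strip."""
        fig, axes = plt.subplots(1, len(panels),
                                 figsize=(4 * len(panels), 4), squeeze=False)
        for ax, (draw, caption) in zip(axes[0], panels):
            draw(ax)                            # reuse a plot from the analysis
            ax.set_title(caption, fontsize=10)  # caption doubles as panel text
        fig.tight_layout()
        fig.savefig(path)
        return fig

    # Panels come straight from plots already produced in the notebook.
    comic_strip([
        (lambda ax: ax.hist([12, 14, 14, 15, 17, 21]), "Completion times (s)"),
        (lambda ax: ax.bar(["A", "B"], [15.2, 13.1]), "Condition B was faster"),
    ])

Real authoring support would, of course, need to go far beyond this, covering panel layout, visual vocabulary, and hand-drawn elements.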
More detailed and specialized guidelines for common study protocols (e.g., Fitts’ law studies, whose core model is recalled below) could also help address challenges raised by the participants, some of whom found complex study designs difficult to communicate. Guidelines could also be refined according to a particular comic’s intended use case, by indicating to authors the appropriate level of simplification for their audience. Without additional guidance, there is the risk that authors might leave out important details when attempting to communicate to a non-technical audience.
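For instance, Fitts’ law studies revolve around a single well-known model that a protocol-specific guideline might suggest depicting directly in a panel. As a recap (ours, not part of the guideline framework), the Shannon formulation commonly used in HCI relates movement time MT to target distance D and target width W, with empirically fitted constants a and b:

    \[
      MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
    \]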
In our work, we have deliberately left out the explanation of statistical and other tests, algorithms, and complex systems (though several participants mentioned this as a difficult point in the process). For example, while descriptive statistics and regression analysis are commonly explained using graphics, more sophisticated methods pose challenges to understanding and transparency, and could benefit from research into how they could be communicated using comics.
Towards better support for reporting studies: During our research, we engaged in co-design with researchers from economics, cyber-security, computational social psychology, machine learning, geo-informatics, 3D scientific visualization, and HCI. In this paper we provide design solutions illustrating what one could do. While some fields rely on data analysis and modelling uncommon in HCI, others conduct mostly qualitative research, where data comprises interview records and other observations. We do not yet provide specific visual solutions for such scenarios, but these examples can inform future research.
Moreover, the boundaries between comics for reporting studies
and engaging in public outreach begin to blur, e.g., PhD Comics [22]
and xkcd [57] are examples of science-related comics that often “go viral” [23]. We believe that, as comics are a universal communica-
tion medium deeply rooted in human culture, data comics (and their
close cousins, infographics and graphical abstracts) will receive greater
consideration in the near future, helping to narrow the gaps between
scientists and the general public, and between communicators and audi-
ences. These formats can inform the design of better graphics in papers,
appendices, webpages, talks and slideshows, posters, and videos. Ac-
tively engaging in making visual information more commonplace can
be transformative to communication: as scientists begin to think about
their research more visually, the media for communicating and transmit-
ting thoughts will evolve, and the audience will in turn become more
visually literate. We encourage researchers and science communicators
to seek inspiration in our work to go beyond the preconceptions associ-
ated with comics: data comics and sketching are less about cartoons
or particular visual styles and more about thinking, logic, and effective
communication. We hope our work will contribute to a new culture
of reporting and discussing scientific findings, and to a discussion of
transparency, teaching statistics, and data literacy in general.
7 CONCLUSION
This paper explores data comics for illustrating reports of controlled
studies. Through an iterative co-design process, we defined common
design solutions, guided by five design objectives. Our results and
co-design experience are encouraging, and can inform further research
on improving scientific reporting. We contribute a set of examples of
how comics support that reporting, providing an initial set of concrete
ideas for how to communicate common stages of HCI studies and sta-
tistical analysis, and may serve as a starting point for future exploration.
Finally, we show that our new method allows people to create clear and
compelling visual summaries of complex research designs.
REFERENCES
[1] Distill.pub. https://distill.pub/, 2020.
[2] Experimental design and analysis of variance. https://yieldingresults.org/modules/module-2/, 2020.
[3] medium.com. https://medium.com/, 2020.
[4]
Publication Manual of the American Psychological Association, Seventh
Edition. American Psychological Association, 2020.
[5] Researchblogging.org. http://researchblogging.org/, 2020.
[6] Seeing theory. https://seeing-theory.brown.edu/, 2020.
[7]
B. Alper, B. Bach, N. Henry Riche, T. Isenberg, and J.-D. Fekete. Weighted
graph comparison techniques for brain connectivity analysis. In Proceed-
ings of the 2013 CHI Conference on Human Factors in Computing Systems,
pp. 483–492. ACM, 2013.
[8]
S. V. Amaral, T. Forte, J. Ramalho-Santos, and M. T. G. da Cruz. I want
more and better cells!–an outreach project about stem cells and its impact
on the general population. PloS one, 10(7), 2015.
[9]
F. Amini, N. Henry Riche, B. Lee, C. Hurter, and P. Irani. Understanding
data videos: Looking at narrative visualization through the cinematography
lens. In Proceedings of the 33rd Annual ACM Conference on Human
Factors in Computing Systems, pp. 1459–1468, 2015.
[10]
B. Bach, N. Kerracher, K. W. Hall, S. Carpendale, J. Kennedy, and
N. Henry Riche. Telling stories about dynamic networks with graph
comics. In Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems, pp. 3670–3682. ACM, 2016.
[11]
B. Bach, N. H. Riche, S. Carpendale, and H. Pfister. The emerging genre
of data comics. IEEE computer graphics and applications, 37(3):6–13,
2017.
[12]
B. Bach, R. Sicat, J. Beyer, M. Cordeil, and H. Pfister. The hologram in
my hand: How effective is interactive exploration of 3d visualizations in
immersive tangible augmented reality? IEEE transactions on visualization
and computer graphics, 24(1):457–467, 2017.
[13]
B. Bach, Z. Wang, M. Farinella, D. Murray-Rust, and N. Henry Riche. De-
sign patterns for data comics. In Proceedings of the 2018 CHI Conference
on Human Factors in Computing Systems, p. 1–12. ACM, New York, NY,
USA, 2018. doi: 10.1145/3173574.3173612
[14]
B. Bach, Z. Wang, N. Henry Riche, M. Farinella, D. Murray-Rust,
S. Carpendale, and H. Pfister. Data comics. https://www.datacomics.net/, 2017-2020.
[15]
S. K. Badam, Z. Liu, and N. Elmqvist. Elastic documents: Coupling
text and tables through contextual visualizations for enhanced document
reading. IEEE transactions on visualization and computer graphics, 2019.
[16]
A. Barberousse. What is the use of diagrams in theoretical modeling?
Science in Context, 26(2):345–362, 2013.
[17]
C. B. Begg. Publication bias. The handbook of research synthesis, 25:299–
409, 1994.
[18]
R. Beyth-Marom, F. Fidler, and G. Cumming. Statistical cognition: To-
wards evidence-base practice in statistics and statistics education. Statistics
Education Research Journal, 7(2), 2008.
[19]
M. A. Borkin, A. A. Vo, Z. Bylinskii, P. Isola, S. Sunkavalli, A. Oliva, and
H. Pfister. What makes a visualization memorable? IEEE transactions on
visualization and computer graphics, 19(12):2306–2315, 2013.
[20]
M. P. Camilleri and C. K. Williams. The extended dawid-skene
model: Fusing information from multiple data schemas. arXiv preprint
arXiv:1906.01251, 2019.
[21]
I. Chalmers and P. Glasziou. Avoidable waste in the production and
reporting of research evidence. The Lancet, 374(9683):86–89, 2009.
[22] J. Cham. PhD Comics. http://phdcomics.com/.
[23]
J. Cham. The science gap. https://tedx.ucla.edu/talks/jorge_cham_the_science_gap/, 2012.
[24]
L. T. Chan. Leveraging the proteus effect to motivate emotional support in a serious game for mental health, 2019.
[25]
L. L. Chen, W. Magdy, H. Whalley, and M. Wolters. Examining the role
of mood patterns in predicting self-reported depressive symptoms. 2020.
[26]
F. Chevalier, P. Dragicevic, and S. Franconeri. The not-so-staggering effect
of staggered animated transitions on visual tracking. IEEE transactions
on visualization and computer graphics, 20(12):2241–2250, 2014.
[27]
F. Chevalier, R. Vuillemot, and G. Gali. Using concrete scales: A practical
framework for effective visual depiction of complex measures. IEEE
transactions on visualization and computer graphics, 19(12):2426–2435,
2013.
[28]
N. Cohn. Navigating comics: an empirical and theoretical approach to
strategies of reading comic page layouts. Frontiers in psychology, 4:186,
2013.
[29]
N. Cohn. The Visual Language of Comics: Introduction to the Structure
and Cognition of Sequential Images. A&C Black, 2013.
[30]
M. Correll and M. Gleicher. Error bars considered harmful: Exploring
alternate encodings for mean and error. IEEE transactions on visualization
and computer graphics, 20(12):2142–2151, 2014.
[31]
N. Desrochers, A. Paul-Hus, S. Haustein, R. Costas, P. Mongeon, A. Quan-Haase, T. D. Bowman, J. Pecoskie, A. Tsou, and V. Larivière. Authorship, citations, acknowledgments and visibility in social media: Symbolic capital in the multifaceted reward system of science. Social science information, 57(2):223–248, 2018.
[32]
P. Dragicevic. Fair statistical communication in hci. In Modern Statistical
Methods for HCI, pp. 291–330. Springer, 2016.
[33]
P. Dragicevic, Y. Jansen, A. Sarma, M. Kay, and F. Chevalier. Increasing
the transparency of research papers with explorable multiverse analyses. In
Proceedings of the 2019 CHI Conference on Human Factors in Computing
Systems, pp. 1–15. ACM, 2019.
[34]
A. Eiselmayer, C. Wacharamanotham, M. Beaudouin-Lafon, and W. E.
Mackay. Touchstone2: An interactive environment for exploring trade-offs
in hci experiment design. In Proceedings of the 2019 CHI Conference
on Human Factors in Computing Systems, pp. 217:1–217:11. ACM, New
York, NY, USA, 2019. doi: 10.1145/3290605.3300447
[35]
P. Endicott, S. Y. Ho, and C. Stringer. Using genetic evidence to evaluate
four palaeoanthropological hypotheses for the timing of neanderthal and
modern human origins. Journal of human evolution, 59(1):87–95, 2010.
[36]
M. Farinella. The potential of comics in science communication. Journal
of science communication, 17(01):Y01–1, 2018.
[37] L. Gonick and W. Smith. The cartoon guide to statistics. Collins, 1993.
[38]
D. Graeber. The utopia of rules: On technology, stupidity, and the secret
joys of bureaucracy. Melville House, 2015.
[39]
T. Grossman, F. Chevalier, and R. H. Kazi. Your paper is dead! bringing
life to research articles with animated figures. In Proceedings of the
33rd Annual ACM Conference Extended Abstracts on Human Factors in
Computing Systems, pp. 461–475, 2015.
[40]
CONSORT Group. CONSORT: Consolidated standards of reporting trials. http://www.consort-statement.org/, 2010.
[41]
W.-C. Ho, Y. Ohya, and J. Zhang. Testing the neutral hypothesis of
phenotypic evolution. Proceedings of the National Academy of Sciences,
114(46):12219–12224, 2017.
[42]
K. Hornbæk, S. S. Sander, J. A. Bargas-Avila, and J. Grue Simonsen. Is
once enough? on the extent and content of replications in human-computer
interaction. In Proceedings of the 2014 CHI Conference on Human Factors
in Computing Systems, pp. 3523–3532. ACM, 2014.
[43]
J. Hosler and K. Boomer. Are comic books an effective way to engage
nonmajors in learning and appreciating science? CBE—Life Sciences
Education, 10(3):309–317, 2011.
[44]
J. Hullman and B. Bach. Picturing science: Design patterns in graphical
abstracts. In International Conference on Theory and Application of
Diagrams, pp. 183–200. Springer, 2018.
[45]
J. Hullman, X. Qiao, M. Correll, A. Kale, and M. Kay. In pursuit of error:
A survey of uncertainty visualization evaluation. IEEE transactions on
visualization and computer graphics, 25(1):903–913, 2018.
[46]
S. Huron, S. Carpendale, J. Boy, and J.-D. Fekete. Using viskit: A manual
for running a constructive visualization workshop. 2016.
[47]
M. Kang. Measuring social media credibility: A study on a measure of
blog credibility. Institute for Public Relations, pp. 59–68, 2010.
[48]
K. Karaca. Representing experimental procedures through diagrams
at cern’s large hadron collider: The communicatory value of diagram-
matic representations in collaborative research. Perspectives on science,
25(2):177–203, 2017.
[49]
C. Kilkenny, W. J. Browne, I. C. Cuthill, M. Emerson, and D. G. Alt-
man. Improving bioscience research reporting: the arrive guidelines for
reporting animal research. PLoS biology, 8(6), 2010.
[50]
N. W. Kim, N. Henry Riche, B. Bach, G. Xu, M. Brehmer, K. Hinckley,
M. Pahud, H. Xia, M. J. McGuffin, and H. Pfister. Datatoon: Drawing
dynamic network comics with pen+ touch interaction. In Proceedings of
the 2019 CHI Conference on Human Factors in Computing Systems, pp.
1–12. ACM, 2019.
[51]
R. B. Kline. What’s wrong with statistical tests–and where we go from
here. 2004.
[52]
J. Liem, C. Perin, and J. Wood. Structure and empathy in visual data
storytelling: Evaluating their influence on attitude. In Computer Graphics
Forum. Wiley Online Library, 2020.
[53]
S.-F. Lin, H.-s. Lin, L. Lee, and L. D. Yore. Are science comics a good
medium for science communication? the case for public learning of
nanotechnology. International Journal of Science Education, Part B,
5(3):276–294, 2015.
[54]
M. Lu, C. Wang, J. Lanir, N. Zhao, H. Pfister, D. Cohen-Or, and H. Huang.
Exploring visual information flows in infographics. In Proceedings of the
2020 CHI Conference on Human Factors in Computing Systems, pp. 1–12.
ACM, 2020.
[55] M. Lynch and S. Woolgar. Representation in scientific practice. 1990.
[56]
S. McCloud. Understanding comics: The invisible art. Northampton,
Mass, 1993.
[57] R. Munroe. Xkcd. https://xkcd.com/.
[58]
A. Negrete and C. Lartigue. Learning from education to communicate
science as a good story. Endeavour, 28(3):120–124, 2004.
[59]
M. O’Donnell. Why doctors don’t read research papers: scientific papers
are not written to disseminate information. BMJ: British Medical Journal,
330(7485):256, 2005.
[60] C. Olah and S. Carter. Research debt. Distill, 2(3):e5, 2017.
[61]
L. Pauwels. Visual cultures of science: rethinking representational prac-
tices in knowledge building and science communication. UPNE, 2006.
[62]
R. Peck, C. Olsen, and J. L. Devore. Introduction to statistics and data
analysis. Nelson Education, 2011.
[63]
J. Ritchie, D. Wigdor, and F. Chevalier. A lie reveals the truth: Quasi-
modes for task-aligned data presentation. In Proceedings of the 2019 CHI
Conference on Human Factors in Computing Systems, pp. 1–13. ACM,
2019.
[64]
H. R. Rothstein, A. J. Sutton, and M. Borenstein. Publication bias in
meta-analysis. Publication bias in meta-analysis: Prevention, assessment
and adjustments, pp. 1–7, 2005.
[65]
S. Ruan, J. O. Wobbrock, K. Liou, A. Ng, and J. A. Landay. Comparing
speech and keyboard text entry for short messages in two languages on
touchscreen phones. Proceedings of the ACM on Interactive, Mobile,
Wearable and Ubiquitous Technologies, 1(4):1–23, 2018.
[66]
J. D. Scargle. Publication bias (the “file-drawer problem”) in scientific
inference. arXiv preprint physics/9909033, 1999.
[67]
B. Shneiderman. Creativity support tools: Accelerating discovery and
innovation. Communications of the ACM, 50(12):20–32, 2007.
[68]
I. Simera, D. Moher, J. Hoey, K. F. Schulz, and D. G. Altman. A catalogue
of reporting guidelines for health research. European journal of clinical
investigation, 40(1):35–53, 2010.
[69]
I. Simon, D. Morris, and S. Basu. Mysong: automatic accompaniment
generation for vocal melodies. In Proceedings of the SIGCHI conference
on human factors in computing systems, pp. 725–734. ACM, 2008.
[70]
A. N. Spiegel, J. McQuillan, P. Halpin, C. Matuk, and J. Diamond. Engag-
ing teenagers with science through comics. Research in science education,
43(6):2309–2326, 2013.
[71]
H. Strobelt, D. Oelke, C. Rohrdantz, A. Stoffel, D. A. Keim, and
O. Deussen. Document cards: A top trumps visualization for documents.
IEEE transactions on visualization and computer graphics, 15(6):1145–
1152, 2009.
[72]
D. F. Stroup, J. A. Berlin, S. C. Morton, I. Olkin, G. D. Williamson,
D. Rennie, D. Moher, B. J. Becker, T. A. Sipe, S. B. Thacker, et al. Meta-
analysis of observational studies in epidemiology: a proposal for reporting.
Jama, 283(15):2008–2012, 2000.
[73]
M. Tahaei, K. Vaniea, and N. Saphra. Understanding privacy-related
questions on stack overflow. 2020.
[74] S. Takahashi et al. The manga guide to statistics. No Starch Press, 2008.
[75]
B. Victor. Explorable explanations. Online. http://worrydream.com/ExplorableExplanations/, 2011.
[76]
B. Victor. Scientific communication as sequential art. Online. http://worrydream.com/ScientificCommunicationAsSequentialArt/, 2011.
[77]
Z. Wang, H. Dingwall, and B. Bach. Teaching data visualization and
storytelling with data comic workshops. In Extended Abstracts of the 2019
CHI Conference on Human Factors in Computing Systems, pp. 1–9. ACM,
2019.
[78]
Z. Wang, L. Sundin, D. Murray-Rust, and B. Bach. Cheat sheets for data
visualization techniques. In Proceedings of the 2020 CHI Conference on
Human Factors in Computing Systems, p. 1–13. ACM, New York, NY,
USA, 2020. doi: 10.1145/3313831.3376271
[79]
Z. Wang, S. Wang, M. Farinella, D. Murray-Rust, N. Henry Riche, and
B. Bach. Comparing effectiveness and engagement of data comics and
infographics. In Proceedings of the 2019 CHI Conference on Human
Factors in Computing Systems, pp. 1–12, 2019.
[80]
J. Wood, P. Isenberg, T. Isenberg, J. Dykes, N. Boukhelifa, and A. Slingsby.
Sketchy rendering for information visualization. IEEE transactions on
visualization and computer graphics, 18(12):2749–2758, 2012.
[81]
S. C. Yarnes. Introduction to randomization and layout. Online. https://pbgworks.org/index.php?q=node/1534, 2013.
[82]
P. T. Zellweger, B.-W. Chang, and J. D. Mackinlay. Fluid links for informed and incremental link transitions. In Proceedings of the ninth ACM conference on Hypertext and hypermedia: links, objects, time and space—structure in hypermedia systems, pp. 50–57. ACM, 1998.
[83]
P. T. Zellweger, A. Mangen, and P. Newman. Reading and writing fluid
hypertext narratives. In Proceedings of the thirteenth ACM conference on
Hypertext and hypermedia, pp. 45–54. ACM, 2002.
[84]
Z. Zhao, R. Marr, and N. Elmqvist. Data comics: Sequential art for
data-driven storytelling. tech. report, 2015.