HAL Id: hal-01868738
https://hal.sorbonne-universite.fr/hal-01868738v2
Submitted on 1 Oct 2018
To cite this version:
Evanthia Dimara, Steven Franconeri, Catherine Plaisant, Anastasia Bezerianos, Pierre Dragicevic.
A Task-based Taxonomy of Cognitive Biases for Information Visualization. IEEE Transactions on
Visualization and Computer Graphics, Institute of Electrical and Electronics Engineers, In press.
<hal-01868738v2>
A Task-based Taxonomy of
Cognitive Biases for Information Visualization
Evanthia Dimara, Steven Franconeri, Catherine Plaisant, Anastasia Bezerianos, and Pierre Dragicevic
Abstract—Information visualization designers strive to design data displays that allow for efficient exploration, analysis, and
communication of patterns in data, leading to informed decisions. Unfortunately, human judgment and decision making are imperfect
and often plagued by cognitive biases. There is limited empirical research documenting how these biases affect visual data analysis
activities. Existing taxonomies are organized by cognitive theories that are hard to associate with visualization tasks. Based on a
survey of the literature we propose a task-based taxonomy of 154 cognitive biases organized in 7 main categories. We hope the
taxonomy will help visualization researchers relate their design to the corresponding possible biases, and lead to new research that
detects and addresses biased judgment and decision making in data visualization.
Index Terms—cognitive bias, visualization, taxonomy, classification, decision making.
1 INTRODUCTION
Fig. 1: Overview of 154 cognitive biases organized by experimental task (Estimation, Decision, Hypothesis assessment, Causal attribution, Recall, Opinion reporting, and Other). Dots represent the different cognitive biases.
VISUALIZATION designers must consider three kinds
of limitations: those of computers, of displays, and
of humans [1]. For humans, the designer must consider
the limitations of human vision along with the limitations
of human reasoning. We focus on the latter, highlighting
pitfalls of human judgment and decision making.
Our judgments and decisions routinely rely on approxi-
mations, heuristics, and rules of thumb, even when we are
not consciously aware of these strategies. The imperfections
of these strategies manifest themselves as cognitive biases
[2]. While visualization tools are meant to support judg-
ments and decisions, little is known about how cognitive
E. Dimara was with Inria and Sorbonne Univ.
E-mail: evanthia.dimara@gmail.com
S. Franconeri was with Northwestern Univ.
E-mail: franconeri@northwestern.edu
C. Plaisant was with Univ. of Maryland, and Inria Chair, associated with
the Inria Foundation.
E-mail: plaisant@cs.umd.edu
A. Bezerianos was with Univ. Paris-Sud & CNRS (LRI), Inria, and Univ.
Paris-Saclay. E-mail: anastasia.bezerianos@lri.fr
P. Dragicevic was with Inria, and Univ. Paris-Saclay.
E-mail: pierre.dragicevic@inria.fr
biases affect how people use these tools. To understand how
visualizations can support judgment and decision making,
we first need to understand how the limitations of human
reasoning can affect visual data analysis.
Within the information visualization community, there
has been a growing interest in decision making [3], [4] and
cognitive biases [5], [6], [7]. The IEEE VIS conference has
held two workshops [8] on cognitive biases in information
visualization. Other papers have acknowledged the impor-
tance of studying cognitive biases in visual data analysis [5],
[9], [10]. Despite this growing interest, empirical work re-
mains limited. Most evaluations of visualizations do not in-
clude decision tasks [4], and those that do typically assume
that human decision making is a rational process. Further-
more, studies that either confirm or disprove the existence
of a particular cognitive bias in information visualization
are rare [6]. Meanwhile, most experimental tasks studied in
the cognitive bias research use textual representations, and
the information given consists of very small datasets - or no
data at all. Therefore, the interplay between cognitive biases
and visual data analysis remains largely unexplored.
We aim to help bridge the gap between cognitive psy-
chology and visualization research by providing a broad
review of cognitive biases, targeted to information visu-
alization researchers. We define a taxonomy of cognitive
biases classified by user task, instead of by proposals for
psychological explanations of why biases occur. The goal
of the paper is to lay out the problem space, facilitate
hypothesis generation, and guide future studies that will
ultimately help visualization designers anticipate – and
possibly alleviate – limitations in human judgment.
The paper is organized as follows: Section 2 provides background information with definitions and related work. Section 3 describes the process we followed to generate the taxonomy. Section 4 reviews each category of bias, discusses related work in visualization, and highlights potential research opportunities. Finally, limitations are discussed
before the conclusion section.
2 BACKGROUND
We first describe what cognitive biases are and why they are
challenging to study. We then review existing taxonomies
of cognitive biases and their limitations. More detailed de-
scriptions and references for individual biases are included
in the next section where we describe our taxonomy.
2.1 What is a cognitive bias?
Normative models of judgment and decision making as-
sume that people follow axioms of rationality, have a fixed
set of preferences, and make decisions that maximize their
benefit. Expected utility theory, introduced by John Von
Neumann and Oskar Morgenstern in 1947, is one classic
example [11]. This theory laid down a set of normative rules,
which were later extended [12], [13], [14], [15], [16]. Any
decision that violated these rules was considered irrational.
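As a point of reference (our notation, not taken from [11]), the core normative rule of expected utility theory can be written as: choose the option $A$ that maximizes $EU(A) = \sum_i p_i\, u(x_i)$, where the $x_i$ are the possible outcomes of $A$, the $p_i$ their probabilities, and $u$ the decision maker's utility function. A choice is deemed rational if it selects an option with maximal expected utility.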
In contrast, evidence for systematic violations of such
rules was identified by Kahneman and Tversky [2], [17],
and named “cognitive biases”. For example, in an exper-
iment where people were asked to choose a program to
combat an unusual disease, the program framed as having
a “33% chance of saving a life” was preferred over the
program described as having a “66% chance of death” –
despite the two programs being the same [18]. This bias
was named the framing effect. Another well-known example
is the confirmation bias, according to which people seek and
favor information that confirms their beliefs [19].
Pohl [20] defines a cognitive bias as a cognitive phe-
nomenon which:
1) reliably deviates from reality,
2) occurs systematically,
3) occurs involuntarily,
4) is difficult or impossible to avoid, and
5) appears rather distinct from the normal course of
information processing.
Thus, a cognitive bias is a cognitive phenomenon which
involves a deviation from reality that is predictable and
relatively consistent across people. A person who is subject
to a cognitive bias is unaware of it, and believes that their
decision, judgment, or memory is unbiased. Biases often
persist even when people are informed and trained on how
to overcome them [21], [22]. The last criterion from Pohl fur-
ther contrasts cognitive biases with more mundane forms of
human error such as misunderstanding or misremembering:
cognitive biases differ from regular thought processes and
as such, they “stick out” and “pique our curiosity” [20].
2.2 Difficulties with the cognitive bias concept
A major difficulty with the concept of cognitive bias lies in
deciding what constitutes a deviation from “reality”. This
stands in contrast with the study of perceptual biases. In
a visual illusion, reality is physically defined, and one can
show that perception objectively diverges from that reality.
However, for cognitive biases, reality is often difficult to
operationalize. Reality is typically defined based on norma-
tive models of judgment and of decision making, but such
models are not universally accepted and new normative
models can emerge in the future, thereby changing what
constitutes “reality”. Furthermore, the quality of a decision
or a judgment is difficult to assess without full information
about the cost of ’incorrect’ decisions, compared to the costs
of following normative principles [23]. Thus, the concept
of cognitive bias has fueled long controversies within the
field of decision making. While some researchers maintain
that cognitive biases are real and have important implica-
tions [24], others argue that cognitive heuristics that yield er-
rors in experimental settings can actually be effective strate-
gies for solving complex problems in the real world [25].
The answer likely lies in-between: in many cases, heuris-
tics and rules of thumb can simplify complex problems and
yield effective decisions, given that humans have limited
time and cognitive resources. Yet we know that heuristics
do not consistently lead to optimal decisions [2]. Some
heuristics can routinely lead to decisions that deviate from
optimality in a systematic and predictable fashion, possibly
with real-life consequences. Thus, despite the difficulties in
defining and studying them, cognitive biases are important
and visualization researchers need to be aware of them.
2.3 Taxonomies of cognitive biases
We review existing taxonomies from different domains (psy-
chology, decision systems, intelligence analysis, visualiza-
tion). We found that the majority of existing taxonomies
are explanatory: they organize biases according to why they
occur, by considering cognitive mechanisms and explana-
tory theories. In contrast, our taxonomy is task-based. It
organizes biases based on the experimental tasks they have
been observed in, in order to help visualization researchers
identify biases that may affect visualization tasks.
2.3.1 Explanatory taxonomies
Tversky and Kahneman classify biases according to which strategy (heuristic) people are hypothesized to follow when making a decision or judgment [17]. For example, some biases
are classified as outcomes of the “representativeness heuristic”
where people estimate probabilities by the degree to which
one event is similar to another event. Imagine we are given
a salient description of an imaginary person named Linda
with adjectives such as “bright”, “outspoken”, “deeply
concerned with discrimination issues and social justice”. If
asked to choose the most likely alternative between “Linda is
a bank teller” or “Linda is a bank teller and is active in the
feminist movement”, people tend to choose the second even
though the conjunction of the two events cannot be more
likely (in terms of probabilities) than either event alone [23].
Another class of biases includes the ones that are considered
as outcomes of the “availability heuristic”, in which people
estimate an event as frequent or imaginable if they can recall
it more easily in their minds, neglecting to apply a rational
probability rule [17]. For example, hearing news about a
plane crash may temporarily alter people’s feelings on flight
safety [26]. Similarly, Baron [27] classifies 53 cognitive biases
based on both the normative models they violate (e.g., Bayes
theorem, regression to the mean) and their explanations
(e.g., availability). This strategy-based classification has drawn criticism from Gigerenzer [20], who considers these
strategies to be conceptually vague, imprecise and difficult
to falsify, while other scientists give alternative explanations
for why most of these biases occur [20]. In contrast, our
taxonomy groups both these biases under the class "Estimation", to indicate that they occurred in experimental tasks where
decision makers attempt to estimate probability outcomes.
Other taxonomies from the psychology literature (e.g.,
[28], [29]) also consider possible cognitive mechanisms that
may lead to bias. One common approach is to consider
the dual-process model of human reasoning [30], assigning
biases either to system 1 of the model (heuristic/intuitive
reasoning) or to system 2 (analytic/reflective reasoning).
Padilla et al. [31] expanded the dual-process model for
visualizations, emphasizing that System 1 may result in faulty decision making if salient aspects of the encodings
focus on non-critical information. While these groupings
explain when biases can occur in the reasoning process,
in contrast to our taxonomy they do not explicitly group
biases based on the tasks involved. Moreover, recent work
indicates that the relation between the two systems in the
dual-process model, and the heuristics (strategies) discussed
by Tversky and Kahneman [17], is not so clear-cut [32], [33].
These taxonomies that focus on the mechanisms behind
the origin of biases (e.g., heuristics) are often not exhaustive
when it comes to including all biases, but rather give exam-
ples of biases originating from specific reasoning processes.
2.3.2 Taxonomies considering tasks
Classifications developed in the domain of decision-support
information systems also tend to be mostly explanatory, but
some consider high-level tasks when grouping biases. In
1986, Remus and Kottemann [34] divided about 20 biases
into two categories, data presentation and information processing (high-level tasks indicating when biases occur), and later
subdivided these categories based on the reasons why these
biases occur (e.g., use of a certain heuristic, not understand-
ing statistics, etc.). Similarly, Arnott in 2006 [35] considered
the nature of the cognitive bias and classified 37 cognitive
biases into categories, examples of which are: situation, for
biases related to how a person responds to the general
decision situation, or confidence, for biases that are believed
to occur in order to increase the confidence of a person.
Arnott [35] did not group biases by task, but he did map
each bias category to components of a decision-support
system schema, e.g., data acquisition, processing, or output.
These components can be seen as high-level tasks.
Both of these taxonomies associate biases with complex
data processing, though these associations are not sup-
ported by strong empirical evidence. Most of the biases have
only been verified with small puzzles using static textual
representations and not in the context of using a decision-
support computer system dealing with data.
While not proposing a cognitive bias taxonomy per se,
Heuer [36] discusses biases which are likely to affect high-
level tasks of intelligence analysis, e.g., hindsight bias in the
evaluation of reports.
More recently in psychology, Pohl [20] classified cog-
nitive biases into “memory”, “thinking”, and “judgment”
biases. The memory class involves systematic errors in recall-
ing or recognizing events [20]. The thinking class involves
systematic errors in applying a certain rule (e.g., Bayes’ the-
orem, hypothesis testing, syllogistic reasoning) [20]. These
rules come from several norms, e.g., probability theory,
expected utility, or the falsification principle, which deter-
mine the actions that deviate from “reality”. The judgment
class involves systematic errors when subjectively rating
a stimulus (e.g., pleasantness, frequency, or veracity) [20].
In judgment biases, people can be affected by feelings of
familiarity or confidence. As Pohl [20] himself mentions, this
taxonomy has several limitations. Most biases in judgment
and thinking also involve memory processes such as en-
coding, storage, and retrieval [20]. Also, when the material
to memorize is outside of the laboratory, memory and
subjective judgment biases cannot be distinguished because
a faulty recall can be the reason for a faulty judgment (or
not) [20]. Judgment and thinking classes also often overlap,
e.g., people may not know that they are supposed to apply a
Bayesian rule to estimate a probability and instead perform
a subjective judgment of frequency. Our taxonomy goes
beyond this grouping, considering a larger number of tasks
to provide a more detailed classification of when biases
occur.
In summary, while high-level tasks have been used to
name a few categories in existing taxonomies, not all biases
were grouped by task. Our taxonomy provides a grouping
based on the lower level tasks where these biases have been
observed and measured.
2.3.3 Reviews of biases in visualization
We found no comprehensive review of cognitive biases in
the visualization literature. Some studies have begun to or-
ganize biases relevant to specific aspects of visual analysis.
Zuk and Carpendale [9] categorize biases and heuristics
relevant to uncertainty, discussing how improved design
could mitigate them. For example, when making decisions
individuals may be unable to retrieve from memory all
relevant instances to a problem (availability bias) and rely
on recent information (recency effect), but a visualization
can display all instances [9]. Nevertheless, the focus is not
on grouping all known biases based on the tasks where
they have been observed, but rather on a subset of biases
related to uncertainty in reasoning, grouped by how we
could mitigate them using visual analysis. Ellis and Dix [10]
briefly discuss seven biases that could affect visual analysis
(such as anchoring and availability biases) and emphasize the
lack of studies investigating whether or not visualizations
elicit cognitive biases.
2.3.4 Need for a new taxonomy
The majority of existing bias classifications are explanatory
(Sec. 2.3.1) based on generic explanations of their nature,
such as why the bias occurs or which heuristic people use
when it appears. This evolving body of work indicates that
there is still no agreement among scientists about the cause
of cognitive biases, as observed by Pohl [20]. Moreover, in
some cases the association presupposed between biases and complex data processing is not strongly supported
[34], [35], or is fairly high-level [20]. While understanding
the nature and provenance of biases is important, we take a
more practical approach. Based on papers that have experi-
mentally studied the biases, we classify them based on the
tasks in which they have been observed and measured.
Finally, most classifications include only a small subset
of biases, usually 10-40, whereas a collaboratively-edited
Wikipedia page lists 176 known cognitive biases [37] as of
today. Some classifications only gave examples [34], others
listed a small sample of these biases [35]. One reason behind
this limitation may be that not all biases are equally relevant to all
scientific domains. For example, the in-group favoritism bias,
where people tend to make judgments that favor their own
group, is more important in social psychology. The attraction
effect, where people’s choices are affected by inferior alterna-
tives, is primarily studied in marketing research. Therefore,
most taxonomies tend to account for the biases that are
established in the respective domain of the authors. Another
explanation, especially for explanatory taxonomies, could
be that the objective of the taxonomy is to offer an abstract,
unified theory that explains multiple biases (giving a few
examples), rather than to organize and extensively review
all biases covered in previous works. Data visualization can
be used in a variety of domains, so researchers need to be
aware of a larger set of biases.
3 METHODOLOGY
After a standard bibliographic search we gathered an initial
list of biases. The second step was to search for the most
representative paper that empirically tested each of the
biases in our list. Third, we categorized the cognitive biases,
using a bottom-up grouping method similar to card sorting.
The categories and their labels were refined iteratively in
an attempt to make them more useful to visualization re-
searchers. We categorized all biases and reviewed each one
from a visualization perspective by: 1) searching for existing
relevant visualization work, if any (reported in Table 2);
and 2) brainstorming future opportunities for visualization
research (reported in their respective category description).
The new cognitive bias taxonomy proposed in this paper
is organized by the tasks users are performing when the
bias is observed. For example, estimating the likelihood of
a heart attack or breast cancer is considered an estimation
task. Choosing between different health insurance policies
is a decision task. The causes for the bias may have been
studied (e.g., false probability estimations may lead to a bad
insurance choice), but this is ignored in our classification of
the biases. Our taxonomy focuses on the tasks where biases
occur, instead of on why they occur, as previous taxonomies have done.
3.1 Initial list of biases
We decided to start with the list of cognitive biases (and
their synonym names) we had found on the Wikipedia page
"List of Cognitive Biases" [38], retrieved on 20 November 2017. With a total of 176 cognitive biases, it was by far the
longest list of biases we could find. We refer to this page
as the Wikipedia list page. Each entry of this list points to
a separate individual Wikipedia page describing the bias.
Later on, a few missing biases were added (see below).
Although Wikipedia was the largest list of biases we found,
it is not curated by researchers. Therefore, the next section
describes our method to verify which of these cognitive
biases are detected through reliable experimental protocols.
3.2 Selection of sources
Because of the large number of biases, we kept only one
representative paper per bias in order to keep the number
of references manageable. For each of the 176 biases, we
used the following process:
Step 1: We searched whether the bias has been men-
tioned in InfoVis literature by typing the search term “bias
name” + “information visualization” in Google Scholar. We
collected all InfoVis papers mentioning the bias (See Table 2,
column ”Relevance to InfoVis”). In the visualization papers
mentioning a bias, we collected the source reference used to
describe the bias and determined if it was an eligible source
(see below). We kept only one source paper.
Step 2: When we could not find an InfoVis paper men-
tioning the bias, or if these papers did not cite an eligible
source, we searched for eligible sources in the Wikipedia list
page, or in the individual Wikipedia page. If we did not find
an eligible source in Wikipedia, we searched for a source
by typing the search term “bias name” + “experiment”
in Google Scholar. We only considered the first page of
results and examined the papers by decreasing order of
citations. We picked the first eligible paper. If no eligible
paper was found, we repeated the process using a synonym
for the cognitive bias. Synonyms had been collected on the
Wikipedia pages and in academic sources (see Table 3).
Step 3: When no source could be found at all, the bias
was removed from the list. This occurred for 21 of the biases
on the Wikipedia list.
Source eligibility: A source was considered eligible if:
1) It was a peer-reviewed paper.
2) We were able to access the document.
3) AND the paper either:
a) reported on a human subject study testing
for the existence of the bias (we did full-
text searches for the terms ”experiment” and
”study”), or
b) cited another paper reporting on such a study,
and described the paper’s experimental task
in detail.
Method 3.b was used when the original paper was too
old for the document to be accessible, or when a peer-
reviewed survey existed that described experimental tasks
in enough detail. In general, we favored literature-reviews
as references when they provided a good overview of the
different studies conducted on a particular cognitive bias.
We applied the accessibility rule (2) only to help us select
one source over another; no bias was eliminated because of
this rule. The reliability of the experiment (e.g., experiment
design, validity of statistical methods, effect size, etc.) was
not examined.
3.3 Final list of biases
At the end of the source selection process we were left with
151 cognitive biases out of the 176 in the initial list. The
Wikipedia list contained 13 duplicate biases that either pointed to the same individual Wikipedia page or had different individual Wikipedia pages referencing the same work. We also added 3 additional cognitive biases we had found in the literature: the ballot names bias (a bias identified in InfoVis [39] but not given a specific name), and the phantom effect [40] and the compromise effect [41], which are mentioned in the attraction effect literature but not listed in the Wikipedia page.
Two biases on the list were included even though they
blur the line between a cognitive bias and a perceptual
illusion. The Weber-Fechner law occurs when a fixed absolute difference between quantities seems smaller as the baseline quantity increases [42]. We retained this bias because it has cognitive analogues: a fifty dollar upgrade offer does not seem expensive when buying a 20,000 dollar car, but seems large when buying a 300 dollar phone. We
also kept Pareidolia, the propensity to see faces where none
exist (within clouds, or on toasted bread). It is a perceptual
bias, but analogous to cognitive biases such as the confirma-
tion bias, as both show the existence of a heavily weighted
prior probability toward a particular state of the world.
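To make the Weber-Fechner example above concrete, here is a minimal illustrative sketch (our own, reusing the dollar amounts mentioned above) showing that the same absolute difference corresponds to very different relative fractions:

```python
# Illustrative sketch: the same absolute difference ($50) against different baselines.
def relative_difference(delta, baseline):
    """Weber-style fraction: how large the change is relative to the baseline."""
    return delta / baseline

print(relative_difference(50, 20_000))  # 0.0025 -> barely noticeable on a car purchase
print(relative_difference(50, 300))     # ~0.17  -> substantial on a phone purchase
```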
To non-experts, some of the biases in the table may appear similar to each other, but we kept them separate in cases where bias researchers considered them to be different. In
addition, if two biases are known under different names and
reported in different research papers, we kept both biases,
even if they appeared to be similar.
3.4 Establishing categories
To produce a task-based taxonomy we first had to identify
the type of task used in the study of the bias. We then used
open card-sorting analysis to generate categories [43].
Task Identification: We went back to the original exper-
iment protocol described in the representative paper and
identified the task participants had performed when the
bias was measured. Our assumption was that tasks should
have many similarities across biases and that the number of
tasks would be of manageable size. After identifying the
experiment task used in the study, we proposed a short
label to describe the task. When appropriate we reused
previously assigned labels if they adequately described the
task. If not, we proposed a new short label for the task.
One coder performed the initial labeling of all tasks, then
another two coders reviewed and proposed revisions to the
labels until all three coders were satisfied.
Task Grouping: Similar tasks/labels were then grouped
in seven categories, to form the tasks in our taxonomy. An
initial grouping and task group names were proposed by a
single person, and iteratively revised by four others (the co-
authors). Tasks that could not be assigned to large groupings
were placed under the category ”Other”.
The task-categories are (with the color used in Figure 1):
1) ESTIMATION
2) DECISION
3) HYPOTHESIS ASSESSMENT
4) CAUSAL ATTRIBUTION
5) RECALL
6) OPINION REPORTING
7) OTHER
These categories were created by the open card-sorting
method described above, relying on only the papers from
the bias literature. To avoid constraining these categories
to the context of data visualization, we purposely did not
base these categories on the numerous task taxonomies (e.g.,
[44], [45]) that have been proposed in the data visualization
literature. We contrast our resulting taxonomy with these
alternatives within the ’Visualization Research’ subsection
for each bias, and more globally in Section 5.2.
Because each category includes a fairly large number
of biases, we added subcategories. Since there was no clear set of subtasks to use for these subcategories, we instead chose a set of subcategories (which we call flavors) that reflect other types of similarities among biases. We do not view these flavors as a primary contribution of this work, because they were developed in an intuitive way, rather than through a rigorous division into tasks that can be traced directly to a user study protocol. We hope that the
flavors will help readers see connections between the biases,
both within and between categories.
The flavors we identified are:
1) Association, where cognition is biased by
associative connections between information items
2) Baseline, where cognition is biased by
a comparison with (what is perceived as) a baseline
3) Inertia, where cognition is biased by
the prospect of changing the current state
4) Outcome, where cognition is biased by
how well something fits an expected or desired outcome
5) Self perspective, where cognition is biased by
a self-oriented view point.
Figure 2 illustrates how the task categories are distributed
among flavors. In the next section we will describe the
taxonomy table, and then discuss each category in detail.
4 TASK -BASED TAXONOMY OF COGNITIVE BIASES
The complete taxonomy is summarized in Table 2. The first
column shows the task category color of the bias. The col-
umn Flavor tries to capture the general phenomenon behind
the bias. The column Cognitive bias shows the name of each
bias (synonym names for some biases can be found in Table
3). The column Ref shows the selected representative paper
in which the bias was experimentally detected. The column
Relevance to InfoVis shows how the bias has been studied
in visualization research. The last column provides a very
short description of the bias.
In order to reveal the scarcity of research about cognitive
bias in visualization we color-coded the InfoVis column:
various shades of red indicate that the bias has been em-
pirically studied. Black indicates that the bias has been
discussed in a visualization paper but not yet studied.
Shades of gray represent our estimate on how relevant the
bias may be for visualization research (dark gray for biases
more likely to be important, light gray for those less likely).
This is a subjective rating only meant to help the reader get
started when using the table.
We will now review each category by providing examples of tasks, describing a subset of the biases using examples from psychology research, discussing related work in visualization, and highlighting potential research opportunities.
4.1 Biases in estimation tasks
In estimation tasks, people are asked to assess the value of a
quantity. For example, in real-life decision making tasks, a
person may need to estimate the likelihood of theft to decide
whether or not to insure their car, or to estimate their future
retirement needs in order to choose a retirement plan.
The ESTIMATION category includes all systematic
biases that have been experimentally observed when partic-
ipants were asked to make an estimation. We identified 33
estimation biases, listed in Table 2.
4.1.1 Psychology research
In cognitive bias research, many estimation tasks require
assessing the likelihood that an event will occur in a hypo-
thetical situation or in the future (that is, prediction tasks).
Thus, much of our discussion in this section focuses on
probability estimation tasks.
Several psychology experiments involve a probability
estimation task where the correct answer can be derived by
calculation (e.g., by applying Bayes’ theorem). Systematic
deviations from the true answer are taken to be suggestive
of a cognitive bias. For example, research on the base rate
fallacy suggests that people can grossly overestimate the
likelihood of an event (e.g., having breast cancer after a
positive mammography) because they tend to focus on
the specific event instance while ignoring probabilities that
apply to the general population (e.g., the number of women
with breast cancer) [46]. Probability estimation tasks with a
well-defined ground truth have helped uncover other cogni-
tive biases such as the conjunction fallacy [47], where people
believe that specific events are more probable than general
ones. Moreover, according to studies on the conservatism bias,
people typically do not sufficiently revise their probability
estimations in the light of new information [48], [49], [50].
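As a concrete illustration of the calculation people tend to get wrong, the following minimal sketch applies Bayes' theorem to a mammography-style scenario; the numbers are hypothetical placeholders chosen by us, not values from [46]:

```python
# Hypothetical numbers illustrating the base rate fallacy: even with a fairly
# accurate test, a positive result implies a low probability of disease when
# the base rate is small.
prior = 0.01           # P(disease): base rate in the screened population
sensitivity = 0.80     # P(positive | disease)
false_positive = 0.10  # P(positive | no disease)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive  # Bayes' theorem
print(round(posterior, 3))  # ~0.075, far lower than most people's intuitive estimate
```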
Some experiments involve probability estimation tasks
without ground truth. Here, responses are not evaluated
based on how much they agree with a true answer, but on
how consistent they are with basic normative principles of
rationality. For example, according to experiments on the
optimism bias, when people are asked to make predictions
about future events (e.g., finding a dream job, getting di-
vorced, or getting lung cancer), they tend to make more
optimistic predictions for themselves than for others [51].
A number of experiments use estimation tasks that do
not involve explicit probabilities, but have a probabilistic
component. Frequency estimation is such an example. As
an example of bias, people tend to think that words starting
with the letter “R” are more frequent than words having the
letter “R” in third position [52]. This is thought to occur
because people employ the availability heuristic, whereby
words starting with “R” are easier to retrieve from memory
and, therefore, are perceived to be more frequent [52]. Time
prediction is another example of a “quasi-probabilistic” esti-
mation task, since estimating the time or duration of a future
event is related to estimating the probability that the event
will fall before or after a certain moment in time. Several
studies have been conducted where participants were asked
to predict the time it will take them to complete a task
(e.g., an academic project or an assignment), and where
predictions were compared with actual outcomes [53]. These
studies consistently show that people tend to be overly
optimistic in their predictions irrespective of their past
experience, a bias called the planning fallacy [53].
Finally, some experiments involve clearly non-
probabilistic estimation tasks, such as estimating country
populations. In most studies, responses are again not
evaluated based on a ground truth, but based on how
consistent they are with basic principles of rationality. For
example, Tversky and Kahneman asked participants to
spin a fortune wheel, and then to estimate the number
of African countries in the UN [17]. People’s responses
tended to be close to the number the fortune wheel landed
on [17]. Since this number bears no relationship with the
question, its influence on the answers is strongly suggestive
of irrationality. That tendency for people’s quantitative
estimations to be biased toward a value they were initially
exposed to is named the anchoring effect [17]. On the other
hand, Goldstein and Gigerenzer [54] showed that when
students are asked to compare cities by their population,
they perform better with cities that are not from their home
country thanks to their use of the recognition heuristic (i.e.,
if I never heard of a city, then it must be small) [54]. This
experiment should serve as a warning that heuristics do
not necessarily lead to cognitive biases, and can sometimes
even yield more accurate judgments.
Other examples of non-probabilistic estimation tasks are
experiments in which participants have to estimate their
performance after solving a given problem. Most often
participants exhibit overconfidence, i.e., their self-rating is
higher than their accuracy [55]. In a smaller number of bi-
ases, people exhibit low confidence. Confidence can change
according to the difficulty of the task (overconfidence for
hard tasks, conservatism for easy ones [56]), or the expertise
of the participant (overconfidence in non-specialists, conser-
vatism in experts [57]).
4.1.2 Visualization research
Although estimation tasks are not explicitly listed in visual-
ization task taxonomies, they are omnipresent in visual data
analysis. An analytic task may involve estimation whenever
an exact answer is not possible or is not required.
In information visualization, there has been very little
research on estimation biases that occur at the cognitive
level (as opposed to the perceptual level), with the notable
exception of research on Bayesian reasoning [58], [59], [60],
[61]. Researchers have studied whether visualizations such
as Euler diagrams and frequency grids can reduce the base
rate fallacy [59], [60]. Even though the studies did not observe
a systematic bias, people were often highly inaccurate in
their responses, and visualizations did not seem to provide
clear benefits [59], [60]. Micallef et al. [59] conjectured that
many participants may have ignored the visualizations and
attempted calculations using the numbers provided in the
textual narrative (also shown in the visualization condition).
Their last experiment indeed suggests that visualizations
can have a facilitating effect if numerals are not provided
in the text, leading the authors to conclude that presen-
tation formats that encourage approximate estimation and
discourage precise calculation need to be further investi-
gated [59].
Several other information visualization studies have ex-
amined biases in probability and frequency estimation tasks
(e.g., [62], [63]), but the biases that were investigated were
perceptual rather than cognitive.
Two recent studies have examined the impact of the an-
choring effect in an information visualization context. Valdez
et al. [64] gave participants a series of class separability tasks
on scatterplots, and found that their responses were influ-
enced by the first scatterplot shown. Similarly, Cho et al. [7]
asked participants to explore a real Twitter dataset using
a visual analytic system, and found that their responses
to a quantitative estimation question were influenced by
the presence of an anchor. While Cho et al. only found an
effect when the anchor was presented as a numeral, user
log analyses suggested that visualization anchors can affect
participants’ analytic process.
Xiong et al. [65] provided preliminary but compelling
evidence for the existence of a curse of knowledge bias in
visualization communication, using an estimation task. The
curse of knowledge refers to people’s tendency to overes-
timate how much they share their knowledge or expertise with other people [66]. Xiong et al. [65] showed that partic-
ipants who are exposed to a text narrative before seeing a
visualization find the patterns related to the text narrative
more salient. Crucially, participants tended to predict that
the same patterns would be salient to viewers who were not
exposed to the textual narrative.
In addition to empirical work, the information vi-
sualization literature has produced position papers that
discuss how visualization designs might alleviate (rather
than cause) estimation biases. For example, Dragicevic and
Jansen [67] list four possible strategies to alleviate the plan-
ning fallacy using visualizations and data management tools,
while Dimara et al. [26] suggest three ways visualization
could be used to alleviate the availability bias. However,
no tool has been developed and no experiment has been
conducted to evaluate the effectiveness of these strategies.
Although self-reported confidence is a common metric in
information visualization evaluation [68], it can be subject
to biases [56], [57]. Findings from cognitive bias research
suggest that confidence metrics need to be calibrated [69]
and put in context with task accuracy. Even if confidence
judgments can be compatible with normative statistical
principles [70], they can be easily influenced by context.
Previous visualization research suggests that indirect con-
fidence assessment (e.g., “How likely are you to change
your choice if a recommendation system gives you another
suggestion?”) may be more reliable than direct confidence
ratings (e.g., “How confident are you in your choice?”)
[4]. However, to our knowledge, no previous visualization
study examined biases related to performance estimation.
There is too little empirical data available at this point
to provide strong guidelines for practitioners. One possible
recommendation is that visualizations should be designed
to minimize the number of estimations needed to derive
answers to questions. One way of achieving this is by cal-
culating and visually presenting relevant summary values
to the user. However, it is typically impossible for the de-
signer to anticipate all questions a user may have about the
data. When users are likely to derive answers to unantici-
pated questions by combining several pieces of information,
preliminary work on Bayesian estimation problems [59]
suggests that showing numbers next to (or on top of)
visualizations can be counterproductive. The reason is that
numbers prompt users to calculate, and miscalculations can
yield errors much greater than imperfect approximations.
Unless precise values are needed, it is advised to encode all
quantitative values visually.
4.2 Biases in decision tasks
By decision task, we refer to any task involving the selection
of one over several alternative options. Psychology experi-
ments using such tasks are called choice studies. Study partic-
ipants in these studies are “required to exhibit a preference for
one of the several stimuli or make a different prescribed response
to each of them” [71]. For example, people can choose a car to
purchase or a university to apply to.
The DECISION category includes all systematic bi-
ases that have been experimentally observed when partici-
pants are asked to make a decision. We identified 33 decision
biases, listed in Table 2.
4.2.1 Psychology research
Some decision biases occur when people are dealing with
uncertainty. For example, in ambiguity effect people tend to
avoid decisions associated with ambiguous outcomes [72];
or in the zero-risk bias, if the set of choices contains an alter-
native that eliminates risk completely they tend to stick to it
even if it is not the optimal decision [73]. People also often
show different preferences based on whether the problem
is a gain (e.g., allowances) or a loss (e.g., prohibitions) [74],
known as loss aversion, or if it is simply framed as a gain or
a loss, known as framing effect [18].
Nevertheless, not all decision biases are related to uncer-
tain outcomes or framing. When people choose one alterna-
tive over the other, they are often unconsciously influenced
by factors irrelevant to the decision to be made. In most
situations, decision makers do not evaluate alternatives in
isolation, but within the context in which the alternatives
occur [23]. One well-studied example is the attraction effect,
where one’s decision between two alternatives is influenced
by the presence of irrelevant (inferior) alternatives [75].
In some biases, such as the less is better effect, people's decisions are affected by whether the alternatives are presented
separately or juxtaposed [76], or by whether the alternatives
are presented among more extreme ones (compromise effect)
[41], unavailable ones (phantom effect) [40], or more familiar
alternatives (mere-exposure effect) [77].
Other cognitive biases refer to people who appear more
attracted to alternatives for which they can receive an
immediate reward such as the hyperbolic discounting [78],
or for which they had previously invested self-effort, such
as the IKEA effect [79]. Examples also include attraction to
alternatives which people owned in the past [80] (endowment
effect), or avoiding to make any decision that requires a
change of one’s current state (status quo bias) [81].
4.2.2 Visualization research
Decision tasks are common when using visualizations. Dimara et al. defined a decision task, named the multi-attribute choice task, which articulates the link between decision making and multidimensional data visualizations. Several visu-
alization systems exist that are explicitly designed to sup-
port multi-attribute choice tasks [82], [83], [84] or decision
making in general [85], [86], [87]. Visualization researchers
often mention decision biases under uncertainty [9], [10],
[86], [88] but there is very limited empirical work studying
their existence in visualization [89], [90].
Exceptions include the attraction effect for which Dimara
et al. showed that it also exists in scatterplot visualizations,
and confirmed that even if data is correctly visualized and
understood, the decision may still be irrational [6]. Recent
work [91] showed that visualizations can mitigate the at-
traction effect by allowing users to remove information from
the display that should not affect a rational decision making
process. Zhang et al. [92] further showed that startup com-
panies presented with static tabular visualizations of star
ratings tended to be subject to loss aversion bias.
Another example of a decision bias that was studied in a
visualization context was the identifiable victim effect, where
people are more likely to help a single concretely described
person in need, compared to larger numbers of abstractly
or statistically described people [93]. In contrast, Boy et al. [94] found that data graphics that used similarly concrete anthropomorphized icons did not increase a user's empathy
for vulnerable populations. This result shows that it can be
hard to anticipate the results of combining cognitive bias
findings with visualization designs.
Visualizations can also be used to find evidence for a
cognitive bias. An example of such a decision bias was
found in government elections. Several scientific studies had
long investigated the hypothesis that the order of candidates
in the ballot papers can affect the result of the elections,
but they only found inconclusive evidence. Wood et al.
[39] collected data from 5000 candidates of the Greater
London local elections held on the 6th May 2010, analyzed
them using hierarchical spatially arranged visualizations,
and showed that the position1 of candidate names on the
ballot paper indeed influenced the number of votes they
received. Wood et al.’s visual analytic techniques showed
that an alphabetical tabular representation of candidates can
lead to biased election results.
1. The ballot names bias shares some similarities with the serial-position effect [95], where people better recall the first (primacy) and last (recency) items in a list. However, it is not the same bias, as the ballot task is to choose a candidate and not to recall her name. It is nevertheless possible that people chose the candidates who were easier to remember.
Although decision biases can be critical for visualization systems that target decision support [31], evaluations of decision-support visualizations rarely assess the quality of users' decisions (e.g., with respect to their consistency with personal preferences [4] or with rational principles [6]).
4.3 Biases in hypothesis assessment tasks
By hypothesis assessment task, we refer to any task involving
an investigation of whether one or more hypotheses are true
or false. The term “hypothesis” here does not necessarily
refer to a formal statistical hypothesis, but any statement,
informal or formal, that can be either confirmed or discon-
firmed using previous or new knowledge.
The HYPOTHESIS ASSESSMENT category includes
all systematic biases that have been experimentally ob-
served when participants were asked to assess if a statement
is true or false. We identified 11 hypothesis assessment
biases, listed in Table 2.
4.3.1 Psychology research
One of the best known and most impactful biases is the confirmation bias, according to which people tend to favor evidence that confirms an initial hypothesis while subconsciously ignoring disconfirming evidence [19]. As Nickerson
puts it, “[the bias] appears to be sufficiently strong and pervasive
that one is led to wonder whether the bias, by itself, might
account for a significant fraction of the disputes, altercations, and
misunderstandings that occur among individuals, groups, and
nations.” [96]. Related biases in this category are the illusory
truth effect, according to which people consider a proposition
as true after repeated exposure to it [97]; the congruence bias
where people test if a hypothesis is true without considering
alternative hypotheses [98]; and the illusory correlation bias
when people consider a relationship between variables that
does not exist [99].
Scientists themselves are subject to biases in hypothesis
assessment. For example, according to studies on the ex-
perimenter effect, experimenters can subconsciously influence
participants to behave in a way that confirms their experi-
mental hypotheses [100].
4.3.2 Visualization research
Hypothesis assessment tasks are common in data analy-
sis and reasoning using visualization tools, e.g., exploring
whether trucks have more accidents than regular cars; if the
horsepower of a car is correlated to its weight; or if earth
temperatures are increasing. Keim et al. refer to these types of
high-level tasks as confirmatory analysis [101] and Amar and
Stasko [102] characterize them as confirm hypotheses tasks in
their taxonomy.
Although hypothesis assessment biases have been men-
tioned as critical challenges for information visualiza-
tion [9], [10], we are not aware of any empirical study
that tries to assess them. A natural first step would be
to empirically confirm that hypothesis assessment biases
indeed occur while using visualizations.
To mitigate the confirmation bias in particular, several
strategies have been proposed in the psychology literature.
These include the “analysis of competing hypotheses” and
“evidence marshalling” [103]. These methods respectively
encourage analysts to generate multiple hypotheses and to
carefully record evidence confirming or rejecting each of
them before reaching any conclusion. Some software tools
help users follow those methods [104] by facilitating the
recording and the linking of evidence with hypotheses.
These approaches could likely mitigate other biases in this
category (such as the congruence bias and the illusory truth
effect), opening new opportunities for research.
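To make the "analysis of competing hypotheses" idea more concrete, the following is a minimal, hypothetical sketch (the structure and scoring are our own simplification, not the method as specified in [103] or implemented in [104]) of the evidence-by-hypothesis matrix such tools help maintain:

```python
# Simplified, hypothetical ACH-style matrix: each piece of evidence is rated
# against every competing hypothesis; analysts focus on disconfirming evidence
# rather than on evidence that merely confirms a favored hypothesis.
evidence_ratings = {
    # evidence id -> {hypothesis: +1 (consistent) or -1 (inconsistent)}
    "E1": {"H1": +1, "H2": -1},
    "E2": {"H1": -1, "H2": +1},
    "E3": {"H1": +1, "H2": +1},
}

def inconsistency_count(hypothesis):
    """Number of evidence items contradicting the hypothesis (lower is better)."""
    return sum(1 for ratings in evidence_ratings.values() if ratings[hypothesis] < 0)

for h in ("H1", "H2"):
    print(h, inconsistency_count(h))  # both hypotheses have one contradicting item here
```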
As visualization designers, we could consider other design features as possible ways to mitigate hypothesis assessment biases. For example, we could study whether the confirmation and other related biases can be reduced by showing what data has already been examined in our visualizations and what has been ignored [5], or whether they can be reduced when we suggest or require specific analytic workflows to be followed in our tools. While we strive to label all displays clearly, there may be cases where temporarily hiding labels could reduce hypothesis assessment biases by emulating a "blind test" situation. For example, consider an analyst examining evidence about crime or income statistics based on ethnicity. If, at the initial review of income or crime data, the visualization displayed simple labels (such as A, B, and C) instead of the actual ethnicity value, it could remove possible preconceptions about ethnicity and help analysts consider all evidence at their disposal.
Beyond pinpointing opportunities to test for these biases
and to mitigate them in visualizations, we also hope to
increase awareness of the existence of hypothesis assess-
ment biases within our community, since as researchers we
can be prone to them. We hope this awareness will help
visualization researchers themselves adopt methods that are more robust to these biases. For example, confirmation
bias can be reduced by conducting more risky hypothe-
sis tests [105] (e.g., including tasks that might refute our
hypotheses), while strategies also exist to address experi-
menter effects [106].
4.4 Biases in causal attribution tasks
By causal attribution task, we refer to any task involving
an assessment of causality [107]. In social psychology, the
attribution theory studies how people explain the causes of
behavior and events [107]. For example, when interpreting
an event, people can either attribute the event to external
factors (e.g., John had a car accident because the road was
in bad condition) or to internal ones (e.g., John had a car
accident because he is not a good driver).
The CAUSAL ATTRIBUTION category includes all
systematic biases that have been experimentally observed
when participants were asked to provide explanations of
events or behaviors. We identified 12 causal attribution
biases, listed in Table 2.
4.4.1 Psychology research
Attribution biases have been mostly studied in social psy-
chology, so experimental scenarios typically focus on judg-
ments of human behavior. They reveal people’s tendency to
favor themselves over others in the explanations they give.
For example, the egocentric bias suggests that people tend
to overestimate their contribution when asked to explain
why a joint achievement was successful [108]. Similarly,
the self-serving bias suggests that people tend to attribute
success to their own abilities and efforts, but ascribe failure
to external factors [109]. For example, a student can attribute
a good exam grade to their own effort, but a poor one to
external factors such as the poor quality of their teacher or
the unfair questions in the exam. When it comes to failures,
according to the actor-observer bias, people tend to attribute
their own to situational factors, but attribute the failures
of others to personality weaknesses [110]. For example,
they are more likely to attribute a car accident they had
to bad road conditions or other drivers, but attribute the car
accidents of others to their poor driving skills. People also
sometimes tend to attribute others' ambiguous behavior to intentionally negative reasons, e.g., "I see my peers laughing; they may be laughing at me" [111] (hostile attribution bias).
Attribution biases occur not only when people unfairly evaluate their own actions against those of others, but also when they compare the actions of members of their own group (in-group members) against those of people outside it (out-group members). For example, in the
ultimate attribution error, when Hindu and Muslim partic-
ipants were asked to explain undesirable acts performed
by Hindus or Muslims, Hindus attributed external causes
to the acts of fellow Hindus, but internal causes (e.g.,
related to personality) for undesirable acts committed by
Muslims, and vice versa [112]. When judging the actions
of out-group members, people also tend to overgeneralize individual behaviors. For example, in the group attribution error, people tend to generalize decisions made by a group
to individual people (e.g., the action of a whole nation is
also the preference of an individual citizen) [113].
4.4.2 Visualization research
Causal attribution tasks can also be common when using
visualizations. Such tasks are explicitly identified by Amar
and Stasko [102] as formulate cause and effect tasks in their
taxonomy. In principle, any analytic task can involve causal
attribution when users try to explain why a phenomenon
occurs while exploring their data, like trying to explain
peaks or outliers. For example, an analyst may be trying to determine “Why are there more mass killings in the US than in other countries?” or “What caused the recent decrease in road fatalities in France?”
Data analytic activities include describing patterns in
data, but can also include prescribing decisive steps based
on those patterns, often relying on the user’s internal causal
model of what factors affect what outcomes in the data.
Some visual tools already exist to help conduct such analyses (e.g., cause and effect analysis diagrams [114]), but
further study of their effectiveness is needed.
Although causal attribution biases have not been the
subject of substantial work in visualization, the area is
potentially ripe for research. Past studies have identified
situations where analysts reached wrong causal conclusions
based on the existence of correlations. For example, when
it was observed that hormone replacement therapy (HRT)
patients also had a lower-than-average incidence of coro-
nary heart disease, doctors proposed that HRT protected
against heart disease; a later analysis indicated that the association more likely arose because HRT patients came from higher socio-economic groups and thus followed better-than-average diet and exercise regimens [115]. It might be fruitful to study visualization designs that minimize false causal attributions when correlations are present.
Another possible research direction comes from anec-
dotal data that dashboard users are more likely to make
unwarranted causal claims from visualized data. Upon see-
ing a graph showing that drivers using a new GPS system get in more accidents (2 percent per year) compared to users of older systems (1 percent per year), viewers are anecdotally more likely to incorrectly posit that the new GPS device leads to more accidents [116]. To draw the correct
conclusions, viewers have to consider pre-existing driver
behavior. In this case, most safe drivers had not used the
new system. It was mostly risk-taking drivers (that tend to
have more accidents anyway) that had decided to use the
new GPS system, thus inflating the number of accidents for
the new system. In fact, the new system improved safety
for both driver categories when considered separately. It
is important to collect empirical data to demonstrate the existence of these types of potentially critical errors, which stem from the choice of data combinations to visualize.
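To make this kind of aggregation reversal concrete, the following minimal sketch (in Python, with hypothetical counts rather than the actual figures behind [116]) shows how a new system can look worse in an aggregate view while being safer within each driver group when the groups are considered separately.

# Hypothetical records: (driver group, GPS type, number of drivers, accidents per year).
# Safe drivers mostly kept the old system; risk-taking drivers mostly adopted the new one.
records = [
    ("safe",  "old", 900,  5),
    ("safe",  "new", 100,  0),
    ("risky", "old", 100,  5),
    ("risky", "new", 900, 30),
]

def accident_rate(rows):
    drivers = sum(n for _, _, n, _ in rows)
    accidents = sum(a for _, _, _, a in rows)
    return accidents / drivers

# Aggregate view (what a naive dashboard might show): the new system looks worse.
for gps in ("old", "new"):
    rate = accident_rate([r for r in records if r[1] == gps])
    print(f"all drivers, {gps} GPS: {rate:.1%}")

# Per-group view: within each driver group, the new system is actually safer.
for group in ("safe", "risky"):
    for gps in ("old", "new"):
        rate = accident_rate([r for r in records if r[0] == group and r[1] == gps])
        print(f"{group} drivers, {gps} GPS: {rate:.1%}")

Which of these two views a dashboard designer chooses to visualize largely determines whether viewers are nudged toward the wrong causal conclusion.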
As we saw in the previous section, another side effect of faulty causal attribution is that people cannot properly monitor the result of a joint action (for example, neglecting
or undervaluing the contributions of others). Effective collaboration, though, can be essential in visual data analysis, for example when multiple investigators are monitoring different suspicious individuals as a dangerous situation evolves in real time. It would be interesting to investigate whether adding visualizations that demonstrate colleagues’ activity can promote a more balanced appreciation of others’ work.
4.5 Biases in recall tasks
The RECALL category includes all systematic biases that
have been experimentally observed when participants were
asked to recall or recognize previous material. We identified
39 recall biases, listed in Table 2.
4.5.1 Psychology research
Memories are not copies of past experiences, but are instead
reconstructed at the time of recall [23].
This means that post-event information can change a remembered event, a phenomenon known as the misinformation effect [117]. More
generally, people tend to better recall visual representations
over words [118], auditory information over visual infor-
mation [119], self-generated content over read content [120],
pleasant over unpleasant emotions [121], interrupted tasks
over completed ones [122], humorous [123] or bizarre items
[124], and information that took more work to comprehend [125], while they tend to forget information that is easy to find through a search engine (known as the Google effect) [126]. People can also mistake the ideas of others for their own original thoughts [127], which can be an unintentional cause of plagiarism. Conversely, people consider
some imaginary events as real [128], a phenomenon often
observed in criminal witness interviews after misleading
suggestions [129].
4.5.2 Visualization research
In theory, using visualizations should help a viewer over-
come limited (and biased) memory recall processes, by
supplementing memory with unbiased views of all relevant
information. But in complex datasets, not all information could be displayed, and even if it could, it could not all be processed by the viewer. In addition, the viewer would still
rely on a limited and biased memory system to link viewed
data points with examples, context, and emotions. So visu-
alizations have the ability to decrease, but not to eliminate,
biased recall processes. One solution to this problem may
be annotation systems that guide an observer to record
observations and judgments for subsets of viewed data, and then later organize them and provide automated comparisons of the user’s preferences.
Some properties of visualizations can make them more
memorable. Including real-world images or objects or mak-
ing visualizations distinct from others can lead participants
to better recall that they saw those visualizations (e.g.,
[130]). Some work is beginning to additionally show that
memory can be improved for the data patterns depicted
in visualizations. Converting a traditional bar graph to a
stack of iconic pictures of objects (e.g., depicting a number
of baseball games as a stack of baseballs) can improve short-
term memory for depicted information [131]. Linking data
patterns to real-world objects (e.g., noting that an uptick in
global temperatures looks like a hockey stick) can improve long-term memory for those patterns, when tested weeks later
[132]. An automated system that gives suggestions for data
shape mnemonics could bolster limited human memory.
4.6 Biases in opinion reporting tasks
The OPINION REPORTING category includes all sys-
tematic biases that have been experimentally observed
when participants were asked to answer questions regard-
ing their beliefs or opinions on political, moral, or social
issues. We identified 21 opinion reporting biases, listed in
Table 2.
Even though participants’ opinions can play a role in
much of the other bias categories, in the OPINION RE-
PORTING category the task is to explicitly report this
opinion (e.g., Americans are smart). In contrast, in the
CAUSAL ATTRIBUTION category, the task is to explain a phenomenon (e.g., the US enjoys economic growth because Americans are smart). In the HYPOTHESIS ASSESSMENT category, the goal is to investigate whether a statement is true or false (e.g., according to these data, i.e., US IQ scores, articles, or facts, are Americans smart or not?). In the
ESTIMATION category, the goal is to assess a quantity or
predict an outcome (e.g., the US will likely grow, because
Americans are smart). OPINION REPORTING biases dif-
fer from other categories, as people who have certain beliefs
will not necessarily reason or predict the future based on
these beliefs.
4.6.1 Psychology research
According to the bandwagon effect, people’s reported beliefs
on issues such as abortion can change according to the ma-
jority opinion [133]. Yet, people tend to believe that others
are more biased (naive cynicism) [134] and more affected by
mass media propaganda (third-person effect) [135], compared
to themselves. People also tend to generalize some char-
acteristics from a member of a group (e.g., race, ethnicity,
gender, age) to the entire group, often ignoring conflicting
evidence (stereotyping) [136]. Finally, people tend to assign
moral blame depending on outcomes, not on actions (moral
luck) – for example, not wearing a seatbelt is seen as more irresponsible if an accident happens [137].
4.6.2 Visualization research
The OPINION REPORTING category is inherently linked
to people’s attitudes, moral beliefs, and behavior, rather than to biases observed in more general analytical tasks, and may thus be less useful to visualization researchers. However, similarly
to all other bias categories, the possible connection of such
errors to visualization systems is an unexplored topic, in
particular when it comes to bias alleviation.
4.7 Other tasks
The last OTHER category includes all systematic biases
that have been experimentally observed without being tied
to any of the tasks discussed previously. We identified 5
other task biases, listed in Table 2.
Several biases in this category involve observing behav-
ior rather than assessing responses. For example, according
to the unit bias, people tend to eat more food in bigger con-
tainers [138]. Another example is the tendency of investors
to monitor their portfolios less frequently when they show
negative information [139] (the ostrich effect). People also
tend to develop more risky behavior once their perceived
safety increases, e.g., to drive faster with a car with better
airbags [140] (risk compensation).
Although these biases are not tied to a specific large
category of tasks, they may be relevant for visualization
design. For example, the unit bias might be relevant for
visual judgments of quantity, e.g. increased white space
around a collection of scatterplot points, due to axis scaling
choices, might affect judgments of how many data points
are present. The ostrich effect would be relevant for any data or analysis display where viewers might downplay or ignore information that they consider negative, and suggests that automated systems could highlight this information to counteract the bias. Biases similar to risk compensation might arise when a viewer is considering how to set thresholds for data values to appear in a view, and changes those thresholds based on new, but irrelevant, information about other parameters.
5 DISCUSSION
5.1 Benefits of a task-based approach
The proposed taxonomy is organized by task, as opposed to previous efforts that were based on often untested, hard-to-grasp, and even conflicting explanations of why a cognitive bias occurs. We believe that this organization will make it
easier for visualization researchers to find out which biases
may be relevant to their system or research area. It assumes
that a task analysis has been performed (which is a standard
user interface design practice), rather than requiring visual-
ization researchers to guess which inner cognitive processes
users may have to follow.
Moreover, this new task-based classification of cognitive
biases may reveal new patterns by presenting biases from a
different angle. For example, similarities between tasks may
reveal biases with the same root.
Finally, our taxonomy preserves the pointers to the orig-
inal experiments, which may help visualization researchers
conduct new evaluations using methodologies that are well-
established in other fields.
It is likely that additional biases will be identified and
the list of biases will have to be further expanded. To
our knowledge, this taxonomy is by far the largest in the
literature and includes biases studied in different research
domains (e.g., psychology, consumer research, sociology).
5.2 Visualization tasks
We derived our categories of biases from an analysis of
experimental tasks used to detect those biases. These cat-
egories therefore capture tasks that do not necessarily align
with visualization tasks that are described in visualization
taxonomies (e.g., look-up, explore, identify, compare [44])
and used in empirical visualization work [141], [142]. Such
tasks tend to be lower-level but can be building blocks to
many of the higher-level tasks in our taxonomy (e.g., iden-
tifying and comparing options before making a DECISION). However, as seen in our categories, some of our tasks
are indeed shared with visualization taxonomies, such as
hypothesis assessment or cause and effect formulation [102].
Nevertheless, all tasks we identified are highly rele-
vant to the goals of visualization systems and studies.
For example, users of decision-support systems often have
to make choices (e.g., multi-attribute choices [4]), and the
DECISION category reveals the biases that are likely
to be a factor when users perform such tasks. Similarly,
visualization researchers interested in the memorability of
visualization designs [130], [143] can focus on RECALL
biases. Researchers who study confirmatory analysis tasks
[101] could start with the HYPOTHESIS ASSESSMENT
category and researchers working on uncertainty visualiza-
tion [9] may want to focus on the ESTIMATION category.
5.3 Opportunities for future research
There are so few studies of cognitive biases in visualization
that the topic offers many opportunities for future visual-
ization research. Researchers can draw from the rich set of
cognitive biases provided in Table 2 by choosing a bias, testing whether the bias persists when standard visualizations are provided, and if so, investigating whether the bias can be alleviated by using improved designs [91].
Previous visualization research provides examples of methodological approaches for studying cognitive biases in an information visualization context and sometimes discusses pitfalls. For example, Micallef et al. [59] recommend that when an experiment includes a task whose answer can be calculated numerically, researchers should i) use a continuous error metric rather than a dichotomous “correct/incorrect” metric, and ii) include conditions where no numeral is provided, in order to force participants to derive the answer from the provided visualization. Another pitfall consists of
not presenting the same information in all conditions [60]:
in order to demonstrate that a visualization can alleviate a
cognitive bias, it is crucial to ascertain that the improvement
over the baseline is due to the visualization itself, and not
differences in the information presented.
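As an illustration of the first pitfall, the following minimal sketch (in Python, with hypothetical numbers rather than data from [59]) contrasts a dichotomous score with a continuous error metric for a task whose answer can be calculated numerically.

# Hypothetical task with a numerically computable answer (e.g., a posterior probability).
correct_answer = 0.078                          # assumed ground truth (7.8%)
participant_answers = [0.08, 0.75, 0.10, 0.078]

# Dichotomous metric: near-misses and wild errors are scored identically.
dichotomous = [abs(a - correct_answer) < 0.001 for a in participant_answers]

# Continuous metric: absolute error preserves how far off each answer is.
continuous = [round(abs(a - correct_answer), 3) for a in participant_answers]

print(dichotomous)  # [False, False, False, True]
print(continuous)   # [0.002, 0.672, 0.022, 0.0]

Only the continuous version distinguishes an answer that is nearly correct from one that is wildly off, which matters when the goal is to measure the degree of a bias rather than its mere presence.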
Studying cognitive biases in an information visualization
context also provides opportunities to extend methods and
results from psychology. For example, the attraction effect is a decision bias that had only been defined for three alternatives presented in numerical tables. In a visualization study, Dimara
et al. [6] extended the definition of the attraction effect to
more than three alternatives and proposed a procedure for
constructing a stimuli dataset.
The psychology literature often suggests ways a bias
could be alleviated, and some of the strategies may be
applied to visualization experiments [91]. Since alleviation
strategies are bias-specific [91], it is impossible to cover them
all in this article. As previous work has illustrated [26], [67],
[91], each bias needs its own survey of the literature. We
hope our taxonomy will facilitate such surveys by providing
references as starting points.
5.4 Limitations
154 biases is a lot, but it is likely that more will be discovered. While all biases are listed and classified in Table 2, the paper itself could only discuss a subset of the
biases. We focused our discussion on the biases that we felt were most important to visualization, most well-established in psychology, and most clearly reflected a rationality violation.
Furthermore, each bias was assigned a single category.
Our methodology for creating the taxonomy leaves open the
possibility that the same cognitive bias exists in more than
one task type, across different studies. Even though most
academic studies tend to consistently replicate the same
tasks, this concern is indeed a possibility, but not necessarily
a limitation. The assumption behind the classification of our
taxonomy is that different user tasks should be approached
differently by researchers. A good example of such a case
where the same bias has been observed in two different
tasks exists in the literature on the attraction effect. The
attraction effect has been massively replicated as a decision
task among three commercial products. Some papers exist
that tested the attraction effect in visual judgments tasks,
such as identifying which of the two rectangles is bigger
[144] or finding similarities in circle and line pairs [145].
Even though these cases appear similar to the attraction
effect and likely have similar roots, it is best if they are
approached as perceptual biases, since people mainly fail
to encode the visual property of an object.
Also, the initial coding and sorting was conducted by
a single person. This was mitigated by having multiple
reviews and an iterative process involving all five authors.
To keep the overall number of citations manageable, the search for a representative paper stopped at the first one that satisfied the source eligibility requirements and was not exhaustive. Our taxonomy is a starting point, but studying a specific bias in depth requires a separate literature review.
Finally, a cognitive bias assumes by definition a “devia-
tion from reality”, a notion that is complex and controver-
sial. We still do not have a definitive proof that the known
cognitive biases actually reflect irrationality. Therefore, In-
foVis researchers should attempt to verify in their studies
that erroneous responses really reflect irrationality, and not
some optimal strategy based on alternative interpretations
of the task. We also encourage visualization researchers to
remain updated about the current debates in cognitive bias
research surrounding the concept of irrationality.
6 CONCLUSION
This paper classified 154 cognitive biases, cases where peo-
ple systematically and involuntarily deviate from what is
expected to be a rational “reality”. For example, their deci-
sions are often influenced by reasons irrelevant to the objec-
tive qualities of the decision alternatives. Our classification
is task-based (when the bias occurs), rather than explanatory
(why it occurs), to help visualization researchers identify
possible biases that could affect their visualization tasks.
Cognitive biases are often mentioned as important in
the visualization literature [9]. Some works indeed discuss
cognitive biases in the context of visualizations [59], [94], but
they do not provide evidence of detecting or alleviating these biases when using visualizations. In our review we only
found one empirical study [91] that alleviates a cognitive
bias using visualization designs. More generally, it seems
that there are very few visualization studies (e.g., [6]) that
even provide evidence for the existence of cognitive biases
in visualizations. We believe this space provides ample
opportunities for research in visualization, and we hope the
directions we suggest in the different bias categories will
inspire future work.
ACKNOWLEDGMENTS
We thank G. Bailly and E. Lee for their precious feedback.
REFERENCES
[1] T. Munzner, Visualization Analysis and Design. CRC Press, 2014.
[2] D. Kahneman, Thinking, fast and slow. Macmillan, 2011.
[3] X. Chen, S. D. Starke, C. Baber, and A. Howes, “A cognitive
model of how people make decisions through interaction with
visual displays,” in Proceedings of the 2017 CHI Conference on
Human Factors in Computing Systems. ACM, 2017, pp. 1205–1216.
[4] E. Dimara, A. Bezerianos, and P. Dragicevic, “Conceptual and
methodological issues in evaluating multidimensional visualiza-
tions for decision support,” IEEE Transactions on Visualization and
Computer Graphics, 2018.
[5] E. Wall, L. M. Blaha, L. Franklin, and A. Endert, “Warning, bias
may occur: A proposed approach to detecting cognitive bias in
interactive visual analytics,” in IEEE Conference on Visual Analytics
Science and Technology (VAST), 2017.
[6] E. Dimara, A. Bezerianos, and P. Dragicevic, “The attraction effect
in information visualization,” IEEE Transactions on Visualization
and Computer Graphics, vol. 23, no. 1, pp. 471–480, 2017.
[7] I. Cho, R. Wesslen, A. Karduni, S. Santhanam, S. Shaikh, and
W. Dou, “The anchoring effect in decision-making with visual
analytics,” IEEE Transactions on Visualization and Computer Graph-
ics, 2018.
[8] “Decisive 2017 dealing with cognitive biases in visualisations
: a vis 2017 workshop,” http://decisive-workshop.dbvis.de/,
accessed: 2017-08-01.
[9] T. Zuk and S. Carpendale, “Visualization of uncertainty and rea-
soning,” in International Symposium on Smart Graphics. Springer,
2007, pp. 164–177.
[10] G. Ellis and A. Dix, “Decision making under uncertainty in
visualisation?” in IEEE VIS2015, 2015.
[11] J. Von Neumann and O. Morgenstern, Theory of games and eco-
nomic behavior. Princeton university press, 2007.
[12] L. J. Savage, “The foundations of statistics,” NY, John Wiley, pp. 188–190, 1954.
[13] P. C. Fishburn, “Ssb utility theory and decision-making under
uncertainty,” Mathematical social sciences, vol. 8, no. 3, pp. 253–
285, 1984.
[14] U. S. Karmarkar, “Subjectively weighted utility: A descriptive
extension of the expected utility model,” Organizational behavior
and human performance, vol. 21, no. 1, pp. 61–72, 1978.
[15] J. W. Payne, “Alternative approaches to decision making under
risk,” Psychological Bulletin, vol. 80, no. 6, pp. 439–453, 1973.
[16] C. Coombs, “Portfolio theory and the measurement of risk,” in M. F. Kaplan and S. Schwartz (Eds.), Human Judgment and Decision Processes, 1975.
[17] A. Tversky and D. Kahneman, “Judgment under uncertainty:
Heuristics and biases,” in Utility, probability, and human decision
making. Springer, 1975, pp. 141–162.
[18] ——, “The framing of decisions and the psychology of choice.”
Science, 1981.
[19] M. J. Mahoney, “Publication prejudices: An experimental study
of confirmatory bias in the peer review system,” Cognitive therapy
and research, vol. 1, no. 2, pp. 161–175, 1977.
[20] R. F. Pohl, Cognitive illusions: Intriguing Phenomena in thinking,
judgement and memory. Psychology Press, 2016.
[21] M. L. Graber, S. Kissam, V. L. Payne, A. N. Meyer, A. Sorensen,
N. Lenfestey, E. Tant, K. Henriksen, K. LaBresh, and H. Singh,
“Cognitive interventions to reduce diagnostic error: a narrative
review,” BMJ Quality & Safety, pp. bmjqs–2011, 2012.
[22] B. Fischhoff, “Debiasing,” in Judgment Under Uncertainty: Heuristics and Biases, D. Kahneman, P. Slovic, and A. Tversky, Eds. Cambridge University Press, 1982.
[23] S. Plous, The psychology of judgment and decision making. Mcgraw-
Hill Book Company, 1993.
[24] D. Kahneman and A. Tversky, “On the reality of cognitive
illusions.” 1996.
[25] G. Gigerenzer and H. Brighton, “Homo heuristicus: Why biased
minds make better inferences,” Topics in cognitive science, vol. 1,
no. 1, pp. 107–143, 2009.
[26] E. Dimara, P. Dragicevic, and A. Bezerianos, “Accounting for
availability biases in information visualization,” in DECISIVe:
Workshop on Dealing with Cognitive Biases in Visualizations. IEEE
VIS., 2014.
[27] J. Baron, Thinking and deciding. Cambridge University Press,
2000.
[28] J. Evans, Hypothetical thinking: Dual processes in reasoning and
judgement (Essays in Cognitive Psychology). Psychology Press,
2007.
[29] K. Stanovich, Rationality and the Reflective Mind. Oxford Univer-
sity Press, 2011.
[30] T. Gilovich, D. Griffin, and D. Kahneman, Eds., Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, 2002.
[31] L. M. Padilla, S. H. Creem-Regehr, M. Hegarty, and J. K.
Stefanucci, “Decision making with visualizations: a cognitive
framework across disciplines,” Cognitive Research: Principles and
Implications, vol. 3, no. 29, 2018.
[32] J. S. B. T. Evans, “Questions and challenges for the new psychol-
ogy of reasoning,” Thinking & Reasoning, vol. 18, no. 1, pp. 5–31,
2012.
[33] D. Kahneman and S. Frederick, “Representativeness revisited:
Attribute substitution in intuitive judgment,” in Heuristics and
Biases: The Psychology of Intuitive Judgment, T. Gilovich, D. Griffin, and D. Kahneman, Eds. Cambridge University Press, 2002.
[34] W. E. Remus and J. E. Kottemann, “Toward intelligent deci-
sion support systems: An artificially intelligent statistician,” MIS
Quarterly, pp. 403–418, 1986.
[35] D. Arnott, “Cognitive biases and decision support systems devel-
opment: a design science approach,” Information Systems Journal,
vol. 16, no. 1, pp. 55–78, 2006.
[36] R. J. Heuer, Psychology of intelligence analysis. Lulu. com, 1999.
[37] Wikipedia, “List of cognitive biases — wikipedia, the free
encyclopedia,” 2017, [Online; accessed 23-July-2017 ]. [Online].
Available: https://en.wikipedia.org/w/index.php?title=List_of_cognitive_biases&oldid=791032058
[38] ——, “List of cognitive biases — wikipedia, the free
encyclopedia,” 2017, [Online; accessed 20-November-2017
]. [Online]. Available: https://en.wikipedia.org/w/index.php?title=List_of_cognitive_biases&oldid=811316852
[39] J. Wood, D. Badawood, J. Dykes, and A. Slingsby, “Ballotmaps:
Detecting name bias in alphabetically ordered ballot papers,”
Visualization and Computer Graphics, IEEE Transactions on, vol. 17,
no. 12, pp. 2384–2391, 2011.
[40] J. C. Pettibone and D. H. Wedell, “Examining models of nondom-
inated decoy effects across judgment and choice,” Organizational
Behavior and Human Decision Processes, vol. 81, no. 2, pp. 300 –
328, 2000.
[41] I. Simonson, “Choice based on reasons: The case of attraction and
compromise effects,” Journal of Consumer Research, vol. 16, no. 2,
pp. 158–174, 1989.
[42] H. Choo and S. Franconeri, “Enumeration of small collections
violates Weber’s law,” Psychonomic bulletin & review, vol. 21, no. 1,
pp. 93–99, 2014.
[43] J. R. Wood and L. E. Wood, “Card sorting: current practices and
beyond,” Journal of Usability Studies, vol. 4, no. 1, pp. 1–6, 2008.
[44] M. Brehmer and T. Munzner, “A multi-level typology of abstract
visualization tasks,” IEEE Transactions on Visualization and Com-
puter Graphics, vol. 19, no. 12, pp. 2376–2385, Dec 2013.
[45] R. A. Amar and J. T. Stasko, “Knowledge precepts for design
and evaluation of information visualizations,” IEEE Transactions
on Visualization and Computer Graphics, vol. 11, no. 4, pp. 432–442,
2005.
[46] A. K. Barbey and S. A. Sloman, “Base-rate respect: From ecolog-
ical rationality to dual processes,” Behavioral and Brain Sciences,
vol. 30, no. 03, pp. 241–254, 2007.
[47] A. Tversky and D. Kahneman, “Extensional versus intuitive
reasoning: The conjunction fallacy in probability judgment.”
Psychological review, vol. 90, no. 4, p. 293, 1983.
[48] L. D. Phillips and W. Edwards, “Conservatism in a simple prob-
ability inference task.” Journal of experimental psychology, vol. 72,
no. 3, p. 346, 1966.
[49] H. M. Johnson and C. M. Seifert, “Sources of the continued
influence effect: When misinformation in memory affects later
inferences.” Journal of Experimental Psychology: Learning, Memory,
and Cognition, vol. 20, no. 6, p. 1420, 1994.
[50] A. Furnham and H. C. Boo, “A literature review of the anchoring
effect,” The Journal of Socio-Economics, vol. 40, no. 1, pp. 35–42,
2011.
[51] N. D. Weinstein, “Unrealistic optimism about future life events.”
Journal of personality and social psychology, vol. 39, no. 5, p. 806,
1980.
[52] A. Tversky and D. Kahneman, “Availability: A heuristic for
judging frequency and probability,” Cognitive psychology, vol. 5,
no. 2, pp. 207–232, 1973.
[53] R. Buehler, D. Griffin, and J. Peetz, “The planning fallacy: cog-
nitive, motivational, and social origins,” Advances in experimental
social psychology, vol. 43, pp. 1–62, 2010.
[54] G. Gigerenzer and D. G. Goldstein, “Reasoning the fast and
frugal way: models of bounded rationality.” Psychological review,
vol. 103, no. 4, p. 650, 1996.
[55] J. Klayman, J. B. Soll, C. González-Vallejo, and S. Barlas, “Over-
confidence: It depends on how, what, and whom you ask,”
Organizational behavior and human decision processes, vol. 79, no. 3,
pp. 216–247, 1999.
[56] S. Lichtenstein and B. Fischhoff, “Do those who know more also
know more about how much they know?” Organizational behavior
and human performance, vol. 20, no. 2, pp. 159–183, 1977.
[57] J. Kruger and D. Dunning, “Unskilled and unaware of it: how dif-
ficulties in recognizing one’s own incompetence lead to inflated
self-assessments.” J. Pers. Soc. Psychol., vol. 77, no. 6, p. 1121, 1999.
[58] A. Ottley, E. M. Peck, L. T. Harrison, D. Afergan, C. Ziemkiewicz,
H. A. Taylor, P. K. Han, and R. Chang, “Improving bayesian rea-
soning: the effects of phrasing, visualization, and spatial ability,”
IEEE transactions on visualization and computer graphics, vol. 22,
no. 1, pp. 529–538, 2016.
[59] L. Micallef, P. Dragicevic, and J.-D. Fekete, “Assessing the effect
of visualizations on bayesian reasoning through crowdsourcing,”
IEEE Transactions on Visualization and Computer Graphics, vol. 18,
no. 12, pp. 2536–2545, 2012.
[60] A. Khan, S. Breslav, M. Glueck, and K. Hornbæk, “Benefits
of visualization in the mammography problem,” International
Journal of Human-Computer Studies, vol. 83, pp. 94–113, 2015.
[61] A. Khan, S. Breslav, and K. Hornbæk, “Interactive instruction in
bayesian inference,” Human–Computer Interaction, pp. 1–27, 2016.
[62] M. Correll and M. Gleicher, “Error bars considered harmful:
Exploring alternate encodings for mean and error,” Visualization
and Computer Graphics, IEEE Transactions on, vol. 20, no. 12, pp.
2142–2151, 2014.
[63] M. Correll and J. Heer, “Surprise! bayesian weighting for de-
biasing thematic maps,” IEEE transactions on visualization and
computer graphics, vol. 23, no. 1, pp. 651–660, 2017.
[64] A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and anchoring
effects in visualizations,” IEEE Transactions on Visualization and
Computer Graphics, 2017.
[65] C. Xiong, L. van Weelden, and S. Franconeri, “The curse of
knowledge in visual data communication,” in DECISIVe: Work-
shop on Dealing with Cognitive Biases in Visualizations. IEEE VIS.,
2017.
[66] B. Keysar and A. S. Henly, “Speakers’ overestimation of their
effectiveness,” Psychological Science, vol. 13, no. 3, pp. 207–212,
2002.
[67] P. Dragicevic and Y. Jansen, “Visualization-mediated alleviation
of the planning fallacy,” in DECISIVe: Workshop on Dealing with
Cognitive Biases in Visualizations. IEEE VIS., 2014.
[68] F. Du, C. Plaisant, N. Spring, and B. Shneiderman, “Finding
similar people to guide life choices: Challenge, design, and
evaluation,” in Proceedings of the 2017 CHI Conference on Human
Factors in Computing Systems. ACM, 2017, pp. 5498–5544.
[69] A. O’Hagan, C. E. Buck, A. Daneshkhah, J. R. Eiser, P. H.
Garthwaite, D. J. Jenkinson, J. E. Oakley, and T. Rakow, “4.3: The
calibration of subjective probabilities: theories and explanations,”
in Uncertain judgements: eliciting experts’ probabilities. John Wiley
& Sons, 2006.
[70] J. I. Sanders, B. Zs Hangya, and A. Kepecs, “Signatures of a
Statistical Computation in the Human Sense of Confidence,”
2016.
[71] C. Authors, Psychology Today: An Introduction (Second Edition).
Comunication Research Machines (CRM), 1972.
[72] I. Ritov and J. Baron, “Reluctance to vaccinate: Omission bias and
ambiguity,” Journal of Behavioral Decision Making, vol. 3, no. 4, pp.
263–277, 1990.
[73] J. Baron, R. Gowda, and H. Kunreuther, “Attitudes toward man-
aging hazardous waste: What should be cleaned up and who
should pay for it?” Risk Analysis, vol. 13, no. 2, pp. 183–192, 1993.
[74] R. H. Thaler, A. Tversky, D. Kahneman, and A. Schwartz, “The
effect of myopia and loss aversion on risk taking: An experimen-
tal test,” The Quarterly Journal of Economics, vol. 112, no. 2, pp.
647–661, 1997.
[75] J. Huber, J. W. Payne, and C. Puto, “Adding asymmetrically dom-
inated alternatives: Violations of regularity and the similarity
hypothesis,” Journal of Consumer Research, vol. 9, no. 1, pp. 90–
98, 1982.
[76] C. K. Hsee, “Less is better: When low-value options are val-
ued more highly than high-value options,” Journal of Behavioral
Decision Making, vol. 11, pp. 107–121, 1998.
[77] R. B. Zajonc, “Mere exposure: A gateway to the subliminal,”
Current directions in psychological science, vol. 10, no. 6, pp. 224–
228, 2001.
[78] R. Thaler, “Some empirical evidence on dynamic inconsistency,”
Economics letters, vol. 8, no. 3, pp. 201–207, 1981.
[79] M. I. Norton, D. Mochon, and D. Ariely, “The ikea effect: When
labor leads to love,” Journal of Consumer Psychology, vol. 22, no. 3,
pp. 453–460, 2012.
[80] C. K. Morewedge and C. E. Giblin, “Explanations of the endow-
ment effect: an integrative review,” Trends in cognitive sciences,
vol. 19, no. 6, pp. 339–348, 2015.
[81] W. Samuelson and R. Zeckhauser, “Status quo bias in decision
making,” Journal of risk and uncertainty, vol. 1, no. 1, pp. 7–59,
1988.
[82] S. Gratzl, A. Lex, N. Gehlenborg, H. Pfister, and M. Streit,
“Lineup: Visual analysis of multi-attribute rankings,” IEEE trans-
actions on visualization and computer graphics, vol. 19, no. 12, pp.
2277–2286, 2013.
[83] G. Carenini and J. Loyd, “Valuecharts: analyzing linear models
expressing preferences and evaluations,” in Proceedings of the
working conference on Advanced visual interfaces. ACM, 2004, pp.
150–157.
[84] S. Pajer, M. Streit, T. Torsney-Weir, F. Spechtenhauser, T. Muller,
and H. Piringer, “Weightlifter: Visual weight space exploration
for multi-criteria decision making.” IEEE transactions on visualiza-
tion and computer graphics, vol. 23, no. 1, p. 611, 2017.
[85] T. Asahi, D. Turo, and B. Shneiderman, “Using treemaps to visu-
alize the analytic hierarchy process,” Information Systems Research,
vol. 6, no. 4, pp. 357–375, 1995.
[86] S. Rudolph, A. Savikhin, and D. S. Ebert, “Finvis: Applied visual
analytics for personal financial planning,” in Visual Analytics
Science and Technology, 2009. VAST 2009. IEEE Symposium on.
IEEE, 2009, pp. 195–202.
[87] B. A. Aseniero, T. Wun, D. Ledo, G. Ruhe, A. Tang, and S. Carpen-
dale, “Stratos: Using visualization to support decisions in strate-
gic software release planning,” in Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing Systems. ACM,
2015, pp. 1479–1488.
[88] A. Dasgupta, J. Poco, Y. Wei, R. Cook, E. Bertini, and C. T.
Silva, “Bridging theory with practice: An exploratory study of
visualization use and design for climate model comparison,”
IEEE transactions on visualization and computer graphics, vol. 21,
no. 9, pp. 996–1014, 2015.
[89] J. Hullman, P. Resnick, and E. Adar, “Hypothetical outcome
plots outperform error bars and violin plots for inferences about
reliability of variable ordering,” PloS one, vol. 10, no. 11, p.
e0142444, 2015.
[90] M. Fernandes, L. Walls, S. Munson, J. Hullman, and M. Kay,
“Uncertainty displays using quantile dotplots or cdfs improve
transit decision-making,” in Proceedings of the 2018 CHI Conference
on Human Factors in Computing Systems. ACM, 2018, p. 144.
[91] E. Dimara, G. Bailly, A. Bezerianos, and S. Franconeri, “Mitigat-
ing the attraction effect with visualizations,” IEEE Transactions on
Visualization and Computer Graphics, vol. 25, no. 1, 2019.
[92] Y. Zhang, R. K. Bellamy, and W. A. Kellogg, “Designing infor-
mation for remediating cognitive biases in decision-making,” in
Proceedings of the 33rd annual ACM conference on human factors in
computing systems. ACM, 2015, pp. 2211–2220.
[93] K. Jenni and G. Loewenstein, “Explaining the identifiable victim
effect,” Journal of Risk and Uncertainty, vol. 14, no. 3, pp. 235–257,
1997.
[94] J. Boy, A. V. Pandey, J. Emerson, M. Satterthwaite, O. Nov, and
E. Bertini, “Showing people behind data: Does anthropomorphiz-
ing visualizations elicit more empathy for human rights data?”
in Proceedings of the 2017 CHI Conference on Human Factors in
Computing Systems. ACM, 2017, pp. 5462–5474.
[95] B. B. Murdock Jr, “The serial position effect of free recall.” Journal
of experimental psychology, vol. 64, no. 5, p. 482, 1962.
[96] R. S. Nickerson, “Confirmation bias: A ubiquitous phenomenon
in many guises.” Review of general psychology, vol. 2, no. 2, p. 175,
1998.
[97] L. Hasher, D. Goldstein, and T. Toppino, “Frequency and the
conference of referential validity,” Journal of verbal learning and
verbal behavior, vol. 16, no. 1, pp. 107–112, 1977.
[98] P. C. Wason, “On the failure to eliminate hypotheses in a con-
ceptual task,” Quarterly journal of experimental psychology, vol. 12,
no. 3, pp. 129–140, 1960.
[99] L. J. Chapman and J. P. Chapman, “Illusory correlation as an
obstacle to the use of valid psychodiagnostic signs.” Journal of
abnormal psychology, vol. 74, no. 3, p. 271, 1969.
[100] R. Rosnow and R. Rosenthal, People studying people: Artifacts and
ethics in behavioral research. WH Freeman, 1997.
[101] D. A. Keim, F. Mansmann, J. Schneidewind, and H. Ziegler,
“Challenges in visual data analysis,” in Information Visualization,
2006. IV 2006. International Conference on. IEEE, 2006, pp. 9–16.
[102] R. Amar and J. Stasko, “A Knowledge Task-Based Framework for
Design and Evaluation of Information Visualizations,” in IEEE
Symposium on Information Visualization, 2004, pp. 143–149.
[103] R. J. Heuer, “Analysis of competing hypotheses,” in Psychology of
intelligence analysis. Central Intelligence Agency, 1999, ch. 8.
[104] W. Wright, D. Schroh, P. Proulx, A. Skaburskis, and B. Cort, “The
sandbox for analysis: concepts and methods,” in Proceedings of the
SIGCHI conference on Human Factors in computing systems. ACM,
2006, pp. 801–810.
[105] P. E. Meehl, “Theory-testing in psychology and physics: A
methodological paradox,” Philosophy of science, vol. 34, no. 2, pp.
103–115, 1967.
[106] R. Rosenthal and R. L. Rosnow, Artifacts in Behavioral Research:
Robert Rosenthal and Ralph L. Rosnow’s Classic Books. Oxford
University Press, 2009.
[107] H. H. Kelley, “The processes of causal attribution.” American
psychologist, vol. 28, no. 2, p. 107, 1973.
[108] M. Ross, F. Sicoly et al., “Egocentric biases in availability and
attribution,” Journal of personality and social psychology, vol. 37,
no. 3, pp. 322–336, 1979.
[109] W. K. Campbell and C. Sedikides, “Self-threat magnifies the self-
serving bias,” Review of General Psychology, vol. 3, no. 1, pp. 23–43,
1999.
[110] D. T. Miller and S. A. Norman, “Actor-observer differences in
perceptions of effective control.” J. Pers. Soc. Psychol., vol. 31,
no. 3, p. 503, 1975.
[111] S. Graham, C. Hudley, and E. Williams, “Attributional and
emotional determinants of aggression among african-american
and latino young adolescents.” Developmental Psychology, vol. 28,
no. 4, p. 731, 1992.
[112] T. F. Pettigrew, “The ultimate attribution error: Extending all-
port’s cognitive analysis of prejudice,” Personality and social psy-
chology bulletin, vol. 5, no. 4, pp. 461–476, 1979.
[113] S. T. Allison and D. M. Messick, “The group attribution error,”
Journal of Experimental Social Psychology, vol. 21, no. 6, pp. 563–
579, 1985.
[114] K. Ishikawa, Guide to Quality Control. Asian Productivity Orga-
nization, 1968.
[115] D. A. Lawlor, G. Davey Smith, and S. Ebrahim, “Commentary:
The hormone replacement–coronary heart disease conundrum:
is this the death of observational epidemiology?” International
Journal of Epidemiology, vol. 33, no. 3, pp. 464–467, 2004.
[116] J. Shapiro, “3 ways data dashboards can mislead you,” 2017,
[Online; accessed 7-February-2018 ]. [Online]. Available: https:
//hbr.org/2017/01/3-ways-data-dashboards-can-mislead-you
[117] M. S. Ayers and L. M. Reder, “A theoretical review of the mis-
information effect: Predictions from an activation-based memory
model,” Psychonomic Bulletin & Review, vol. 5, no. 1, pp. 1–21,
1998.
[118] D. M. McBride and B. A. Dosher, “A comparison of conscious
and automatic memory processes for picture and word stimuli: A
process dissociation analysis,” Consciousness and cognition, vol. 11,
no. 3, pp. 423–460, 2002.
[119] P. Ginns, “Meta-analysis of the modality effect,” Learning and
Instruction, vol. 15, no. 4, pp. 313–331, 2005.
[120] N. J. Slamecka and P. Graf, “The generation effect: Delineation
of a phenomenon.” Journal of experimental Psychology: Human
learning and Memory, vol. 4, no. 6, p. 592, 1978.
[121] W. R. Walker and J. J. Skowronski, “The fading affect bias: But
what the hell is it for?” Applied Cognitive Psychology, vol. 23, no. 8,
pp. 1122–1136, 2009.
[122] G. O. Einstein, M. A. McDaniel, C. L. Williford, J. L. Pagan, and
R. Dismukes, “Forgetting of intentions in demanding situations
is rapid.” Journal of Experimental Psychology: Applied, vol. 9, no. 3,
p. 147, 2003.
[123] H. Summerfelt, L. Lippman, and I. E. Hyman Jr, “The effect
of humor on memory: Constrained by the pun,” The Journal of
General Psychology, vol. 137, no. 4, pp. 376–394, 2010.
[124] M. A. McDaniel, G. O. Einstein, E. L. DeLosh, C. P. May, and
P. Brady, “The bizarreness effect: it’s not surprising, it’s com-
plex.” Journal of Experimental Psychology: Learning, Memory, and
Cognition, vol. 21, no. 2, p. 422, 1995.
[125] F. I. Craik and E. Tulving, “Depth of processing and the retention
of words in episodic memory.” Journal of experimental Psychology:
general, vol. 104, no. 3, p. 268, 1975.
[126] B. Sparrow, J. Liu, and D. M. Wegner, “Google effects on memory:
Cognitive consequences of having information at our fingertips,”
science, vol. 333, no. 6043, pp. 776–778, 2011.
[127] A. S. Brown and D. R. Murphy, “Cryptomnesia: Delineating inad-
vertent plagiarism.” Journal of Experimental Psychology: Learning,
Memory, and Cognition, vol. 15, no. 3, p. 432, 1989.
[128] C. J. Brainerd and V. F. Reyna, “Fuzzy-trace theory and false
memory,” Current Directions in Psychological Science, vol. 11, no. 5,
pp. 164–169, 2002.
[129] D. S. Lindsay and M. K. Johnson, “The eyewitness suggestibility
effect and memory for source,” Memory & Cognition, vol. 17, no. 3,
pp. 349–358, 1989.
[130] M. A. Borkin, Z. Bylinskii, N. W. Kim, C. M. Bainbridge, C. S. Yeh,
D. Borkin, H. Pfister, and A. Oliva, “Beyond memorability: Visu-
alization recognition and recall,” IEEE transactions on visualization
and computer graphics, vol. 22, no. 1, pp. 519–528, 2016.
[131] S. Haroz, R. Kosara, and S. L. Franconeri, “Isotype visualiza-
tion: Working memory, performance, and engagement with pic-
tographs,” in Proceedings of the 33rd annual ACM conference on
human factors in computing systems. ACM, 2015, pp. 1191–1200.
[132] S. Bateman, R. L. Mandryk, C. Gutwin, A. Genest, D. McDine,
and C. Brooks, “Useful junk?: the effects of visual embellishment
on comprehension and memorability of charts,” in Proceedings
of the SIGCHI Conference on Human Factors in Computing Systems.
ACM, 2010, pp. 2573–2582.
[133] R. Nadeau, E. Cloutier, and J.-H. Guay, “New evidence about
the existence of a bandwagon effect in the opinion formation
process,” International Political Science Review, vol. 14, no. 2, pp.
203–213, 1993.
[134] J. Kruger and T. Gilovich, “Naive cynicism in everyday theories
of responsibility assessment: On biased assumptions of bias.” J.
Pers. Soc. Psychol., vol. 76, no. 5, p. 743, 1999.
[135] N. Antonopoulos, A. Veglis, A. Gardikiotis, R. Kotsakis, and
G. Kalliris, “Web third-person effect in structural aspects of the
information on media websites,” Computers in Human Behavior,
vol. 44, pp. 48–58, 2015.
[136] J. Correll, B. Park, C. M. Judd, and B. Wittenbrink, “The police
officer’s dilemma: using ethnicity to disambiguate potentially
threatening individuals.” J. Pers. Soc. Psychol., vol. 83, no. 6, p.
1314, 2002.
[137] F. Cushman, “Crime and punishment: Distinguishing the roles
of causal and intentional analyses in moral judgment,” Cognition,
vol. 108, no. 2, pp. 353–380, 2008.
[138] A. B. Geier, P. Rozin, and G. Doros, “Unit bias: A new heuristic
that helps explain the effect of portion size on food intake,”
Psychological Science, vol. 17, no. 6, pp. 521–525, 2006.
[139] N. Karlsson, G. Loewenstein, and D. Seppi, “The ostrich effect:
Selective attention to information,” Journal of Risk and uncertainty,
vol. 38, no. 2, pp. 95–115, 2009.
[140] J. Hedlund, “Risky business: safety regulations, risk compensa-
tion, and individual behavior,” Injury prevention, vol. 6, no. 2, pp.
82–89, 2000.
[141] Y. Kim and J. Heer, “Assessing Effects of Task and Data Distri-
bution on the Effectiveness of Visual Encodings,” in Eurographics
Conference on Visualization (EuroVis), vol. 37, no. 3, 2018.
[142] B. Saket, A. Endert, and Ç. Demiralp, “Task-Based Effec-
tiveness of Basic Visualizations,” IEEE Transactions on Visualiza-
tion and Computer Graphics, 2018.
[143] C. Healey and J. Enns, “Attention and visual memory in visual-
ization and computer graphics,” IEEE transactions on visualization
and computer graphics, vol. 18, no. 7, pp. 1170–1188, 2012.
[144] J. S. Trueblood, S. D. Brown, A. Heathcote, and J. R. Busemeyer,
“Not just for consumers context effects are fundamental to de-
cision making,” Psychological science, vol. 24, no. 6, pp. 901–908,
2013.
[145] J. M. Choplin and J. E. Hummel, “Comparison-induced decoy
effects,” Memory & Cognition, vol. 33, no. 2, pp. 332–343, 2005.
[146] M. Correll and J. Heer, “Regression by eye: Estimating trends in
bivariate visualizations,” in Proceedings of the 2017 CHI Conference
on Human Factors in Computing Systems. ACM, 2017, pp. 1387–
1396.
[147] G. Loewenstein, “Hot-cold empathy gaps and medical decision
making.” Health Psychology, vol. 24, no. 4S, p. S49, 2005.
[148] O. Svenson, “Decisions among time saving options: When intu-
ition is strong and wrong,” Acta Psychologica, vol. 127, no. 2, pp.
501–509, 2008.
[149] A. C. Valdez, M. Ziefle, and M. Sedlmair, “A framework for
studying biases in visualization research,” in DECISIVe: Workshop
on Dealing with Cognitive Biases in Visualizations. IEEE VIS., 2017.
[150] T. Gilovich, R. Vallone, and A. Tversky, “The hot hand in bas-
ketball: On the misperception of random sequences,” Cognitive
psychology, vol. 17, no. 3, pp. 295–314, 1985.
[151] F. Attneave, “Psychological probability as a function of experi-
enced frequency.” Journal of Experimental Psychology, vol. 46, no. 2,
p. 81, 1953.
[152] A. Tversky and D. J. Koehler, “Support theory: A nonextensional
representation of subjective probability.” Psychological review, vol.
101, no. 4, p. 547, 1994.
[153] L. Harrison, F. Yang, S. Franconeri, and R. Chang, “Ranking
visualizations of correlation using weber’s law,” IEEE transactions
on visualization and computer graphics, vol. 20, no. 12, pp. 1943–
1952, 2014.
[154] W. A. Wagenaar and G. B. Keren, “Calibration of probability
assessments by professional blackjack dealers, statistical experts,
and lay people,” Organizational Behavior and Human Decision
Processes, vol. 36, no. 3, pp. 406–416, 1985.
[155] L. J. Sanna and N. Schwarz, “Integrating temporal biases: The
interplay of focal thoughts and accessibility experiences,” Psy-
chological science, vol. 15, no. 7, pp. 474–481, 2004.
[156] J. Baron and J. C. Hershey, “Outcome bias in decision evalua-
tion.” Journal of personality and social psychology, vol. 54, no. 4, p.
569, 1988.
[157] L. F. Nordgren, F. v. Harreveld, and J. v. d. Pligt, “The restraint
bias: How the illusion of self-restraint promotes impulsive be-
havior,” Psychological Science, vol. 20, no. 12, pp. 1523–1528, 2009.
[158] M. G. Haselton, “The sexual overperception bias: Evidence of
a systematic bias in men from a survey of naturally occurring
events,” Journal of Research in Personality, vol. 37, no. 1, pp. 34–47,
2003.
[159] C. Heath, “On the social psychology of agency relationships:
Lay theories of motivation overemphasize extrinsic incentives,”
Organizational behavior and human decision processes, vol. 78, no. 1,
pp. 25–62, 1999.
[160] L. Ross, D. Greene, and P. House, “The ”false consensus ef-
fect”: An egocentric bias in social perception and attribution
processes,” Journal of experimental social psychology, vol. 13, no. 3,
pp. 279–301, 1977.
[161] S. C. Thompson, “Illusions of control: How we overestimate our
personal influence,” Current Directions in Psychological Science,
vol. 8, no. 6, pp. 187–190, 1999.
[162] K. Savitsky and T. Gilovich, “The illusion of transparency and
the alleviation of speech anxiety,” Journal of experimental social
psychology, vol. 39, no. 6, pp. 618–625, 2003.
[163] B. Park and M. Rothbart, “Perception of out-group homogeneity
and levels of social categorization: Memory for the subordinate
attributes of in-group and out-group members.” J. Pers. Soc.
Psychol., vol. 42, no. 6, p. 1051, 1982.
[164] T. Sharot, A. M. Riccardi, C. M. Raio, and E. A. Phelps, “Neural
mechanisms mediating optimism bias,” Nature, vol. 450, no. 7166,
pp. 102–105, 2007.
[165] T. Gilovich, V. H. Medvec, and K. Savitsky, “The spotlight effect
in social judgment: an egocentric bias in estimates of the salience
of one’s own actions and appearance.” J. Pers. Soc. Psychol.,
vol. 78, no. 2, p. 211, 2000.
[166] J. Kruger, “Lake Wobegon be gone! The ‘below-average effect’
and the egocentric nature of comparative ability judgments,” J.
Pers. Soc. Psychol., vol. 77, no. 2, p. 221, 1999.
[167] S. Milgram, “Behavioral study of obedience.” The Journal of
abnormal and social psychology, vol. 67, no. 4, p. 371, 1963.
[168] D. Manzey, J. Reichenbach, and L. Onnasch, “Human perfor-
mance consequences of automated decision aids: The impact of
degree of automation and system experience,” Journal of Cognitive
Engineering and Decision Making, vol. 6, no. 1, pp. 57–87, 2012.
[169] D. Sacha, H. Senaratne, B. C. Kwon, G. Ellis, and D. A. Keim,
“The role of uncertainty, awareness, and trust in visual analytics,”
IEEE transactions on visualization and computer graphics, vol. 22,
no. 1, pp. 240–249, 2016.
[170] C. R. Sunstein, “Probability neglect: Emotions, worst cases, and
law,” The Yale Law Journal, vol. 112, no. 1, pp. 61–107, 2002.
[171] A. Tversky and D. Kahneman, “Rational choice and the framing
of decisions,” Journal of business, pp. S251–S278, 1986.
[172] D. Walker and E. Vul, “Hierarchical encoding makes individuals
in a group seem more attractive,” Psychological Science, vol. 25,
no. 1, pp. 230–235, 2014.
[173] P. Raghubir and J. Srivastava, “The denomination effect,” Journal
of Consumer Research, vol. 36, no. 4, pp. 701–713, 2009.
[174] M. Weber and C. F. Camerer, “The disposition effect in securities
trading: An experimental analysis,” Journal of Economic Behavior
& Organization, vol. 33, no. 2, pp. 167–184, 1998.
[175] C. K. Hsee and J. Zhang, “Distinction bias: misprediction and
mischoice due to joint evaluation.” Journal of personality and social
psychology, vol. 86, no. 5, p. 680, 2004.
[176] E. Shafir, P. Diamond, and A. Tversky, “Money illusion,” The
Quarterly Journal of Economics, vol. 112, no. 2, pp. 341–374, 1997.
[177] B. M. Staw, “Knee-deep in the big muddy: A study of escalating
commitment to a chosen course of action,” Organizational behavior
and human performance, vol. 16, no. 1, pp. 27–44, 1976.
[178] T. P. German and H. C. Barrett, “Functional fixedness in a techno-
logically sparse culture,” Psychological Science, vol. 16, no. 1, pp.
1–5, 2005.
[179] A. Dasgupta, S. Burrows, K. Han, and P. J. Rasch, “Empirical
Analysis of the Subjective Impressions and Objective Measures
of Domain Scientists’ Visual Analytic Judgments,” in ACM Con-
ference on Human Factors in Computing Systems (CHI), 2017, pp.
1193—-1204.
[180] O. Kaplan, G. Yamamoto, Y. Yoshitake, T. Taketomi, C. Sandor,
and H. Kato, “In-situ visualization of pedaling forces on cycling
training videos,” in Systems, Man, and Cybernetics (SMC), 2016
IEEE International Conference on. IEEE, 2016, pp. 000 994–000 999.
[181] M. Mortell, H. H. Balkhy, E. B. Tannous, and M. T. Jong,
“Physician defiance towards hand hygiene compliance: Is there a
theory–practice–ethics gap?” Journal of the Saudi Heart Association,
vol. 25, no. 3, pp. 203–208, 2013.
[182] D. F. Baker, “Enhancing group decision making: An exercise to
reduce shared information bias,” Journal of Management Education,
vol. 34, no. 2, pp. 249–279, 2010.
[183] M. Oghbaie, M. J. Pennock, and W. B. Rouse, “Understanding
the efficacy of interactive visualization for decision making for
complex systems,” in Systems Conference (SysCon), 2016 Annual
IEEE. IEEE, 2016, pp. 1–6.
[184] W. B. Jackson and J. V. Jucker, “An empirical study of travel time
variability and travel choice behavior,” Transportation Science,
vol. 16, no. 4, pp. 460–475, 1982.
[185] S. S. Brehm, “Psychological reactance and the attractiveness of
unobtainable objects: Sex differences in children’s responses to
an elimination of freedom,” Sex Roles, vol. 7, no. 9, pp. 937–949,
1981.
[186] S. Stoppel and S. Bruckner, “Vol²velle: Printable interactive vol-
ume visualization,” IEEE transactions on visualization and computer
graphics, vol. 23, no. 1, pp. 861–870, 2017.
[187] D. Antons and F. T. Piller, “Opening the black box of not invented
here: attitudes, decision biases, and behavioral consequences,”
The Academy of Management Perspectives, vol. 29, no. 2, pp. 193–
217, 2015.
[188] L. Ross and C. Stillinger, “Barriers to conflict resolution,” Negoti-
ation Journal, vol. 7, no. 4, pp. 389–404, 1991.
[189] S. M. Garcia, H. Song, and A. Tesser, “Tainted recommendations:
The social comparison bias,” Organizational Behavior and Human
Decision Processes, vol. 113, no. 2, pp. 97–101, 2010.
[190] M. S. McGlone and J. Tofighbakhsh, “Birds of a feather flock con-
jointly (?): Rhyme as reason in aphorisms,” Psychological Science,
vol. 11, no. 5, pp. 424–428, 2000.
[191] J. Tobacyk, G. Milford, T. Springer, and Z. Tobacyk, “Paranormal
beliefs and the barnum effect,” Journal of Personality Assessment,
vol. 52, no. 4, pp. 737–739, 1988.
[192] J. S. B. Evans, J. L. Barston, and P. Pollard, “On the conflict
between logic and belief in syllogistic reasoning,” Memory &
cognition, vol. 11, no. 3, pp. 295–306, 1983.
[193] D. Kahneman and A. Tversky, “Subjective probability: A judg-
ment of representativeness,” Cognitive psychology, vol. 3, no. 3,
pp. 430–454, 1972.
[194] J. Baron, J. Beattie, and J. C. Hershey, “Heuristics and biases in
diagnostic reasoning: Ii. congruence, information, and certainty,”
Organizational Behavior and Human Decision Processes, vol. 42,
no. 1, pp. 88–110, 1988.
[195] J. L. Voss, K. D. Federmeier, and K. A. Paller, “The potato
chip really does look like elvis! neural hallmarks of conceptual
processing associated with finding novel shapes subjectively
meaningful,” Cerebral Cortex, vol. 22, no. 10, pp. 2354–2364, 2011.
[196] D. T. Gilbert, R. P. Brown, E. C. Pinel, and T. D. Wilson, “The
illusion of external agency.” J. Pers. Soc. Psychol., vol. 79, no. 5, p.
690, 2000.
[197] C. L. Hafer and L. Bègue, “Experimental research on just-world
theory: problems, developments, and future challenges.” Psycho-
logical bulletin, vol. 131, no. 1, p. 128, 2005.
[198] J. T. Jost, B. W. Pelham, and M. R. Carvallo, “Non-conscious forms
of system justification: Implicit and behavioral preferences for
higher status groups,” Journal of Experimental Social Psychology,
vol. 38, no. 6, pp. 586–602, 2002.
[199] K. G. Shaver, “Defensive attribution: Effects of severity and
relevance on the responsibility assigned for an accident.” J. Pers.
Soc. Psychol., vol. 14, no. 2, p. 101, 1970.
[200] D. T. Gilbert and P. S. Malone, “The correspondence bias.”
Psychological bulletin, vol. 117, no. 1, p. 21, 1995.
[201] D. M. Taylor and J. R. Doria, “Self-serving and group-serving bias
in attribution,” The Journal of Social Psychology, vol. 113, no. 2, pp.
201–211, 1981.
[202] J. A. Usher and U. Neisser, “Childhood amnesia and the begin-
nings of memory for four early life events.” Journal of Experimental
Psychology: General, vol. 122, no. 2, p. 155, 1993.
[203] E. Tulving, “Cue-dependent forgetting: When we forget some-
thing we once knew, it does not necessarily mean that the mem-
ory trace has been lost; it may only be inaccessible,” American
Scientist, vol. 62, no. 1, pp. 74–82, 1974.
[204] B. L. Fredrickson and D. Kahneman, “Duration neglect in retro-
spective evaluations of affective episodes.” Journal of personality
and social psychology, vol. 65, no. 1, p. 45, 1993.
[205] E. F. Loftus and J. C. Palmer, “Reconstruction of automobile
destruction: An example of the interaction between language
and memory,” Journal of verbal learning and verbal behavior, vol. 13,
no. 5, pp. 585–589, 1974.
[206] A. Koriat, M. Goldsmith, and A. Pansky, “Toward a psychology
of memory accuracy,” Annual review of psychology, vol. 51, no. 1,
pp. 481–537, 2000.
[207] H. M. Paterson, R. I. Kemp, and J. P. Forgas, “Co-witnesses,
confederates, and conformity: Effects of discussion and delay on
eyewitness memory,” Psychiatry, Psychology and Law, vol. 16, no.
sup1, pp. S112–S124, 2009.
[208] J. A. Gibbons, A. K. Velkey, and K. T. Partin, “Influence of recall
procedures on the modality effect with numbers and enumerated
stimuli,” Journal of gen. psychology, vol. 135, no. 1, pp. 84–104,
2008.
[209] J. D. Mayer, L. J. McCormick, and S. E. Strong, “Mood-congruent
memory and natural mood: New evidence,” Personality and Social
Psychology Bulletin, vol. 21, no. 7, pp. 736–746, 1995.
[210] M. Brenner, “The next-in-line effect,” Journal of Verbal Learning
and Verbal Behavior, vol. 12, no. 3, pp. 320–323, 1973.
[211] N. J. Slamecka, “An examination of trace storage in free recall.”
Journal of experimental psychology, vol. 76, no. 4p1, p. 504, 1968.
[212] M. Mather and L. L. Carstensen, “Aging and motivated cogni-
tion: The positivity effect in attention and memory,” Trends in
cognitive sciences, vol. 9, no. 10, pp. 496–502, 2005.
[213] E. J. O’Brien and J. L. Myers, “When comprehension difficulty
improves memory for text.” Journal of Experimental Psychology:
Learning, Memory, and Cognition, vol. 11, no. 1, p. 12, 1985.
[214] A. Jansari and A. J. Parkin, “Things that go bump in your life:
Explaining the reminiscence bump in autobiographical memory.”
Psychology and Aging, vol. 11, no. 1, p. 85, 1996.
[215] L. A. Henkel and K. J. Coffman, “Memory distortions in coerced
false confessions: A source monitoring framework analysis,”
Applied Cognitive Psychology, vol. 18, no. 5, pp. 567–588, 2004.
[216] R. L. Greene, “Spacing effects in memory: Evidence for a two-
process account.” Journal of Experimental Psychology: Learning,
Memory, and Cognition, vol. 15, no. 3, p. 371, 1989.
[217] J. Morton, R. G. Crowder, and H. A. Prussin, “Experiments with
the stimulus suffix effect.” 1971.
[218] S. M. Janssen, A. G. Chessa, and J. M. Murre, “Memory for time:
How people date events,” Memory & cognition, vol. 34, no. 1, pp.
138–147, 2006.
[219] M. A. McDaniel, J. L. Anderson, M. H. Derbish, and N. Mor-
risette, “Testing the testing effect in the classroom,” European
Journal of Cognitive Psychology, vol. 19, no. 4-5, pp. 494–513, 2007.
[220] A. S. Brown, “A review of the tip-of-the-tongue experience.”
Psychological bulletin, vol. 109, no. 2, p. 204, 1991.
[221] J. Poppenk, G. Walia, A. McIntosh, M. Joanisse, D. Klein, and
S. Köhler, “Why is the meaning of a sentence better remembered
than its form? An fMRI study on the role of novelty-encoding
processes,” Hippocampus, vol. 18, no. 9, pp. 909–918, 2008.
[222] M. A. McDaniel and G. O. Einstein, “Bizarre imagery as an
effective memory aid: The importance of distinctiveness.” Journal
of Exp. Psychology: Learning, Memory, and Cognition, vol. 12, 1986.
[223] M. Cary and L. M. Reder, “A dual-process account of the list-
length and strength-based mirror effects in recognition,” Journal
of Memory and Language, vol. 49, no. 2, pp. 231–248, 2003.
[224] A. Parker, E. Wilding, and C. Akerman, “The von Restorff effect
in visual object recognition memory in humans and monkeys:
The role of frontal/perirhinal interaction,” Journal of Cognitive
Neuroscience, vol. 10, no. 6, pp. 691–703, 1998.
[225] M. Mather and M. K. Johnson, “Choice-supportive source moni-
toring: Do our decisions seem better to us as we age?” Psychology
and aging, vol. 15, no. 4, p. 596, 2000.
[226] B. Fischhoff and R. Beyth, “I knew it would happen: Remem-
bered probabilities of once-future things,” Organizational Behavior
and Human Performance, vol. 13, no. 1, pp. 1–16, 1975.
[227] T. R. Mitchell, L. Thompson, E. Peterson, and R. Cronk, “Tem-
poral adjustments in the evaluation of events: The rosy view,”
Journal of Experimental Social Psychology, vol. 33, no. 4, pp. 421–
448, 1997.
[228] J. W. Tanaka, M. Kiefer, and C. M. Bukach, “A holistic account
of the own-race effect in face recognition: Evidence from a cross-
cultural study,” Cognition, vol. 93, no. 1, pp. B1–B9, 2004.
[229] R. J. Crutcher and A. F. Healy, “Cognitive operations and the
generation effect.” Journal of Experimental Psychology: Learning,
Memory, and Cognition, vol. 15, no. 4, p. 669, 1989.
[230] T. B. Rogers, N. A. Kuiper, and W. S. Kirker, “Self-reference and
the encoding of personal information.” Journal of personality and
social psychology, vol. 35, no. 9, p. 677, 1977.
[231] R. E. Nisbett and T. D. Wilson, “The halo effect: Evidence for
unconscious alteration of judgments.” Journal of personality and
social psychology, vol. 35, no. 4, p. 250, 1977.
[232] B. Monin and D. T. Miller, “Moral credentials and the expression
of prejudice.” Journal of personality and social psychology, vol. 81,
no. 1, p. 33, 2001.
[233] S. T. Fiske, “Attention and weight in person perception: The
impact of negative and extreme behavior.” Journal of personality
and Social Psychology, vol. 38, no. 6, p. 889, 1980.
[234] D. A. Schkade and D. Kahneman, “Does living in California
make people happy? A focusing illusion in judgments of life
satisfaction,” Psychological Science, vol. 9, no. 5, pp. 340–346, 1998.
[235] B. Nyhan and J. Reifler, “When corrections fail: The persistence
of political misperceptions,” Political Behavior, vol. 32, no. 2, pp.
303–330, 2010.
[236] M. Spranca, E. Minsk, and J. Baron, “Omission and commission
in judgment and choice,” Journal of experimental social psychology,
vol. 27, no. 1, pp. 76–105, 1991.
[237] D. P. Crowne and D. Marlowe, “A new scale of social desirability
independent of psychopathology.” Journal of consulting psychol-
ogy, vol. 24, no. 4, p. 349, 1960.
[238] L. A. Rudman and S. A. Goodwin, “Gender differences in auto-
matic in-group bias: Why do women like women more than men
like men?” Journal of personality and social psychology, vol. 87, no. 4,
p. 494, 2004.
[239] J. D. Coley, M. Arenson, Y. Xu, and K. D. Tanner, “Intuitive
biological thought: Developmental changes and effects of biology
education in late adolescence,” Cognitive psychology, vol. 92, pp.
1–21, 2017.
[240] A. Waytz, J. Cacioppo, and N. Epley, “Who sees human? The
stability and importance of individual differences in anthropo-
morphism,” Perspectives on Psychological Science, vol. 5, no. 3, pp.
219–232, 2010.
[241] J. Jecker and D. Landy, “Liking a person as a function of doing
him a favour,” Human Relations, vol. 22, no. 4, pp. 371–378, 1969.
[242] E. Pronin and M. B. Kugler, “Valuing thoughts, ignoring behav-
ior: The introspection illusion as a source of the bias blind spot,”
Journal of Experimental Social Psychology, vol. 43, no. 4, pp. 565–
578, 2007.
[243] E. Pronin, J. Kruger, K. Savitsky, and L. Ross, “You don’t know
me, but I know you: The illusion of asymmetric insight.” J. Pers.
Soc. Psychol., vol. 81, no. 4, p. 639, 2001.
[244] D. Dunning, J. A. Meyerowitz, and A. D. Holzberg, “Ambiguity
and self-evaluation: The role of idiosyncratic trait definitions in
self-serving assessments of ability.” Journal of personality and social
psychology, vol. 57, no. 6, p. 1082, 1989.
[245] A. H. Hastorf and H. Cantril, “They saw a game; a case study.”
The Journal of Abnormal and Social Psychology, vol. 49, no. 1, p. 129,
1954.
[246] E. Pronin and L. Ross, “Temporal differences in trait self-
ascription: when the self is seen as an other.” J. Pers. Soc. Psychol.,
vol. 90, no. 2, p. 197, 2006.
[247] J. Różycka-Tran, P. Boski, and B. Wojciszke, “Belief in a zero-
sum game as a social axiom: A 37-nation study,” Journal of Cross-
Cultural Psychology, vol. 46, no. 4, pp. 525–548, 2015.
[248] A. C. Janes, D. A. Pizzagalli, S. Richardt, B. de B. Frederick, A. J.
Holmes, J. Sousa, M. Fava, A. E. Evins, and M. J. Kaufman,
“Neural substrates of attentional bias for smoking-related cues:
An fMRI study,” Neuropsychopharmacology, vol. 35, no. 12, pp. 2339–
2345, 2010.
[249] J. W. Choi, G. W. Hecht, and W. B. Tayler, “Strategy selection,
surrogation, and strategic performance measurement systems,”
Journal of Accounting Research, vol. 51, no. 1, pp. 105–133, 2013.
Fig. 2: Our taxonomy organized by the flavors discussed in Sec. 3.4. Each dot is a cognitive bias. Colors encode the task category (ESTIMATION, DECISION, HYPOTHESIS ASSESSMENT, CAUSAL ATTRIBUTION, RECALL, OPINION REPORTING, and OTHER).
TABLE 1: Legend of the relation of each cognitive bias to visualization research (levels #1–#8, used in the “Relevance to InfoVis” column of Table 2).
#8 Evidence for the alleviation of the cognitive bias in visualization
#7 Evidence for the existence of the cognitive bias in visualization
#6 Studied in visualization, but no clear evidence of existence or alleviation
#5 Discussed in visualization research as important, but not yet studied
#4 Not discussed in visualization but likely relevant
#3 Probably relevant to visualization
#2 Potentially relevant to visualization
#1 Relevance to visualization currently unclear
TABLE 2: Our taxonomy of cognitive biases classified by the tasks ESTIMATION, DECISION, HYPOTHESIS ASSESSMENT, CAUSAL ATTRIBUTION, RECALL, OPINION REPORTING, and OTHER. Rows are grouped first by task category and then by “Flavor”, the phenomenon behind each bias (discussed in Sec. 3.4). Each row then lists the bias number, the name of the cognitive bias, a representative peer-reviewed paper (“Ref”, described in Sec. 3.2), its “Relevance to InfoVis” (whether the cognitive bias has been examined in information visualization research, with references to the corresponding papers; discussed in Sec. 4), and a short description. The coding of the “Relevance to InfoVis” levels (#1–#8) is explained in detail in the legend in Table 1.
TASK: ESTIMATION
Flavor: Association
1 Availability bias [52] #5 [26], [146] Events more probable if easy to remember
2 Conjunction fallacy [47] #5 [9] Specific outcomes more probable than general
3 Empathy gap [147] #1 Estimations affected by not recognizing the role of current emotional state
4 Time-saving bias [148] #4 Overestimate time saved when increasing speed
Flavor: Baseline
5 Anchoring effect [50] #7 [7], [64] Estimation affected by first piece of information
6 Base rate fallacy [46] #6 [59], [60] Ignore base rate probability of general population
7 Dunning-Kruger effect [57] #5 [149] Low-ability people overestimate their performance (opposite for high-ability)
8 Gambler's fallacy [17] #4 Current outcome that is more frequent will be less frequent in future
9 Hard-easy effect [56] #3 Overconfidence for hard tasks, underconfidence for easy
10 Hot-hand fallacy [150] #5 [146] Current outcome that is more frequent will be more frequent in future
11 Insensitivity to sample size [17] #5 [9], [10] Estimate probability ignoring sample size
12 Regressive bias [151] #4 Overestimate high probabilities, underestimate low ones
13 Subadditivity effect [152] #4 Overall probability less than the probabilities of the parts
14 Weber-Fechner law [42] #6 [153] Failure to perceive small differences in large quantities
Flavor: Inertia
15 Conservatism [48] #7 [92] New information insufficiently updates probability estimates
Flavor: Outcome
16 Exaggerated expectation [154] #4 Exaggerating evidence to fit a conclusion
17 Illusion of validity [17] #5 [9] Overconfidence in judgment based on intuition and anecdotes
18 Impact bias [155] #1 Predict future emotional reactions as more intense
19 Outcome bias [156] #2 Evaluate a decision maker only by the choice outcome
20 Planning fallacy [53] #5 [67] Overoptimistic task completion predictions, especially for self
21 Restraint bias [157] #1 Overestimation of one's ability to resist temptation
22 Sexual overperception bias [158] #1 Over- or underestimate of romantic interest from others
Flavor: Self-perspective
23 Curse of knowledge [66] #7 [65] Experts assume that novices have same knowledge
24 Extrinsic incentives bias [159] #1 Others have extrinsic motivations (e.g. money), self are intrinsic (e.g. learning)
25 False consensus effect [160] #2 Overestimate the agreement of others with own opinions
26 Illusion of control [161] #3 Overestimation of one's influence on an external event
27 Illusion of transparency [162] #1 Overestimate insight of others into own mental state, and vice versa
28 Naive cynicism [134] #2 Predict that others will be more egocentrically biased
29 Optimism bias [51] #4 Positive outcomes more probable for oneself than others
30 Out-group homogeneity bias [163] #4 Estimate out-group will be more homogeneous than in-group members
31 Pessimism bias [164] #4 Positive outcomes less probable for oneself than others
32 Spotlight effect [165] #1 Overestimate probability that people notice one's appearance/behavior
33 Worse-than-average effect [166] #3 Underestimate own achievements relative to others in difficult tasks
TASK: DECISION
Flavor: Association
34 Ambiguity effect [72] #4 Choices affected by their association with unknown outcomes
35 Authority bias [167] #1 Choices affected by their association with authority
36 Automation bias [168] #5 [169] Choices affected by their association with an automated system
37 Framing effect [18] #5 [9], [10] Choices affected by whether alternatives are presented as gains or losses
38 Hyperbolic discounting [78] #4 Choices affected by small short-term rewards
39 Identifiable victim effect [93] #6 [94] Donation choices affected by whether victims are identifiable
40 Loss aversion [74] #7 [88], [92] Choices affected by whether alternatives are gains or losses
41 Neglect of probability [170] #4 Choices affected by disregard of probability
42 Pseudocertainty effect [171] #5 [9] Choices affected by whether some alternatives are framed as certain
43 Zero-risk bias [73] #4 Choices affected by alternatives with complete risk elimination
Flavor: Baseline
44 Attraction effect [75] #8 [6], [91] Choices affected by irrelevant dominated alternatives
45 Ballot names bias [39] #6 [39] Voting choices affected by the order of candidate names
46 Cheerleader effect [172] #1 Choices affected by whether people are in a group
47 Compromise effect [41] #5 [6] Choices affected if presented as extreme or average alternatives
48 Denomination effect [173] #4 Choices affected by whether the total amount comes in smaller currency bills
49 Disposition effect [174] #4 Selling choices affected by initial and not current value
50 Distinction bias [175] #4 Choices affected by the number of alternatives
51 Less-is-better effect [76] #4 Choices affected if alternatives are presented separately or juxtaposed
52 Money illusion [176] #4 Choices affected by nominal monetary values
53 Phantom effect [40] #4 Choices affected by dominant but unavailable alternatives
Flavor: Inertia
54 Endowment effect [80] #4 Choices affected by ownership of alternatives
55 Escalation of commitment [177] #4 Choices affected by continued commitment to suboptimal outcome
56 Functional fixedness [178] #4 Choices of object use affected by the traditional way of use
57 Mere-exposure effect [77] #7 [10], [179], [180] Choices affected by familiarity (repeated exposure)
58 Semmelweis reflex [181] #3 Choices of medical practices affected by previously established norms
59 Shared information bias [182] #4 Group choices affected by sharing only known information
60 Status quo bias [81] #5 [183] Choices affected by the urge to avoid a change, even when a change has better expected value
61 Well-traveled road effect [184] #4 Travel route choices affected by road familiarity
Flavor: Outcome
62 Reactance [185] #4 Choices affected by the urge to do the opposite of what someone wants you to do
Flavor: Self-perspective
63 IKEA effect [79] #5 [186] Choices affected by whether alternatives involved self-effort
64 Not invented here [187] #4 Choices affected by alternatives of origin external to an organization
65 Reactive devaluation [188] #4 Choices affected by whether alternatives allegedly originated with an antagonist
66 Social comparison bias [189] #1 Hiring choices affected by own competences
TASK: HYPOTHESIS ASSESSMENT
Flavor: Association
67 Illusory truth effect [97] #4 Statement considered true after repeated exposure to it
68 Rhyme as reason effect [190] #1 Statement more likely true if it rhymes
Flavor: Outcome
69 Barnum effect [191] #2 High accuracy ratings for vague and general statements
70 Belief bias [192] #3 Hypothesis true if conclusion is believable
71 Clustering illusion [193] #5 [10], [149] Seeing patterns in noise, e.g. clusters in a dot field
72 Confirmation bias [19] #5 [10] Favor reasoning or information that confirms preferred hypothesis
73 Congruence bias [98] #4 Seeking confirmation of preferred hypothesis, but not for alternatives
74 Experimenter effect [100] #3 Subconsciously influence study participants to confirm a hypothesis
75 Illusory correlation [99] #5 [9] Perceived relationship between variables that does not exist
76 Information bias [194] #4 Seek additional information irrelevant to a hypothesis or action
77 Pareidolia [195] #4 Seeing faces in noise, e.g. in your toast
TASK: CAUSAL ATTRIBUTION
Flavor: Outcome
78 Group attribution error [113] #2 Group traits extrapolated to an individual member
79 Hostile attribution bias [111] #1 Ambiguous intents read as hostile
80 Illusion of external agency [196] #2 Positive outcomes attributed to mysterious external agents
81 Just-world hypothesis [197] #2 What goes around comes around
82 System justification [198] #3 Inertial bias for unfair systems (e.g. slavery)
Flavor: Self-perspective
83 Actor-observer asymmetry [110] #3 Failures of others due to behavior or personality, own failures due to situation
84 Defensive attribution hypothesis [199] #1 Failure or mishap of others judged by own similarity with the actor
85 Egocentric bias [108] #3 Own contribution overestimated
86 Fundamental attribution error [200] #1 Failures of others due to behavior or personality, own failures due to situation
87 In-group favoritism [201] #1 Success and positive traits attributed to ingroup members over outgroup
88 Self-serving bias [109] #2 Own achievements attributed to behavior or personality, failures to situation
89 Ultimate attribution error [112] #3 Failures of outgroup due to behavior or personality, ingroup failures due to situation
TASK: RECALL
Flavor: Association
90 Childhood amnesia [202] #1 Harder to recall event details before a certain age
91 Cryptomnesia [127] #3 Memory mistaken for imagination or inspiration (e.g. unintentional plagiarism)
92 Cue-dependent forgetting [203] #4 Failure to recall information without memory cues
93 Digital amnesia [126] #3 Less likely to remember easily searchable information
94 Duration neglect [204] #2 Recall unpleasant experiences according to intensity, ignoring duration
95 Fading affect bias [121] #1 Emotion of unpleasant events fades, but pleasant does not
96 False memory [205] #1 Imagination mistaken for a memory
97 Humor effect [123] #3 Easier to recall humorous items
98 Leveling and sharpening [206] #4 Recall sharpens some features, weakens others
99 Levels-of-processing effect [125] #4 Easier to recall the result of deep-level analysis
100 Misinformation effect [207] #3 Recall colored by new information
101 Modality effect [208] #4 Easier to recall items presented auditorily than visually
102 Mood-congruent memory [209] #1 Recall biased toward mood-congruent memories
103 Next-in-line effect [210] #2 Failure to recall words of the previous speaker when speaking in turns
104 Part-list cueing effect [211] #4 Harder to recall material after reexposure to a subset
105 Picture superiority effect [118] #4 Easier to recall images (symbolic representations) than words
106 Positivity effect [212] #1 Easier to recall positive events than negative
107 Processing difficulty effect [213] #4 Easier to recall information that was hard to comprehend
108 Reminiscence bump [214] #1 Easier to recall events from adolescence and early adulthood
109 Source confusion [215] #2 Memory distorted after hearing people speak about a situation
110 Spacing effect [216] #3 Easier to recall information from spaced than massed exposures
111 Suffix effect [217] #2 Recency effect diminished by an irrelevant sound at list end
112 Suggestibility [129] #1 Ideas suggested by a questioner mistaken for memory
113 Telescoping effect [218] #3 Temporal displacement of an event
114 Testing effect [219] #3 Recall tests lead to better memory than recognition tests
115 Tip of the tongue phenomenon [220] #1 Recall parts of an item but not the whole
116 Verbatim effect [221] #1 Easier to recall gist than verbatim wording
117 Zeigarnik effect [122] #1 Easier to recall interrupted tasks than completed ones
Flavor: Baseline
118 Bizarreness effect [222] #4 Easier to recall bizarre items
119 List-length effect [223] #4 Harder to recall items from longer lists
120 Serial-positioning effect [95] #4 Best recall for first (primacy) and last (recency) items in a series
121 Von Restorff effect [224] #4 Distinct items are better remembered
Flavor: Inertia
122 Continued influence effect [49] #5 [5] Recall only first information even after correction
Flavor: Outcome
123 Choice-supportive bias [225] #2 Recall past choices as better than they were
124 Hindsight bias [226] #5 [9] Recall past predictions as more accurate after seeing the outcome
125 Rosy retrospection [227] #1 Remember past overly positively
Flavor: Self-perspective
126 Cross-race effect [228] #1 More difficulty distinguishing people of an outgroup race
127 Self-generation effect [229] #4 Self-generated content is easier to recall than if simply read
128 Self-reference effect [230] #1 Easier to recall self-related information
TASK: OPINION REPORTING
Flavor: Association
129 Halo effect [231] #2 Personality trait ascription affected by overall attractiveness
130 Moral credential effect [232] #1 Non-prejudice credentials allow prejudicial statements
131 Negativity bias [233] #3 Social judgments affected more by negative than positive information
Flavor: Baseline
132 Focusing effect [234] #4 Beliefs based on the most pronounced part of given information
Flavor: Inertia
133 Backfire effect [235] #3 Prior beliefs stronger when correction attempted
134 Omission bias [236] #3 Moral blame affected by whether the harm was due to inaction
Flavor: Outcome
135 Bandwagon effect [133] #2 Beliefs affected by opinions of others
136 Moral luck [137] #1 Moral blame depends on event outcome, not just intent and action
137 Social desirability bias [237] #2 Respond in questionnaires in a socially approved manner
138 Stereotyping [136] #3 Assuming characteristics of a group member
139 Women are wonderful effect [238] #1 Associate more positive characteristics with women
Flavor: Self-perspective
140 Anthropocentric thinking [239] #1 Humans are the center of the universe
141 Anthropomorphism [240] #5 [94] Humans as an analogical base for reasoning about non-human life and processes
142 Ben Franklin effect [241] #1 Opinion of others is affected by one's behavior towards them
143 Bias blind spot [242] #2 Belief that biases are more prevalent in others than oneself
144 Illusion of asymmetric insight [243] #1 Belief that one knows more about others than others know about oneself
145 Illusory superiority [244] #2 Personality traits favorable to oneself over others
146 Naive realism [245] #1 Belief that we experience objects in our world objectively
147 Third-person effect [135] #2 Others more vulnerable to mass media messages than oneself
148 Trait ascription bias [246] #2 Own traits are variable, others are predictable
149 Zero-sum bias [247] #1 Belief that one person's gain is another's loss
TASK: OTHER
Flavor: Association
150 Attentional bias [248] #3 People extract or process information in a weighted manner
Flavor: Baseline
151 Risk compensation [140] #3 Risk tolerance based on constant risk, not minimization
152 Surrogation [249] #3 Metrics or models overtake what they were constructed to measure
153 Unit bias [138] #1 People eat more food from bigger containers
Flavor: Outcome
154 Ostrich effect [139] #4 Avoiding negative information
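For readers who want to query the taxonomy programmatically, the sketch below shows one possible way to encode a Table 2 row as a record and filter it by task and relevance level. This encoding is purely illustrative and is not part of the published taxonomy: the field names are the editor's assumption, and the two sample entries simply copy rows 1 and 44 of the table above.

# Illustrative sketch (editor's assumption, not the authors' artifact):
# encoding Table 2 rows as records so they can be filtered programmatically.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BiasEntry:
    number: int                 # "#" column (1-154)
    task: str                   # task category, e.g. "ESTIMATION"
    flavor: str                 # "Flavor" group, e.g. "Association"
    name: str                   # name of the cognitive bias
    ref: int                    # representative reference number ("Ref" column)
    relevance_level: int        # "Relevance to InfoVis" level 1-8 (see Table 1)
    infovis_refs: List[int] = field(default_factory=list)  # citing InfoVis papers
    description: str = ""       # short description


# Two sample entries copied from rows 1 and 44 of Table 2.
TAXONOMY = [
    BiasEntry(1, "ESTIMATION", "Association", "Availability bias", 52, 5,
              [26, 146], "Events more probable if easy to remember"),
    BiasEntry(44, "DECISION", "Baseline", "Attraction effect", 75, 8,
              [6, 91], "Choices affected by irrelevant dominated alternatives"),
]


def biases_for_task(entries, task, min_relevance=1):
    """Return the biases of one task category, optionally keeping only those
    at or above a given relevance level (higher means stronger evidence)."""
    return [e for e in entries
            if e.task == task and e.relevance_level >= min_relevance]


if __name__ == "__main__":
    for entry in biases_for_task(TAXONOMY, "DECISION", min_relevance=7):
        print(f"{entry.name}: {entry.description}")

With the two sample entries, the example prints only the attraction effect, the one DECISION bias at relevance level 7 or above.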
Evanthia Dimara Evanthia Dimara is a research
scientist at the ISIR Laboratory (HCI group) of
Sorbonne University. Her fields of research are
human-computer interaction and information vi-
sualization. Her focus is on decision making –
how to help people make unbiased and informed
decisions alone or in groups.
Steven Franconeri Steven Franconeri is a Pro-
fessor of Psychology at Northwestern University,
and Director of the Northwestern Cognitive Sci-
ence Program. He studies visuospatial thinking
and visual communication, across psychology,
education, and information visualization.
Catherine Plaisant Catherine Plaisant is a research scientist at the University of Maryland, College Park, and assistant director of research at the University of Maryland Human-Computer Interaction Lab.
Anastasia Bezerianos Anastasia Bezerianos is an assistant professor at Univ. Paris-Sud and a member of the ILDA Inria team, France. Her interests include interaction and visualization designs for large displays, visual perception, user evaluation, and collaborative work.
Pierre Dragicevic Pierre Dragicevic is a perma-
nent research scientist at Inria, France, in the
Aviz team. His research interests include data
physicalization, and visualizations for judgment
and decision making.
TABLE 3: Alternative names for cognitive biases
Bias: Synonym / Similar biases
Actor-observer asymmetry: Actor-observer bias
Anchoring effect: Focalism or Anchoring
Anthropomorphism: Personification
Attraction effect: Decoy effect or Asymmetric dominance effect
Availability bias: Availability heuristic
Base rate fallacy: Base rate neglect or Base rate bias or Extension neglect
Childhood amnesia: Infantile amnesia
Choice-supportive bias: Post-purchase rationalization
Clustering illusion: Texas sharpshooter fallacy
Confirmation bias: Confirmatory bias or Myside bias or Semmelweis reflex
Conjunction fallacy: Linda problem
Conservatism: Belief revision
Cross-race effect: Cross-race bias or Other-race bias or Own-race bias
Cryptomnesia: Inadvertent plagiarism
Cue-dependent forgetting: Context effect or Retrieval failure
Digital amnesia: Google effect
Duration neglect: Extension neglect or Peak-end rule
Hard-easy effect: Discriminability effect or Difficulty effect
Hyperbolic discounting: Dynamic inconsistency
Empathy gap: Projection bias
Endowment effect: Divestiture aversion or Mere ownership effect
Escalation of commitment: Irrational escalation of commitment or Commitment bias or Sunk cost fallacy
Experimenter effect: Experimenter-expectancy effect or Observer-expectancy effect or Experimenter's bias or Expectancy bias or Subject-expectancy effect
Fading affect bias: FAB
Focusing effect: Focusing illusion
Barnum effect: Forer effect or Subjective validation or Acceptance phenomenon or Personal validation
Fundamental attribution error: Correspondence bias or Attribution effect
Gambler's fallacy: Monte Carlo fallacy or Fallacy of the maturity of chances
Illusory truth effect: Truth effect or Illusion-of-truth effect or Reiteration effect or Validity effect or Frequency-validity relationship or Availability cascade
Illusory superiority: Above-average effect or Superiority bias or Leniency error or Sense of relative superiority or Primus inter pares effect or Lake Wobegon effect
Impact bias: Durability bias
In-group favoritism: In-group bias or In-group/out-group bias or Intergroup bias
Hindsight bias: Knew-it-all-along effect or Creeping determinism
Hostile attribution bias: Hostile attribution of intent
Hot-hand fallacy: Hot hand phenomenon or Hot hand
IKEA effect: Effort justification
Lag effect: Spacing effect
Law of the instrument: Maslow's gavel or Golden hammer
Mere-exposure effect: Familiarity principle
Money illusion: Price illusion
Negativity bias: Negativity effect
Not invented here: NIH
Optimism bias: Unrealistic optimism or Comparative optimism
Pessimism bias: Pessimistic bias
Rhyme-as-reason effect: Eaton-Rosen phenomenon
Risk compensation: Peltzman effect
Self-generation effect: Generation effect
Self-reference effect: Self-relevance effect
Serial-position effect: Recency effect or Primacy effect
Survivorship bias: Survival bias
Telescoping effect: Telescoping bias
Testing effect: Retrieval practice or Practice testing or Test-enhanced learning
Law of triviality: Parkinson's law of triviality
Von Restorff effect: Isolation effect
Weber-Fechner law: Weber's law or Fechner's law
Worse-than-average effect: Below-average effect