How Deceptive are Deceptive Visualizations?:
An Empirical Analysis of Common Distortion Techniques
Anshul Vikram Pandey
School of Engineering,
New York University
anshul.pandey@nyu.edu
Katharina Rall
School of Law,
New York University
kr1326@nyu.edu
Margaret L. Satterthwaite
School of Law,
New York University
satterth@exchange.law.nyu.edu
Oded Nov
School of Engineering,
New York University
onov@nyu.edu
Enrico Bertini
School of Engineering,
New York University
enrico.bertini@nyu.edu
ABSTRACT
In this paper, we present an empirical analysis of deceptive
visualizations. We start with an in-depth analysis of what de-
ception means in the context of data visualization, and cate-
gorize deceptive visualizations based on the type of deception
they lead to. We identify popular distortion techniques and
the type of visualizations those distortions can be applied to,
and formalize why deception occurs with those distortions.
We create four deceptive visualizations using the selected dis-
tortion techniques, and run a crowdsourced user study to iden-
tify the deceptiveness of those visualizations. We then present
the findings of our study and show how deceptive each of
these visual distortion techniques is, and for what kind of
questions the misinterpretation occurs. We also analyze indi-
vidual differences among participants and present the effect
of some of those variables on participants’ responses. This
paper presents a first step in empirically studying deceptive
visualizations, and will pave the way for more research in
this direction.
Author Keywords
Deceptive Visualization; Empirical Analysis; Evaluation.
ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI):
Miscellaneous
Preprint of the full paper published at ACM CHI 2015.
INTRODUCTION
In recent years, data visualization has gained popularity as
a powerful communication tool to support arguments with
numbers while still making messages accessible. In fields
as disparate as business, policy analysis, human rights, and
journalism [14, 35], specialists and laypersons are using data
to shape compelling, informative, and convincing narratives,
conveyed through or supported by visualizations. While the
use of such visual depictions as persuasion devices is not
new, the popular use of visualizations has undoubtedly in-
creased due in part to user-friendly software that allows non-
experts to create visualizations. As such practices become
more widespread and accessible, important new challenges
and questions arise. If visualizations can make messages
more accessible, comprehensible and persuasive [27, 37], vi-
sual representations can also be easily misused and misunder-
stood - even by their creators.
This problem has been known for a long time, and it is not
limited to visual representations but extends to the more general prob-
lem of communicating through numbers and statistics. Dar-
rell Huff’s "How to Lie with Statistics", published in 1954,
popularized the problem and warned against the many traps
of using statistics and charts in communication [15]. In the
1980s, Edward Tufte introduced the concept of graphical in-
tegrity in his classic "Visual Display of Quantitative Informa-
tion" and succinctly explained the many subtle ways in which
data graphics can distort information [38].
Despite their influence, these seminal works have not pre-
vented distortion through data visualization. If anything, the
heightened popularity of data visualization may have actually
increased the prevalence of such cases and the impact they
have on the population at large. Examples abound in popu-
lar media, TV and the Internet [4]. Some examples collected
from popular sources [3, 1, 2] involve notorious distortions,
such as manipulation of axis orientation/scale (Figure 1[a,c]),
use of disproportionate sizes (Figure 1[f]), incorrect represen-
tation (Figure 1[d]) and non-linear scales (Figure 1[b,e]).
Misrepresentations may result from a lack of expert knowl-
edge, as appears to be the case in distorted representations
collected from human rights organizations. However, the
possibility of influencing the audience can also create an in-
centive to use distortions intentionally - either in a zeal to
convince, in the case of advocates, or in an attempt to mis-
lead, in the case of unethical marketing firms. While jour-
nalists may run the risk of accidentally distorting visualiza-
tions due to their significant time constraints, some very ob-
vious examples of media misrepresentations suggest that dis-
torted visualizations are sometimes used intentionally. While
the motives behind these types of distortion differ dramat-
ically from an ethical point of view, their visual character-
istics and perception by the audience are largely the same.
Figure 1. Some of the real-world data visualization examples which might lead to misinterpretation of the message, hence to deception.
In this work we take a step toward understanding the extent
to which audiences are deceived and whether there is a rela-
tionship between deception and individual differences among
people. While the literature cited above, and many newly
available works [4], warn against the danger of deceptive vi-
sualizations, we are, surprisingly, not aware of any empirical
work aimed at assessing the severity of deception. To close
this gap, we designed and ran a series of crowdsourced stud-
ies aimed at understanding the deceptive effect of distortion
techniques.
Our studies stem from a preliminary analysis of existing de-
ceptive visualizations which led us to (1) categorize decep-
tive visualization effects and focus on two main classes: mes-
sage reversal and message exaggeration/understatement and
(2) derive synthetic examples to reproduce these effects in a
controlled environment.
In the studies, we selected a set of common misrepresen-
tation techniques from the classes we identified as frequent
and created deceptive and non-deceptive versions of the same
charts. We also collected personal traits of participants related
to education, chart familiarity and visual ability to examine
whether these traits play a role as co-factors on deception.
Our results show that deceptive charts have a major impact
on how people interpret a message and that in some cases this
effect is modulated by some personal attributes we included
in the study.
The main contributions of our work are: (1) the definition and
classification of deceptive methods in visualization; (2) the
empirical confirmation and measurement of some of the well-
known graphical distortion techniques; and (3) the empirical
analysis of the effect of personal attributes on the deceptive
effect.
We believe this is an important first step toward a better char-
acterization and understanding of how visualizations impact
their readers. By studying the deceptive effect we expand
some of the recent research on the cognitive and social effects
of visualization, including research on bias [16], memorabil-
ity [7, 9], literacy [11], and persuasion [27].
RELATED WORKS
The fact that it is possible to "lie" with statistics and visual
representations has been known for a very long time in areas
related to data analysis and representation. The 1950s classic
"How to Lie with Statistics" introduced numerous methods
through statistical communication can lead to misinformation
[15]. In the 1980s, Tufte developed the concept of graphical
integrity and the lie factor to describe how visual representa-
tion can distort information and deceive the reader [38]. Sim-
ilar in spirit, and building upon them, are two more recent
books on the same topics: "How to Lie with Charts" [17],
which focuses mostly on the use of charts in business envi-
ronments, and "How to Lie with Maps" [24], which focuses on
geographical visual representations. While all of these works
expose common deception patterns and provide guidelines to
spot and avoid them, we are not aware of studies that test the
deception effect in a controlled experiment.
Visualization researchers have, however, studied how visual
encoding can distort information at the level of perception. Par-
ticularly relevant is research on visual encoding which estab-
lishes how data is perceived and compared when represented
with different visual channels such as position, size, color,
angle. Bertin [8] introduced the concept of visual encod-
ing and visual channels and provided guidelines on how to
best use them. Cleveland and McGill in their famous exper-
iments on graphical perception discovered that some visual
channels lead to more accurate comparisons of quantitative
information than others [13] (e.g., position along a common
scale being the best one and area a poor one). Color, if not
used properly, can lead to numerous distortions. The semi-
nal work “How not to lie with visualization" [33], the more
recent "Rainbow color map (still) considered harmful" [10]
and numerous experiments on the topic show how poor color
selection can lead to numerous distortions [12, 21]. The per-
ception of correlation in scatter plots and parallel coordinates
can also be problematic: when asked to estimate correlations,
participants typically underestimate the positive correlations
and overestimate the negative correlations in parallel coordi-
nate plots [22, 31]. Mapping a numeric quantity to the radius
of a circle rather than area is another popular technique that
leads to distortion, and the use of area size has been found to
be problematic also when rectangles are used [19].
All these studies focus mostly on visual effectiveness and are
based on the careful selection and testing of specific percep-
tual tasks. For instance, Cleveland and McGill asked par-
ticipants to compare two bars marked with a dot in a bar
chart. While this is of course useful, we are more interested in
studying deception effects at the message level of visualiza-
tion because it better simulates what happens in the real world
when a reader is presented with a new chart (e.g., in a news-
paper, book, or presentation). Therefore, rather than asking
participants to compare two bars in a chart, our studies ask
participants to compare quantities between real-world objects
using the domain language, as described in the User Study
section. This, as explained in graph comprehension theory
[36], is a crucial difference as part of the tasks the reader will
need to perform is the translation between the question and
the mapping between the domain concepts and the graphical
representation (what Pinker calls graphical schemata [30]).
Since the deceptive effect may well happen during this trans-
lation and mapping phase of graph comprehension, we deter-
mined it would be important to run studies that focus on the
message level of data visualization.
DECEPTION AND DECEPTIVE VISUALIZATIONS
In data visualization, the classic adage “do not lie to your
users" [34] is fading fast. Designers and communicators are
utilizing the power of data visualization to augment their mes-
sages, which in turn has an effect on how users perceive the
original message. While we are not aware of any study which
discusses the trade-off between how much augmentation is
good enough, or what is the maximum extent to which a com-
municator may augment through additional information with-
out distorting the underlying message, there are strong pro-
ponents and opponents of designs that we would
classify as deceptive.
Webster’s Dictionary describes deception as “the act of mak-
ing someone believe something that is not true" [5], which
implies that deception necessarily involves an intent to mis-
lead. However, as Adar et al. [6] maintain, “deceptive(ness)
does not require intent", i.e., one may induce deception with-
out lying. There are numerous examples of data visualiza-
tions and infographics which are deceptive, but which may
not have involved an intent by the communicator to deceive.
For example, while omitting outliers and other data points
to show a best-fit regression line may reflect a statistician's
intent to deceive, the same poor practice may
also result from novice use of statistics. Similarly, not follow-
ing best practices of visualization design, such as truncating
the axes, may lead to a deceptive visualization either with or
without intent, depending on the level of sophistication of the
creator.
To shed some more light on the origins of these problems
in one particular field we conducted interviews with experts
working at the intersection of data and human rights. These
interviews suggested that while statistical literacy remains
relatively low within the human rights advocacy world, the
power of data-driven advocacy exerts an increasing pull to use
data visualization. Human rights experts readily agree that
there is little tailored evidence to guide decisions about how
best to design data visualization to support human rights mes-
sages. Further, human rights organizations frequently lack
sufficient personnel with specialist training in data analysis
and visualization. This mismatch between strong incentives
to use visualization and gaps in capacity can lead to the ac-
cidental construction of distorted visualizations, which can in
turn mislead target audiences.
Based on these findings and the definitions introduced in the
previous work [32], we put forth a working definition of de-
ceptive visualization as “a graphical depiction of informa-
tion, designed with or without an intent to deceive, that
may create a belief about the message and/or its compo-
nents, which varies from the actual message". Instead of
focusing on the intent of the creator, we are interested in an-
alyzing aspects of deceptive visualization such as distortion
techniques, deception severity, etc., and thus do not explore
the boundary of intentional vs. unintentional deceptiveness.
In visualization, deception may occur at two levels - the chart
level, where the user reads the chart incorrectly, and/or pro-
cesses an incorrect estimate of the data presented; and the
message level, where users interpret the message incorrectly.
Chart level deceptions occur at the visual encoding level, and
are mostly modulated by the ability or inability of the user to
read the chart correctly. The user’s visual literacy, ability, and
chart familiarity play an important role in neutralizing the de-
ceptive effect of the visualization. Message level deceptions
occur at the message interpretation level, and may lead to creat-
ing false beliefs about the message and/or its components.
In the real world, visualizations are usually accompanied by
a message; hence, it is interesting to study how visualizations
lead to a message level deception. As this is the first exper-
imental study that explores visualization deceptiveness, we
chose to narrow our scope to focus only on message level de-
ceptions. There are various ways to visually deceive view-
ers even at the message level, e.g., presentation of delib-
erate misinformation, distractions, information overload, or
through deceptive techniques applied on the level of visual
encoding. In our study, we decided to start from the com-
mon types of graphical distortions without including deceiv-
ing effects that do not stem from choices made at the vi-
sual encoding level (e.g., we excluded deliberate data ma-
nipulation). Starting with complete data, we identified two
broad classes of message level deception - Message Exagger-
ation/Understatement, and Message Reversal, as described
below.
Message Exaggeration/Understatement
This kind of deception happens when the fact itself is not dis-
torted, but the extent of the presented fact is tweaked, i.e.,
the fact is exaggerated or understated. For example, if a chart
compares two quantities - A and B, where A is bigger than B -
the users are presented with the fact that A is bigger than B,
but the extent of the difference is exaggerated. This type of
deception affects the "How much" type of questions, such as
"How much bigger do you think quantity A is than quantity B?"
Message Reversal
This type of deception happens when a visualization encour-
ages users to interpret the fact in the message incorrectly. For
example, if a chart compares two quantities - A and B, where
A is bigger than B, the users perceive the message as A is
smaller than B. Thus, users perceive the incorrect message
due to a distorted visualization, even though the actual data is
presented. This type of deception affects the “What" type of
questions, such as “What does the chart show?".
STUDY RATIONALE AND METHODS
Given the limited empirical literature on deceptive visualiza-
tion, we set out here some of the design rationale that made
up the groundwork of this study. The main experiment design
choices that are necessary for this kind of study include the
selection of distortion and affected visualization techniques,
the mechanism to create deceptive visualizations to study the
effect, a way to detect/measure the deceptive effect, and a
measure of additional attributes that might impact users’ re-
sponses.
Selecting Distortion and Visualization Techniques
In "How to Lie with Charts", Jones says: "Almost all visualiza-
tions are prone to distortion or lie" [17]. However, distortion
techniques are visualization specific: one type of distortion
technique may not affect all visualizations. For some visu-
alizations, colors play a role in deception; for others, trun-
cated axes or missing labels add the deceiving layer. This
research focuses on some of the visualization techniques that
are widely used. Our focus on those techniques that reach
broad audiences allowed us to exclude complex visualization
techniques such as heatmaps or network maps in favor of fo-
cusing on simpler charts used extensively by journalists, hu-
man rights activists, and policy makers. We chose four dis-
tortion techniques: truncated axis, area as quantity, aspect
ratio, and inverted axis that we identified as having been used
in these realms.
Creating Treatments
Based on the distortion and visualization techniques, we cre-
ated two sets of treatments or visualizations, one control and
one deceptive, for each of the distortion techniques men-
tioned above. In the following, we present the illustration of
treatments we created for each of the distortion techniques.
Truncated Axis
In the truncated axis visual distortion, one or more of the axes
of a chart are altered by changing the minimum and maximum
values presented on the scale, as shown in Figure 2. Such
alteration of the axis range leads to exaggeration or under-
statement of the quantities presented, thus directly affecting
the user’s response to the “how much" type of questions. For
example, in Figure 2, users' responses to the question "How
much bigger do you think Y is than X?" are likely to be depen-
dent on the type of chart (control/deceptive) presented. It is
important to note that all chart types that are axis-based are
susceptible to this type of distortion. In this research, we use
this distortion technique with bar-charts only.
Figure 2. Illustration showing Truncated Axis distortion, which leads to
message exaggeration/understatement type of deception.
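To make the mechanism concrete, the sketch below renders the same two bars once with a zero-based y-axis and once with a truncated one. It is only an illustration of the technique; the values (80 and 85) are hypothetical and are not the data used in the study.

```python
# Minimal sketch of the truncated-axis distortion (hypothetical values).
import matplotlib.pyplot as plt

labels, values = ["X", "Y"], [80, 85]  # Y is only about 6% larger than X

fig, (ax_control, ax_deceptive) = plt.subplots(1, 2, figsize=(8, 3))

# Control: the axis starts at zero, so bar heights stay proportional to the data.
ax_control.bar(labels, values)
ax_control.set_ylim(0, 100)
ax_control.set_title("Control (baseline at 0)")

# Deceptive: truncating the axis range exaggerates the visual difference.
ax_deceptive.bar(labels, values)
ax_deceptive.set_ylim(78, 86)
ax_deceptive.set_title("Deceptive (truncated axis)")

plt.tight_layout()
plt.show()
```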
Area as Quantity
Encoding quantitative data with size has faced serious criti-
cisms in the visualization community, and is a process that
requires careful mapping of data to graphics. Although no
precise guidelines are available about how to map data values to
graphical area, it is generally believed that a one-to-one mapping be-
tween the data and the graphical area is least prone to distor-
tion. This is also one of the six graphical integrity principles
suggested by Tufte [38]. However, it is not uncommon to see
data mapped to a single graphical variable, such as the radius, that af-
fects the rendered area quadratically. This induces
the message exaggeration/understatement type of deception.
For example, from Figure 3 one may conclude that Y is "a
lot" bigger than X, when the quantity is mapped to the ra-
dius of the circle (in the deceptive visualization). Other area-
based charts are also prone to this type of distortion. When
the quantity is mapped to the area, as in the control condition,
the visualization is less susceptible to deception.
Figure 3. Illustration showing Area as Quantity distortion, which leads
to message exaggeration/understatement type of deception.
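As a numeric illustration of the distortion described above, the short sketch below uses made-up values in which Y is twice X, and compares the rendered area ratio when the value is mapped to the radius versus directly to the area.

```python
# Hypothetical data: Y is twice X.
import math

x_val, y_val = 1.0, 2.0

# Deceptive: the value is mapped to the radius, so the drawn area scales
# with the square of the value.
area_ratio_radius_encoding = (math.pi * y_val ** 2) / (math.pi * x_val ** 2)

# Control: the value is mapped to the area itself (radius = sqrt(value / pi)),
# so the drawn area stays proportional to the value.
area_ratio_area_encoding = y_val / x_val

print(area_ratio_radius_encoding)  # 4.0 -- a 2x difference looks like 4x
print(area_ratio_area_encoding)    # 2.0 -- faithful to the data
```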
Aspect Ratio
This type of distortion primarily affects line-charts as it di-
rectly impacts the rate of increase or decrease of one quantity
over another. While one may argue that aspect ratio may im-
pact other visualizations, such as bar-charts, where the width-
to-height ratio of the bars may create a similar effect, we ap-
ply this distortion only to line-charts because it appears most
frequently with that chart type. Another way of looking at
this distortion is through the angle of inclination/declination
of the lines, which is affected by changes in the
aspect ratio. As shown in Figure 4, by widening the scale on
one of the axes, the angle can be distorted. Hence, the rate
at which the quantity appears to increase seems slower in the
deceptive visualization condition as compared to the control
condition. This type of distortion also leads to message exag-
geration/understatement.
Figure 4. Illustration showing altered angle of the trend responsible for
"rate of increase/decrease" due to Aspect Ratio distortion, which leads
to message exaggeration/understatement type of deception.
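The sketch below illustrates, with made-up plot dimensions and data, how the on-screen angle of the same trend changes when the aspect ratio of the plot area is altered; the function and the specific numbers are ours, purely for illustration.

```python
import math

# Hypothetical data: the quantity rises from 10 to 20 over 10 time steps.
dx_data, dy_data = 10.0, 10.0  # change in x and y, in data units

def apparent_angle_deg(plot_width_px, plot_height_px, x_range, y_range):
    """On-screen angle of the trend after mapping data units to pixels."""
    dx_px = dx_data / x_range * plot_width_px
    dy_px = dy_data / y_range * plot_height_px
    return math.degrees(math.atan2(dy_px, dx_px))

# Control: a roughly square plot whose axes just cover the data.
print(apparent_angle_deg(400, 400, x_range=10, y_range=10))  # 45 degrees

# Deceptive: the same data in a wide, short plot with a stretched y-range
# looks almost flat (about 7 degrees).
print(apparent_angle_deg(800, 200, x_range=10, y_range=20))
```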
Inverted Axis
Human beings relate directions with trends, such as: upwards
- increase, downwards - decrease, right - front/progress, left -
back/receding [17]. This directional interpretation makes in-
verted axis one of the most common distortion techniques that
leads to reversal of the message, and makes the users suscepti-
ble to drawing false conclusions. In other words, here decep-
tion occurs due to reversal of the message instead of exagger-
ation or understatement. This type of distortion also affects
almost all visualization techniques that are axis-dependent.
Figure 5 shows two chart conditions (control and deceptive),
both depicting an increase in the quantity on the y-axis over the
quantity on the x-axis; however, the latter gives the impression
that the quantity is decreasing because of the inverted y-axis.
Figure 5. Illustration showing Inverted Axis distortion, which leads to
message reversal type of deception.
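A minimal sketch of this distortion, using hypothetical yearly values rather than the study's data, is shown below; the only difference between the two panels is a call that flips the y-axis.

```python
# Minimal sketch of the inverted-axis distortion (hypothetical yearly values).
import matplotlib.pyplot as plt

years = [1995, 2000, 2005, 2010]
values = [10, 20, 35, 50]  # the quantity actually increases over time

fig, (ax_control, ax_deceptive) = plt.subplots(1, 2, figsize=(8, 3))

# Control: conventional y-axis, the increase reads as an upward trend.
ax_control.fill_between(years, values)
ax_control.set_title("Control")

# Deceptive: inverting the y-axis makes the same increase look like a decline.
ax_deceptive.fill_between(years, values)
ax_deceptive.invert_yaxis()
ax_deceptive.set_title("Deceptive (inverted y-axis)")

plt.tight_layout()
plt.show()
```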
Detecting Deception
In the scope of this research, we are only concerned with the
deception that occurs through visual misrepresentation of in-
formation, which has an effect at the message level. As men-
tioned earlier, we categorized two types of message level de-
ceptions - Message Exaggeration/Understatement, and Mes-
sage Reversal, as follows:
Message Exaggeration/Understatement
These questions were created to detect whether the par-
ticipants received the message in its original or exagger-
ated/understated form. As mentioned earlier, in this type of
deception the fact of the message is not distorted, but is exag-
gerated. We detect this exaggeration/understatement by ask-
ing the "how much" type of question with a context, such as
"How much better do you think the condition of safe drinking
water access in Silvatown is as compared to that in Wilow-
town?", where the users would reply on a 5-item Likert scale
ranging from slightly better to substantially better. As this
type of deception is unidirectional, i.e., it is extremely rare
that the user would find the smaller quantity bigger, the question
and scale were designed to capture the single-tailed effect. To
detect and study the extent of exaggeration/understatement,
we compared the results of the correctly represented and mis-
represented charts through a between-subject analysis.
Message Reversal
These questions were created to detect whether the users re-
ceived the fact in the message correctly or not. Unlike mes-
sage exaggeration where the fact in the message is not dis-
torted, here the deceit occurs at the message level, distort-
ing it. Instead of asking the “how much" question, we ask a
multiple-choice “what" type of question, such as “What can
you say about the access to safe drinking water by the ma-
jority ethnic group in Silvatown, between 1995 and 2010?".
We provide three answer choices to the participants, includ-
ing two interpretations of the message (one correct, and one
incorrect, which the visual distortion would lead to), and "un-
certain" as the third answer choice. A comparison of accu-
racy and between-subject analysis of the response gives an
estimate of whether or not there is an effect of distortion.
Measure of Individual Differences
One of the most important aspects of studying visualization
impact on human perception is understanding the ability of
the target population to read and process the information pre-
sented, and form an interpretation of the underlying message.
We included this aspect by using a two-stage process to ana-
lyze participants' familiarity with basic charts, and their abil-
ity to process the information. To quantify familiarity, we
used a 5-point Likert scale question: "How familiar are you with
the chart shown below?", with responses ranging from "not
familiar at all" to “very familiar", presuming that higher chart
familiarity means better ability of the participant to detect de-
ception.
Another important variable to determine individual differ-
ences is “Need for Cognition". Need for cognition is a per-
sonality trait studied in social psychology to characterize the
extent to which individuals are inclined towards effortful cog-
nitive activities. This factor goes hand-in-hand with visual
ability in profiling an individual based on his cognitive abil-
ities and desire to process graphical information. Petty and
Cacioppo define it as "the tendency to engage in and enjoy
effortful cognitive endeavors". In our study, we use their short
18-item test [29], which is one of the most popular scales to
measure need for cognition. Apart from these, participants’
demographic information, such as age, gender and educa-
tion, was recorded; we primarily analyzed education, as
higher education may lead to higher visual literacy and ability
to reason with statistics and graphs.
USER STUDY
We started our analysis by conducting two pilot studies. The
first study was aimed at identifying whether or not there is
a noticeable effect of distortion techniques on participants’
responses, leading to deception. We used a real world decep-
tive visualization example (Figure 1[a]) and asked questions
to detect any misinterpretation of the presented information.
We found that such distorted visualizations can deceive users.
Figure 6. Various stages of the experiment. Stages 1, 2, 3 and 5 correspond to the individual differences test, while 4 corresponds to the deception test.
In the second study, we wanted to understand whether an ar-
tificially generated scenario and data can lead to a similar de-
ceptive effect under the same distortion techniques. Similar
to the first pilot study, the participants were deceived, how-
ever, we did not find any noticeable difference between the
responses when real or artificial scenario/data were used. The
final user study was designed based on the findings of these
two pilot studies.
In the final study, we conducted two types of experiments tar-
geted to assess each of the two deception types - Message
Exaggeration/Understatement, and Message Reversal. In the
first type of experiment, we conducted tests on three decep-
tive visualization cases, one for each of the three distortion
techniques - "truncated axis", "aspect ratio", and "area", in-
dividually. For the second type of experiment, we conducted
tests on the "inverted axis" type of distortion technique. All
the experiments were performed in a crowd-based setting
with the primary goal to find an effect of distortion on users’
responses, and additionally capture other interesting trends.
The following section describes the experiments in detail.
Experiment Setup
The experiment, consisting of four individual visual distor-
tions (three for the message exaggeration/understatement type of
deception, one for the message reversal type of deception),
was conducted using Amazon Mechanical Turk (MT). We
chose MT as our primary experimental platform because it al-
lowed us to conduct parallel studies on a diverse subject pool
in an iterative fashion, allowing us to quickly test and refine
our hypotheses. While conducting behavioral research based
on self-reported measures on a crowdsourced platform may
be considered problematic, several researchers have demon-
strated the viability of MT as a reliable data collection plat-
form [18, 26]. For instance, Paolacci et al. [28] compared re-
sults of classic experiments in judgment and decision-making
using traditional and crowdsourcing methods and found that
participants behave consistently.
For our final user study, we recruited 330 unique partici-
pants from MT who self-reported a United States location and
whose previous task approval rate was equal to or exceeded
99%. Each experiment took 5-10 minutes and the participants
were paid US $0.30 for participation.
Procedure
To take part in the study, the participants clicked on the link
provided in the description of our task on MT. The link redi-
rected the participants to a webpage hosted on our internal
servers, where they undertook various stages of the experi-
ment, as shown in Figure 6. Participants were provided a
consent form with the description of the study, the data we
would collect, and the tasks they would need to perform.
Upon agreeing to the terms, we presented a personal infor-
mation form (stage 1) to collect the basic demographic infor-
mation about the participants, such as age, gender and educa-
tion level. The next page presented a visual ability test (stage
2). The test was designed to include visual processing tasks
[23, 20]. Next, the participants were presented a chart famil-
iarity test (stage 3) where we asked how familiar they were
with the specific types of charts: bar-charts, line-charts, and
area-charts. We added a simple chart example to each of the
familiarity questions. The participants provided their answer
on a 5 point Likert scale, ranging from 1 (not familiar at all)
to 5 (very familiar). Once the participants clicked "Proceed",
one of the 6 treatments was assigned to them - control or de-
ceptive from one of the 3 distortion types - in the message
exaggeration/understatement focused experiment, or one of
the 2 treatments - control or deceptive from the inverted axis
distortion type - in the message reversal focused experiment.
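As a rough sketch of the assignment step just described (the paper does not give its implementation, so the condition labels and the uniform random choice are our assumptions), the logic can be expressed as follows.

```python
import random

# Hypothetical condition labels; the paper does not specify its assignment code.
exaggeration_conditions = [
    (distortion, version)
    for distortion in ("truncated_axis", "area_as_quantity", "aspect_ratio")
    for version in ("control", "deceptive")
]  # 6 treatments for the exaggeration/understatement experiment

reversal_conditions = [
    ("inverted_axis", "control"),
    ("inverted_axis", "deceptive"),
]  # 2 treatments for the message reversal experiment

def assign_condition(experiment):
    """Uniformly pick one treatment for a newly arriving participant."""
    pool = exaggeration_conditions if experiment == "exaggeration" else reversal_conditions
    return random.choice(pool)

print(assign_condition("exaggeration"))
print(assign_condition("reversal"))
```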
On the treatment page (stage 4), we presented a brief
overview about the chart, the actual chart, a message level
deception test question, and an attention check question.
The question to detect message level deception was designed
to ensure that the participant did not have to perform any es-
timation tasks, avoiding the graph comprehension problem.
For the bar-charts and bubble-charts, we asked the partici-
pants to compare the quantities presented, by asking “how
much better do you think is [quantity A] as compared to
[quantity B] in terms of [the context]". Similarly, for the
line-charts, we asked the participants to detect improvement
of a quantity over time, by asking “how much do you think
[quantity A] has improved in terms of [the context] between
[time period]". For the line-area-charts of the message rever-
sal type of deception, we asked “what can you say about the
effect of [quantity A] over [quantity B]?". It is important to
note that in each of the presented treatments, the actual num-
bers/data were presented on the chart as we were interested
in detecting deception due to visual representation even in
the presence of accurate data. Based on the information pre-
sented in the chart, we asked an attention check question to
filter out random clickers. Finally, at stage 5, the participants
responded to a simplified need for cognition scale. Once the
study was successfully completed, the participants were paid
through Amazon Payments. We later used the responses col-
lected at stages 1, 2, 3 and 5 to determine individual differ-
ences, and those collected at stage 4 to detect deception.
In our study design, distortion type is the main independent
variable, whereas user response to the deception test question
is the only dependent variable we take into account for the
statistical analysis purposes. For the individual differences
test, we consider three factors - chart familiarity, visual abil-
ity, and need for cognition, which may play a confounding
role in modulating the user response. Other variables derived
from user response are response accuracy - the percentage
of correct responses - used to detect message reversal type of
deception, and mean/average user response - the average of
user response for the given condition.
RESULTS
We conducted a series of crowdsourced user studies to ex-
plore deceptive visualizations, and to determine how severe
different distortion techniques are in terms of deceiving the
user. We studied two types of deception: Message Exagger-
ation/Understatement and Message Reversal by applying rel-
evant distortion techniques on commonly used visualizations
and asking “How much" and “What" types of questions to de-
tect the two types of deception, respectively. As the two stud-
ies are dissimilar in various aspects, we analyzed the data
separately. In this section, we first present findings on the
message exaggeration/understatement type of deception, and
later on the message reversal type of deception.
Message Exaggeration/Understatement
We chose three distortion techniques - Truncated Axis (bar
chart), Area as Quantity (bubble chart), and Aspect Ratio
(line chart) - to study this type of deception. For each dis-
tortion technique, we created two treatments: control and de-
ceptive. The deceptive visualization examples from the actual
study are shown in Figure 7[a,b,c].
We recruited 250 unique participants for this study and as-
signed one of the six treatments to each of them. As a step
to filter out noise and retain data quality, we excluded
those participants who did not provide a response to the
attention check question or answered it incorrectly. The fi-
nal distribution of the participants by distortion technique and
treatment is shown in Table 1.
Treatment Truncated Axis Area as Quantity Aspect Ratio
Control 37 40 38
Deceptive 43 40 42
Table 1. Distribution of participants (who answered all the attention
check questions correctly) by treatment (Control, Deceptive) and distor-
tion technique (Truncated Axis, Area as Quantity, Aspect Ratio).
Effect of Distortion on Response
Across all chart types, we found a significant effect of dis-
tortion on participants’ responses. We found that participants
who saw the deceptive visualization perceived the underlying
message in its exaggerated form, and responded higher on the
5-Likert scale when asked the “How much" question. Table 2
shows the average participant response when exposed to one
of the 6 treatments, and Figure 8 shows the distribution with
95% confidence interval.
Figure 8. Average participant response with 95% confidence interval,
when exposed to a treatment.
Treatment Distortion Technique Average Response
Line (Deceptive) Aspect Ratio 3.19, 95% CI [2.76, 3.62]
Line (Control) Aspect Ratio 1.39, 95% CI [1.23, 1.55]
Bubble (Deceptive) Area as Quantity 2.71, 95% CI [2.27, 3.15]
Bubble (Control) Area as Quantity 1.71, 95% CI [1.45, 1.98]
Bar (Deceptive) Truncated Axis 2.77, 95% CI [2.26, 3.28]
Bar (Control) Truncated Axis 1.45, 95% CI [1.27, 1.62]
Table 2. Average participant response (with 95% CI) to the "how much"
question, by treatment (Control, Deceptive) and distortion technique
(Truncated Axis, Area as Quantity, Aspect Ratio). (minimum = 1, maxi-
mum=5)
We ran a Mann-Whitney’s U test (one-tailed) to evaluate the
difference in the responses of our 5-item Likert scale ques-
tion, for each of the treatment categories (Line, Bubble, Bar).
We found a significant effect of distortion for all three treat-
ment categories. For the treatments with bar-charts, the "trun-
cated axis" distortion led to difference in the responses be-
tween the control condition and the deceptive visualization
condition. We found the difference in participants’ responses
highly significant (p <0.001) for control/deceptive conditions
across all the distortion types. For the "aspect ratio" distortion
type, the mean ranks of Line (Control) and Line (Deceptive)
were 24.42 and 55.04, respectively; U = 1409, Z = 5.88, p
<0.0001, r = 0.66. Similarly, for the treatments with bubble-
charts where the "area as quantity" distortion plays a role,
the mean ranks of Bubble (Control) and Bubble (Deceptive)
were 32.47 and 48.52, respectively; U = 1121, Z = 3.08, p
= 0.0007, r = 0.34. For the treatments that involved bar-charts
(affected by the "truncated axis" distortion), the mean ranks
of Bar (Control) and Bar (Deceptive) were 31.08 and 48.60,
respectively; U = 1144, Z = 3.36, p = 0.0003, r = 0.37.
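For readers who wish to run a comparable analysis, the snippet below shows a one-tailed Mann-Whitney U test on two groups of Likert responses using scipy; the response lists are invented for illustration and are not the study data.

```python
# Illustrative between-subject comparison in the style of the analysis above:
# a one-tailed Mann-Whitney U test on 5-point Likert responses.
from scipy.stats import mannwhitneyu

control_responses = [1, 1, 2, 1, 2, 1, 1, 2, 1, 1]      # made-up data
deceptive_responses = [3, 4, 2, 3, 5, 3, 4, 3, 2, 4]    # made-up data

# alternative="greater": deceptive responses are expected to be higher.
u_stat, p_value = mannwhitneyu(deceptive_responses, control_responses,
                               alternative="greater")
print(f"U = {u_stat}, one-tailed p = {p_value:.4f}")
```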
Message Reversal
We chose a single yet common distortion technique - In-
verted Axis - to study this type of deception. We applied
this distortion technique on a line-area-chart where the rep-
resentation distorted the underlying message. This led the
participants to interpret the fact ("what" type of question)
presented in the message incorrectly, unlike the previous decep-
tion type where participants would misinterpret only the ex-
tent ("how much" type of question) of the presented fact. In
other words, this distortion technique facilitates an estimation
of accuracy in participants' responses, hence it can be tested
in a between-subject design. We created only 2 treatments
(control/deceptive) to study this distortion. Each treatment
is accompanied by a deception test question used to calculate response
accuracy - the variable which we later used for all analysis
purposes. The deceptive visualization example from the ac-
tual study is shown in Figure 7[d].
Figure 7. Deceptive visualization examples with corresponding distortion technique used in the study. Examples (a), (b) and (c) are used for the message
exaggeration/understatement type of deception, and (d) for message reversal.
We recruited 80 unique participants for this study and ran-
domly assigned one of the two treatments (control/deceptive)
to them. We applied the same filtration mechanism to elimi-
nate noise from the data. Out of the 80 participants, 78 passed
our filtration criteria and were included in the dataset for fur-
ther analysis. 38 of those participants were shown the decep-
tive visualization and 40 were shown the control condition.
Effect of Distortion on Response
Based on the information presented in Figure 7[d], we asked
the participants “what can you say about the condition of ac-
cess to safe drinking water by majority ethnic group over
time?", and provided a correct interpretation (“improved"),
an incorrect interpretation (“declined") and an uncertain (“I
do not know") answer choice.
Out of the 38 selected participants who saw the deceptive
visualization, 30 responded incorrectly, 7 correctly and one
chose the uncertain response. For the 40 participants who saw
the control condition, 39 responded correctly, 1 responded
incorrectly and no participant reported uncertainty. The re-
sponse distribution by treatment type is shown in Table 3.
Treatment   N   Correct       Incorrect     Uncertain
Control     40  39 (97.50%)   1 (2.50%)     0
Deceptive   38  7 (18.42%)    30 (78.95%)   1 (2.63%)
Table 3. Response distribution (Correct, Incorrect, Uncertain) by treat-
ment type for the Inverted Axis distortion.
To test for statistical significance of the differences in re-
sponse we use the Freeman-Halton extension of Fisher’s Ex-
act Test, testing the null hypothesis that distortion has no ef-
fect on the participants’ response. The findings were statis-
tically highly significant (p <0.0001), showing the effect of
distortion on participants’ response, hence, rejecting the null
hypothesis.
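The Freeman-Halton extension of Fisher's exact test is not available in scipy, so an exact replication would require, for example, an R routine; as a rough sanity check on the Table 3 counts, a chi-square test of independence can be run as below (it is only an approximate substitute, and less appropriate here given the nearly empty "uncertain" column).

```python
# Quick approximate check of the response-by-treatment association in Table 3.
from scipy.stats import chi2_contingency

#                correct  incorrect  uncertain
table = [[39, 1, 0],    # control
         [7, 30, 1]]    # deceptive

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.6f}")
```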
Attribute            Level  Line chart          Bubble chart        Bar chart
Education            Low    3.93, [3.29, 4.56]  3.07, [2.42, 3.72]  2.73, [1.91, 3.54]
                     High   2.85, [2.36, 3.33]  2.50, [1.93, 3.05]  2.73, [2.20, 3.24]
Chart Familiarity    Low    2.72, [1.91, 3.53]  2.60, [1.73, 4.49]  4.34, [3.98, 4.69]
                     High   3.41, [2.96, 3.89]  2.71, [2.22, 3.21]  2.42, [1.93, 2.94]
Visual Ability       Low    2.88, [1.95, 3.79]  2.72, [1.75, 3.69]  4.07, [3.36, 4.79]
                     High   3.33, [2.86, 3.79]  2.68, [2.21, 3.18]  2.23, [1.73, 2.74]
Need for Cognition   Low    3.40, [2.63, 4.20]  3.10, [2.08, 4.15]  3.22, [2.10, 4.32]
                     High   3.18, [2.71, 3.68]  2.56, [2.10, 3.03]  2.67, [2.12, 3.20]
Table 4. Average participant response (with 95% CI) across various in-
dividual differences attributes, grouped by the deceptive chart type.
Analyzing Individual Differences
As explained in the Measure of Individual Differences section, we collected relevant participant
data (stages 1, 2, 3 and 5 in Figure 6) to identify individual dif-
ferences. Our main goal was to see whether some of these
personal attributes have an impact on how susceptible a per-
son is to the deceptive effect. Here we provide the results of
our analysis on the four main individual difference proxies we
used: education level (collected as part of the demographic
data), chart familiarity, visual ability, and need for cognition.
Figure 9. Comparison of average user response across deceptive charts,
by the low/high level of individual differences factors.
Individual differences are studied largely in behavioral re-
search where personal attributes of the user have an effect
on his response [25]. Following this line of research, we con-
ducted a quantitative analysis on the effect of individual dif-
ferences on participants’ responses, and also correlation be-
tween each of these factors to test for their independence. To
facilitate our analysis and the communication of the main re-
sults, we binned the values obtained from individual differ-
ences into "high" and "low" values. For example, out of the
5 education levels (primary, secondary, undergraduate, grad-
uate, doctorate), we regarded "primary" and "secondary" as
"low education", and the rest as "high education". Partici-
pants who correctly responded to more than three questions
on the visual ability test were tagged as "high visual ability",
and the remaining ones as "low visual ability" participants.
Similarly, those who self-reported a chart familiarity of 4 or
5 were tagged as "high chart familiarity", and those with 3
or below as "low chart familiarity" participants. Participants
with overall "Need for Cognition" (range = [18, 90]) less than
54 were tagged as "low need for cognition", and greater or
equal to 54 were tagged as "high need for cognition".
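The binning rules described in this paragraph can be summarized in code as follows; the function names are ours, and the thresholds are taken directly from the text above.

```python
# "Low"/"high" binning of the individual-difference measures described above.

def education_level(category: str) -> str:
    # primary/secondary -> low; undergraduate/graduate/doctorate -> high
    return "low" if category in ("primary", "secondary") else "high"

def visual_ability_level(num_correct: int) -> str:
    # more than three correct answers on the visual ability test -> high
    return "high" if num_correct > 3 else "low"

def chart_familiarity_level(rating: int) -> str:
    # self-reported familiarity of 4 or 5 on the 5-point scale -> high
    return "high" if rating >= 4 else "low"

def need_for_cognition_level(total_score: int) -> str:
    # overall score ranges from 18 to 90; 54 or above -> high
    return "high" if total_score >= 54 else "low"

print(education_level("graduate"),
      visual_ability_level(4),
      chart_familiarity_level(3),
      need_for_cognition_level(40))   # high high low low
```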
The results are presented in Figure 9. Table 4 presents all
the effect sizes for individual differences with confidence in-
tervals. Each chart in Figure 9 represents the effect of one
individual difference factor (education, visual ability, chart
familiarity and need for cognition from left to right). Within
each chart, a line represents the effect size (that is the decep-
tive effect) obtained when the individual difference factor is
low (left side) and high (right side). Line color represents the
distortion effects we tested: line, bubble, bar.
As one can see from the figure, no clear trends exist between
individual differences and participants’ responses, across all
chart types. The only major trend we observed was the ef-
fect of visual ability and chart familiarity on deception when
a bar chart is shown (yellow line). Education also seemed
to have an effect when a line chart was used, although the
confidence intervals overlap. For the other charts, the effect
was not clear as the confidence intervals are wide and over-
lap substantially, and hence are not shown in the figure. As
the two charts provide very similar results, we also performed
a correlation analysis between visual ability and chart famil-
iarity, which revealed low correlation (p = 0.171) between the
two factors. While education and need for cognition seem to
follow a similar trend across chart type, the confidence inter-
vals overlap substantially, making it hard to provide strong
conclusions on these effects. We also performed a correla-
tion analysis between all the individual difference factors and
none of them showed strong correlation. Despite our expec-
tations, the analysis of individual differences did not provide
definite conclusions.
DISCUSSION
The main goal of this study was to investigate whether and to
what extent people are deceived by a number of well-known
distortion techniques employed in graphical presentation of
data and statistics. We also set out to see whether this effect
is modulated by a set of individual differences we selected for
the study.
The results confirm that these techniques do lead to major
misinterpretation from the reader’s side and that the effects
are also rather large. When asked to compare two entities or
variables by answering on a scale between 1 and 5, the dis-
torted charts led to responses between 58.5% and 129.5%
bigger than the control condition. Out of the three charts
that use a message exaggeration/understatement technique,
the line chart is the one with the biggest effect, followed by
the bars and then the bubble chart, suggesting that this type of chart
may have a more pronounced effect. The same is true for the
conditions covering message reversal, in which the deceptive
condition led to only 18.4% correct responses whereas the con-
trol condition led to 97.5% correct responses.
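The percentage range quoted above follows directly from the mean responses reported in Table 2; a quick check:

```python
# Relative increase of the mean deceptive response over the mean control
# response, using the values reported in Table 2.
means = {
    "line (aspect ratio)":   (1.39, 3.19),
    "bubble (area as qty)":  (1.71, 2.71),
    "bar (truncated axis)":  (1.45, 2.77),
}

for chart, (control, deceptive) in means.items():
    increase = (deceptive - control) / control * 100
    print(f"{chart}: +{increase:.1f}%")  # line ~129.5%, bubble ~58.5%, bar ~91.0%
```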
Our analysis of the individual differences did not provide any
conclusive information. However, some of the individual dif-
ferences attributes seemed to have an effect for a particular
type of chart. Further research is needed to disentangle the
relationship between deception technique, chart type and in-
dividual differences. More precisely, it is necessary to un-
derstand if the effects depend on the chart type or only on
the distortion technique used. Another interesting direction is to
take into account the general graphical literacy of the partic-
ipants, and see whether it correlates with the response. The
visualization literacy test developed by Boy et al. [11] can be
employed for this purpose. While our study includes an ele-
ment that can be considered a proxy for literacy, e.g., chart
familiarity, an objective measure of literacy may lead to more
interesting results.
It is also important to discuss the main implications of this
study. While deception through visualization has been known
and discussed for a long time, our study provides a solid foun-
dation for such discussions. It is important to advance the
science of visualization and to help visualization practition-
ers with visualization principles and guidelines. It is also
necessary to devise strategies to cope with the problem we
identified and discussed in this study. It is necessary to edu-
cate readers and increase their ability to spot problematic and
potentially deceiving information.
Finally, we put forth a series of questions regarding real world
visualizations. Are they designed intentionally to deceive?
How can we distinguish between the intentionally and un-
intentionally created misrepresentations? Analyzing the root
causes that lead to the publication of such charts would
lead to a better understanding of the phenomenon and hope-
fully to better solutions to the problem.
CONCLUSION
We presented in this study a first step in the empirical analysis of
deceptive visualizations. We started with a formal definition
of deceptive visualization, outlining what deception means in
the context of data visualization, what its types are, and what
kind of distortion techniques are used to induce certain types
of deception. We conducted a series of user studies to pro-
pose, test and establish the hypothesis that visual distortion
indeed has an effect on participants’ responses. Finally, we
examined the effect of individual differences on participants’
responses, followed by a discussion of how severe various
distortion techniques are based on the quantitative analysis of
the collected responses. We believe that future research will
benefit from our work as it provides a foundation for further
exploration of the space of deceptive visualizations.
ACKNOWLEDGMENTS
This work was partially supported by NSF Award IIS-
1149745 and NYU-Poly Seed Fund Grant for Collaborative
Research.
REFERENCES
1. Business Insider. http://read.bi/1esYzVd/, 2014.
2. Flickr. https://www.flickr.com/photos/oversight/
8475012926/in/photostream/, 2014.
3. Media Matters.
http://mediamatters.org/research/2012/10/01/
a-history-of-dishonest-fox-charts/190225/, 2014.
4. Visualising Advocacy.
https://visualisingadvocacy.org/blog/
disinformation-visualization-how-lie-datavis/,
2014.
5. Webster’s Dictionary.
http://www.merriam-webster.com/dictionary/deception/, 2014.
6. Adar, E., Tan, D. S., and Teevan, J. Benevolent
deception in human computer interaction. In
Proceedings of CHI, ACM (2013), 1863–1872.
7. Bateman, S., Mandryk, R. L., Gutwin, C., Genest, A.,
McDine, D., and Brooks, C. Useful junk?: the effects of
visual embellishment on comprehension and
memorability of charts. In Proceedings of CHI, ACM
(2010), 2573–2582.
8. Bertin, J. Semiology of graphics: diagrams, networks,
maps. esri press, 1983.
9. Borkin, M. A., Vo, A. A., Bylinskii, Z., Isola, P.,
Sunkavalli, S., Oliva, A., and Pfister, H. What makes a
visualization memorable? IEEE Transactions on
Visualization and Computer Graphics (Proceedings of
InfoVis 2013) (2013).
10. Borland, D., and Taylor II, R. M. Rainbow color map
(still) considered harmful. IEEE computer graphics and
applications 27, 2 (2007), 14–17.
11. Boy, J., Rensink, R., Bertini, E., and Fekete, J. A
principled way of assessing visualization literacy.
12. Brewer, C. A. Spectral schemes: Controversial color use
on maps. Cartography and Geographic Information
Systems 24, 4 (1997), 203–220.
13. Cleveland, W. S., and McGill, R. Graphical perception:
Theory, experimentation, and application to the
development of graphical methods. Journal of the
American Statistical Association 79, 387 (1984),
531–554.
14. Emerson, J. Visualizing information for advocacy.
Tactical Technology Collective, 2013.
15. Huff, D. How to lie with statistics. WW Norton &
Company, 1954.
16. Hullman, J., Adar, E., and Shah, P. The impact of social
information on visual judgments. In Proceedings of
CHI, ACM (2011), 1461–1470.
17. Jones, G. E. How to lie with charts. LaPuerta Books and
Media, 2011.
18. Kittur, A., Chi, E. H., and Suh, B. Crowdsourcing user
studies with mechanical turk. In Proceedings of CHI,
ACM (2008), 453–456.
19. Kong, N., Heer, J., and Agrawala, M. Perceptual
guidelines for creating rectangular treemaps.
Visualization and Computer Graphics, IEEE
Transactions on 16, 6 (2010), 990–998.
20. Lengler, R. Identifying the competencies of 'visual
literacy' - a prerequisite for knowledge visualization. In
IV (2006), 232–236.
21. Levkowitz, H., and Herman, G. T. Color scales for
image data. IEEE Computer Graphics and Applications
12, 1 (1992), 72–80.
22. Li, J., Martens, J.-B., and Van Wijk, J. J. Judging
correlation from scatterplots and parallel coordinate
plots. Information Visualization 9, 1 (2010), 13–30.
23. Micallef, L., Dragicevic, P., and Fekete, J. Assessing the
effect of visualizations on bayesian reasoning through
crowdsourcing. Visualization and Computer Graphics,
IEEE Transactions on 18, 12 (2012), 2536–2545.
24. Monmonier, M. How to lie with maps.
25. Nov, O., and Arazy, O. Personality-targeted design:
theory, experimental procedure, and preliminary results.
In Proceedings of the 2013 conference on Computer
supported cooperative work, ACM (2013), 977–984.
26. Nov, O., Arazy, O., López, C., and Brusilovsky, P.
Exploring personality-targeted ui design in online social
participation systems. In Proceedings of CHI, ACM
(2013), 361–370.
27. Pandey, A., Manivannan, A., Nov, O., Satterthwaite, M.,
and Bertini, E. The persuasive power of data
visualization. Visualization and Computer Graphics,
IEEE Transactions on 20, 12 (Dec 2014), 2211–2220.
28. Paolacci, G., Chandler, J., and Ipeirotis, P. G. Running
experiments on amazon mechanical turk. Judgment and
Decision making 5, 5 (2010), 411–419.
29. Petty, R. E., and Cacioppo, J. T. Communication and
persuasion: Central and peripheral routes to attitude
change. Springer-Verlag New York, 1986.
30. Pinker, S. A theory of graph comprehension. Artificial
intelligence and the future of testing (1990), 73–126.
31. Rensink, R. A., and Baldridge, G. The perception of
correlation in scatterplots. In Computer Graphics Forum,
vol. 29, Wiley Online Library (2010), 1203–1210.
32. Richards, J. Deceptive advertising: Behavioral study of
a legal concept. Routledge, 2013.
33. Rogowitz, B. E., Treinish, L. A., Bryson, S., et al. How
not to lie with visualization. Computers in Physics 10, 3
(1996), 268–273.
34. Rubinstein, R., and Hersh, H. The human factor:
Designing computer systems for people. Morgan
Kaufmann Publishers Inc., 1987.
35. Segel, E., and Heer, J. Narrative visualization: Telling
stories with data. IEEE Transactions on Visualization
and Computer Graphics 16, 6 (2010), 1139–1148.
36. Shah, P., and Hoeffner, J. Review of graph
comprehension research: Implications for instruction.
Educational Psychology Review 14, 1 (2002), 47–69.
37. Tal, A., and Wansink, B. Blinded with science: Trivial
graphs and formulas increase ad persuasiveness and
belief in product efficacy. Public Understanding of
Science (2014), 0963662514549688.
38. Tufte, E. R., and Graves-Morris, P. The visual display of
quantitative information, vol. 2. Graphics press
Cheshire, CT, 1983.
Article
Full-text available
The public display of election poll results is often manipulated to influence voter predictions about the race. Narrow scaling is one such manipulation that involves truncating the chart’s vertical axis such that its range extends closely around the values of the bars. This manipulation exacerbates the visual difference between bars, making the margin appear larger than an unbiased representation would suggest. The current research examines whether narrow scaling of a bar chart depicting the degree of support for political candidate affects voters’ predictions about election outcomes. In three experiments, conducted during the 2022 U.S. gubernatorial and senate elections, we displayed published polls to potential voters using a wide- or a narrow-scaled bar chart. We found that when the scale is narrow voters are more likely to predict that the leading candidate in the poll will win the election and by a larger margin. This scaling bias occurs despite voters’ relative skepticism about narrow-scaled polls. We further find that the scaling effect is attenuated when the poll margin is relatively large and enhanced when numerical value labels are removed from the graphic display.
Article
Journalistic interactive visualizations (JIVs) – such as scrollytellings, interactive infographics, and clickable data visualizations – have an epistemic potential to efficiently and intricately mediate rich journalistic knowledge. In practice, however, they usually mediate and oversimplify limited knowledge. To understand why and where, across the production line, JIVs lose their epistemic potential, we map the production process of JIVs in three leading Israeli news organizations based on a combination of in-depth and reconstruction interviews with 22 JIV producers. Findings point to nine prominent bifurcations, where the production course can shape JIVs so that they mediate rich and intricate or limited and simplistic knowledge. Overall, these findings reflect a misfit between an “industrial” production process and a “post-industrial” news product.
Article
Full-text available
Data visualization has been used extensively to inform users. However, little research has been done to examine the effects of data visualization in influencing users or in making a message more persuasive. In this study, we present experimental research to fill this gap and present an evidence-based analysis of persuasive visualization. We built on persuasion research from psychology and user interfaces literature in order to explore the persuasive effects of visualization. In this experimental study we define the circumstances under which data visualization can make a message more persuasive, propose hypotheses, and perform quantitative and qualitative analyses on studies conducted to test these hypotheses. We compare visual treatments with data presented through barcharts and linecharts on the one hand, treatments with data presented through tables on the other, and then evaluate their persuasiveness. The findings represent a first step in exploring the effectiveness of persuasive visualization.
Article
Full-text available
We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e. g., to avoid confounds), for design (e. g., to best determine the capabilities of an audience), for teaching (e. g., to assess the level of new students), and for recruiting (e. g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.
Article
Contents: Preface. Introduction. The Law's View of Deception as a Legal Concept. The Law's View of Deceptiveness as a Behavioral Concept. Behavioral Researchers' View of Deceptiveness as a Behavioral Concept. A Proposed Theory and Definition of Deceptiveness. A Design for the Measurement of Deceptiveness. Pilot Study. Summary. Appendices: Original Advertisements. "True" Memoranda. "False" Memoranda. No-Attribute-Information Control Stimuli. Instructions and Questionnaires.