
”I am Definitely Manipulated, Even When I am Aware of it. It’s Ridiculous!” - Dark Patterns from the End-User Perspective

"I am Definitely Manipulated, Even When I am Aware of it. It’s
Ridiculous!" - Dark Paerns from the End-User Perspective
Kerstin Bongard-Blanchy
University of Luxembourg
Esch sur Alzette, Luxembourg
Arianna Rossi
SnT, University of Luxembourg
Luxembourg, Luxembourg
Salvador Rivas
University of Luxembourg
Esch sur Alzette, Luxembourg
Sophie Doublet
University of Luxembourg
Esch sur Alzette, Luxembourg
Vincent Koenig
University of Luxembourg
Esch sur Alzette, Luxembourg
Gabriele Lenzini
SnT, University of Luxembourg
Luxembourg, Luxembourg
Online services pervasively employ manipulative designs (i.e., dark patterns) to influence users to purchase goods and subscriptions, spend more time on-site, or mindlessly accept the harvesting of their personal data. To protect users from the lure of such designs, we asked: are users aware of the presence of dark patterns? If so, are they able to resist them? By surveying 406 individuals, we found that they are generally aware of the influence that manipulative designs can exert on their online behaviour. However, being aware does not equip users with the ability to oppose such influence. We further find that respondents, especially younger ones, often recognise the "darkness" of certain designs, but remain unsure of the actual harm they may suffer. Finally, we discuss a set of interventions (e.g., bright patterns, design frictions, training games, applications to expedite legal enforcement) in the light of our findings.
CCS Concepts: • Security and privacy → Social aspects of security and privacy; Usability in security and privacy; • Human-centered computing → Empirical studies in HCI; Graphical user interfaces.

Keywords: dark patterns, online manipulation, digital nudging, consumer protection, user experience, user interface
ACM Reference Format:
Kerstin Bongard-Blanchy, Arianna Rossi, Salvador Rivas, Sophie Doublet,
Vincent Koenig, and Gabriele Lenzini. 2021. "I am Definitely Manipulated, Even When I am Aware of it. It's Ridiculous!" - Dark Patterns from the End-User Perspective. In ACM DIS Conference on Designing Interactive Systems, June 28 - July 2, 2021, Virtual event, USA. ACM, New York, NY, USA, 14 pages.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
DIS 2021, June 28– July 2, 2021, Virtual event, USA
©2021 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8476-6/21/0.
1 Introduction

The pervasiveness of manipulative practices in online services is increasingly under the limelight. Thanks to information technologies, manipulative practices can be implemented at low cost, at large scale, with unprecedented sophistication [ ] and high effectiveness [ ] in dynamic, interactive, intrusive, and adaptive environments [ ]. Such online practices endeavour to influence purchase decisions, nudge people to spend considerable amounts of time on a service (thus intensifying data collection to fuel the so-called attention economy [ ]), and trick users into accepting privacy-invasive features, thereby undermining their right to the protection of their personal information and exposing them to privacy harms. Online manipulation does not only erode legal protections, but it also deprives unaware individuals of their capacity for independent decision-making [ ]. Therefore, the phenomenon is scrutinised by a growing number of practitioners and researchers, with the aim of exposing it and devising countermeasures.
This article focuses on a specific form of online manipulation: dark patterns. They are defined as "design choices that benefit an online service by coercing, steering or deceiving users into making decisions that, if fully informed and capable of selecting alternatives, they would not make" [ ]. Dark patterns direct user behaviour towards choices that may offer advantages like ease of use, free services and immediate gratification. However, such choices may have an adverse impact on individual welfare (e.g., invasion of privacy, financial loss, behavioural addiction) and collective welfare (e.g., harm to competition, erosion of consumer trust) [ ]. Dark patterns are believed to work because they exploit cognitive biases and human bounded rationality [ ], and a growing body of research demonstrates the influence of dark patterns on online behaviours. Although experts voice their apprehension, users' awareness of dark patterns is still an under-researched topic and has only recently become the object of dedicated studies [28, 46].
The study presented in this article seeks to fill this gap, by determining whether dark patterns exploit (1) users' lack of awareness or concern; (2) users' incapability of recognising dark patterns (the so-called "dark-pattern blindness" [ ]); or (3) users' inability to resist dark patterns, despite their awareness and ability to recognise them. By investigating the user perspective, we aim to identify requirements for effective countermeasures that can act either on the individual or on factors lying outside the individual. If users are unaware of or unconcerned about dark patterns, one solution consists in strengthening their motivation to counteract them (e.g., using warnings to increase the salience of risks). Suppose users are concerned about the risks deriving from dark patterns, but they are nevertheless unable to withstand them. In that case, their ability to resist needs to be improved (e.g., adding friction designs that disrupt automatic behaviour), while stronger environmental protections should also be leveraged (e.g., steep fines against companies employing dark patterns).
This article makes the following contributions to understanding users' awareness of manipulative designs online: (1) It reveals that users are able to recognise dark patterns, but they are only vaguely aware of the concrete harm entailed. It furthermore hints that a higher ability to discern manipulative designs is positively related to the capacity to self-protect. Moreover, the study shows that people under 40 and with education beyond a high school diploma are more likely to recognise dark patterns. (2) The findings guide designers, educators, developers, and regulators in drafting appropriate interventions to counteract manipulative designs online, both in terms of intervention scope and measure.
2 Related Work

Previous work has investigated the presence of dark patterns in online services [ ], even through automated means like web scraping [ ]. Various categories and definitions have been proposed to characterise the phenomenon in general [ ], but also specifically to account for video games [ ], ubiquitous computing [ ], automated systems [ ] and home robots [ ]; or in the context of data privacy [ ] and e-commerce [ ]. Examples of dark pattern implementations have been gathered in online collections [ ] to create knowledge, raise awareness, propose alternatives and build training corpora for algorithms. Although there is a consensus that dark patterns can employ and even combine coercive, deceptive, and nudging strategies, clear boundaries between (inadmissible) manipulative designs and other (admissible) designs (e.g., digital nudges helping users reach praiseworthy goals, like adopting more secure behaviour online) are yet to be set. Mathur et al. [48] recently sought to bring coherence to the existing jungle of definitions and attributes by establishing that some dark patterns modify the decision space in an asymmetric, restrictive, unequal or covert manner, while others manipulate the information flow through deception or the concealment of information. They also map out the normative considerations that underpin the problematic nature of dark patterns with respect to other designs: dark patterns diminish individual and collective welfare, weaken regulatory objectives and undermine individual autonomy.
A growing number of studies demonstrates the effect of dark patterns on online behaviour and strives to find the causes of their effectiveness, especially of those extorting consent in cookie dialogues [ ]. It has been proposed [ ] that innate human cognitive limitations (e.g., cognitive biases, bounded rationality) are skilfully exploited by online services to direct users toward choices they may regret [ ]. Examples are the status quo bias, which benefits from the human tendency to stick with the default option, and the bandwagon effect, which leverages herd behaviour. Certain cognitive biases might interfere with risk assessment [ ] and can thereby explain ill decision-making. For example, hyperbolic discounting causes people to overvalue current rewards (e.g., accomplishing a task), while they inadequately discount the cost of future risks [ ] (e.g., privacy invasion). The optimism bias [ ] might make individuals underestimate their disposition to online risks.
Some scholars have investigated whether individuals are able to identify dark patterns. Di Geronimo et al. [20] introduced the notion of "dark pattern-blindness" to explain why most respondents (i.e., well-educated, of various origins) in their study were not able to recognise dark patterns in mobile applications. However, when the study participants were informed of the potential presence of dark patterns in the context at hand, they became more capable of spotting them. Luguri and Strahilevitz showed that mild (i.e., more subtle) dark patterns go unnoticed more easily than aggressive ones and that less-educated individuals are significantly more likely to be influenced than more educated subjects. Shaw noticed how the overuse of scarcity and social proof messages on travel websites makes consumers ignore them even on other types of websites. On a similar note, M. Bhoot et al. [44] found that the ability to identify a dark pattern is correlated with its frequency of occurrence and the frustration it provokes. If the interface is appealing, respondents tend to experience less frustration and hardly notice manipulative attempts. These experimental data show that certain design attributes can influence people's capacity to spot and resist dark patterns.
The attitudes of various stakeholders towards dark patterns have been explored, too. Design practitioners [ ] were the first to voice their concerns over these questionable practices. Similar considerations have been developed by regulators [ ] and consumer organisations [ ]. Several studies [ ] have analysed practitioners' ethical values and their conflict with other stakeholders' interests. However, whether dark patterns are a source of concern for end-users has only recently been investigated. Maier and Harr found rising awareness and a general sense of annoyance among Swedish students. They also uncovered resignation, as their respondents believed it impossible to avoid online manipulation, and they acknowledged that the benefits (e.g., free service) outweigh the negative consequences. In an analogous study [ ], English-speaking and Mandarin-speaking respondents evoked a general impression of manipulation in digital products. They were able to identify what made them grow suspicious, even though they lacked a specific vocabulary to indicate the source of that feeling.
In terms of solutions, Graßl et al. [29] used design nudges (so-called bright patterns) to reverse the direction of dark patterns and steer users' consent decisions towards the privacy-friendly option (e.g., pre-selection of the "Do not agree" option). They also recommended long-term boosts that help users acquire procedural rules, because the repeated use of analytic thinking converts into protection heuristics (e.g., every time I encounter a consent request, I take the time to read the information before making a choice). Based on a survey among impulse buyers, Moser et al. [53] proposed friction designs that counteract dark pattern mechanisms in purchase decisions (e.g., disabling urgency and scarcity messages). M. Bhoot et al. [44] and Mathur et al. [47] suggested a plug-in or browser extension that automatically detects dark patterns on websites and notifies the user. Leiser discussed the regulatory tools that can be leveraged to prohibit and fine these practices. In parallel, Maier and Harr assumed that dark patterns diminish customers' trust in a brand and its credibility in the long term, leading customers to stop using the service.
3 Research Questions

The spectrum of possible dark pattern design implementations is vast, ranging from coercive designs that constrain user options to nudges that subtly play on the visual prominence of one choice over another. Thus, it is impossible to identify one single intervention that could free the web from all dark patterns. Drafting appropriate interventions is hence a design problem in itself. Before working on solutions, it is however indispensable to understand which user issue the interventions aim to solve.

Individuals may execute a threat appraisal [ ] that makes them believe that dark patterns do not inflict serious harm. They may also think that they are invulnerable - or at least less vulnerable than others, as is customary in online risk appraisal [ ]. We therefore asked:

RQ1 Are users aware of and concerned about the influence of manipulative interface designs on their behaviour?

Di Geronimo et al. [20] concluded that individuals are subject to "dark pattern-blindness". It is also assumed that manipulation is a hidden influence, while coercion is not [ ], and that nudges work only when people are unaware of the influence that is exerted on them [41]. This is why we asked:

RQ2 Are users able to recognise manipulative interface designs?

It is further assumed that the transparency (i.e., the visibility) of an influence is a crucial dimension for its acceptability, because it gives the opportunity to control the influence [ ], for instance by resisting it. However, transparency may not be sufficient to counter the influence: resignation, benefits (e.g., free service) [ ], the cognitive costs of opposing dark patterns and other factors might undermine the ability to resist. This is why we sought to relate users' awareness of, ability to detect, and susceptibility to dark patterns, by asking:

RQ3 Are users likely to be influenced by manipulative interface designs despite being aware of, concerned about, and capable of recognising manipulative interface designs?

Additionally, lower educational levels seem to be correlated with a greater influence on consumer behaviour [ ]. Therefore, we explored whether level of education, age, and use frequency of online services are significantly associated with the three research questions.
4 Method

4.1 Study design

To investigate the three research questions, we designed an online survey on LimeSurvey, administered through Prolific.

Figure 1: Sequence of the questions in the survey.

Participants were first asked about their general mindset concerning manipulative designs online, followed by ratings of their online behaviour, before being exposed to specific dark pattern designs (Figure 1). In addition, demographic data regarding their gender, age, and education was gathered. All questions were mandatory, except for a final general feedback field. The three parts of the survey are detailed in the following.
4.1.1 Part 1: Awareness and concern. The first part of the survey addressed participants' awareness of and concerns about the potential influence of online designs. Six statements were displayed in pairs (one pair per page), opposing a general perspective ("people/others") and a personal perspective ("my/me"):

- The design of websites or applications can influence [people's/my] choices and behaviours.
- Websites or applications that are designed to manipulate users can cause harm to [people/me].
- I am worried about the influence of manipulative websites and applications on [people's/my] choices and behaviours.

Participants were instructed to rate their agreement on a 5-point Likert scale (from -2 = strongly disagree to 2 = strongly agree). If they gave an affirmative or undecided answer (0, 1 or 2), they were furthermore invited to cite examples of experienced influence, potential harm and related worries after each statement pair.
4.1.2 Part 2: Use frequency of online services and disposition to manipulation. The second part served to complement the demographic data. To obtain a proxy of participants' exposure to online services, participants had to rate the frequency of their engagement with eight common services (How often do you: play online games / order products online / use social media / etc.?). As a second indicator, we sought to capture the participants' disposition to be influenced by manipulative designs online. To this end, participants had to indicate their usual behaviour in eight situations in which web services commonly employ manipulative strategies (While using online services: I reserve a service quickly when there are only a few items left. / I keep the default permissions when I install an app. / etc.). We randomised the item order for both tasks and kept the phrasing neutral so that the actions would not be perceived as undesirable.

Footnote: In the second and third questions of Part 1, the term "manipulate/manipulative" was chosen to indicate dark patterns in a commonly understandable manner, while in the first question "influence" was preferred to avoid negative priming. Similarly, we used both terms in Part 3: Spot the dark pattern. In certain cases, we deliberately chose the term manipulation because "influence of design" can be interpreted very widely and lead to answers that are not relevant for the research at hand (as the answers to the first open question illustrate).
4.1.3 Part 3: Spot the dark pattern. The third part served to evaluate the participants' capability to recognise different dark pattern types. Ten interfaces of existing online services were displayed in random order. The interfaces had been redesigned in a uniform style and freed of any reference to a real brand. One example without any dark pattern was included as a control condition. The other nine examples contained dark patterns that impact individual welfare, causing financial harm, data privacy harm, and time- and attention-related harm. Within these categories, authors one and two gathered numerous examples of existing interfaces with dark patterns from reports [ ], online collections and personal screenshots. The interface selection represented a mix of popular and less known brands of various services, such as e-commerce websites, dating apps, and social media. Table 1 provides the definitions of the dark patterns embedded in the interfaces shown in Fig. 2.

Each example was displayed for 10 to 40 seconds, depending on its textual complexity. The participants were asked if they noticed any design element that might influence their behaviour. It was made explicit beforehand that not all examples contained such elements. This indication and the time constraint served to limit excessive searching that does not occur in a regular use context. Once the image disappeared, the participants saw a thumbnail of the interface and a text field. They had to describe the manipulative element (i.e., the means of the influence) and the presumable intention of the service (i.e., its ends) in employing that element. After going through all ten interfaces, the participants were given an explanation of the contained dark pattern(s). The explanations also pointed to potential benefits of these designs for users (e.g., ease of use). To conclude, the participants had to rate on a 5-point Likert scale (from -2 = strongly disagree to 2 = strongly agree) whether they believed it likely that they would be influenced by the displayed designs and whether they considered the strategy employed by the online service acceptable.
4.2 Participants

The survey collected responses from 413 participants. The data of seven individuals who gave gibberish answers were excluded from the analysis, leaving a sample size of 406. Prolific allowed us to gather a sample representative of the UK population in terms of age, gender and ethnic origin. Since this option was only available for the UK and the US, the former was selected to address participants living in a uniformly regulated digital ecosystem. The demographics of the participants were as follows: 193 male, 200 female, 13 non-disclosed. Their age ranged from 18 to 81 years (mean 45.2, SD 15.5): Silent Generation (75-92 years) = 3, Baby Boomers (56-74 years) = 130, Generation X (40-55 years) = 112, Generation Y / Millennials (24-39 years) = 119, Generation Z / Zoomers (<24 years) = 42. Concerning the level of education, 106 had a high school diploma or lower, 236 vocational training or a Bachelor's degree, and 64 were postgraduates. Several iterations with 16 pre-test participants served to enhance the comprehensibility of the questions and to reduce the duration to a maximum of 30 minutes to avoid participant fatigue. The survey was published and completed on Prolific on July 7, 2020. All participants were compensated with £3.75, a price indicated as fair by Prolific.
4.3 Ethical and Legal Considerations

The study adheres to the University of Luxembourg's research ethics guidelines and the European Federation of Psychologists' Associations' code of ethics. In addition, the authorisation of the University's Ethics Review Board was obtained prior to the study. The survey gathered answers anonymously, and the questions did not inquire about information that would allow the identification of participants.
4.4 Data analysis

4.4.1 Awareness of and concern about the influence of manipulative online designs (RQ1). First, we calculated the mean, median and mode scores for the awareness ratings. We then computed a two-sided sign test to verify whether the delta between the ratings referring to the participants themselves (i.e., personal perspective) and their ratings referring to people in general (i.e., general perspective) was significant. Bivariate Pearson correlations were furthermore used to analyse how the personal awareness ratings correlate with the demographic data. The qualitative answers on awareness, harm, and worry were coded in MAXQDA through an inductive approach. Researcher one coded 10 per cent (41 participants) of the sample and developed a set of codes. Researcher two coded the same set with the possibility to add codes. Non-agreement cases were discussed and codes adapted. The same procedure was repeated with another set of 41 participants. Since the inter-coder agreement reached 0.81 (Kappa Brennan & Prediger), researcher one finalised the coding for the whole data set. The codes included use cases, influence objectives, influence types, harm types, concern types and types of victims.
4.4.2 Dark pattern detection (RQ2). The open answers to each example in the third part of the survey were coded through a deductive approach, by assigning a score depending on whether the participant identified the manipulative design element(s) correctly (no = 0 / partly = 0.5 / yes = 1). Each example showed one main dark pattern, identified by the authors following the sources from which the examples were obtained. Further plausible manipulative design elements found by the respondents were inductively included in the pool of correct or partially correct answers. Researchers one and two coded the answers of 10 per cent of the sample and developed the codebook in a shared document. Non-agreement cases were discussed and the codebook adapted accordingly. The same procedure was repeated with another random set of 41 participants by researchers one and four. The inter-rater agreement reached a kappa of 0.77 (Kappa Brennan & Prediger). Given the substantial level of agreement, researcher one finalised the coding for the whole data set. The quantitative data analysis was undertaken in (v.16.1). The dark pattern detection scores for each participant were summed (ranging from 0 to 9). Since it is not possible to draw a distinct line between high and low detection scores, we
Type Denition
High-demand message Indicating that a product is in high demand and likely to sell out soon [47]
Limited-time message Indicating that a deal will expire soon without specifying a deadline [47]
Conrmshaming Using shame to steer users towards making a certain choice [47]
Trick question Using confusing language to steer users towards making a certain choice [47]
Loss-gain framing
A selective disclosure of information that positively frames the consequences of an
action, while omitting the entailed risks [6]
An option is selected by default prior to user interaction, even though it is against her
interest or may have unintended consequences [28]
False hierarchy
Visual or interactive prominence of one option over others, whereas available choices
should be evenly leveled rather than hierarchical [28]
Hidden information Disguising relevant information (options, actions) as irrelevant [28]
Auto-play Automatically loading one video when the previous ends [3]
Bundled consent
Gathering consent for multiple settings through a single action (our own denition but
see [62] )
Forced consent
Coercing users into accepting xed legal terms in exchange for access to the service
Table 1: Denitions of the dark patterns selected for this study.
could not transform detection outcomes into binary variables. An OLS regression was hence chosen to control for significant differences deriving from age, educational level, use frequency of online services, and disposition to be influenced by online designs.
4.4.3 Likelihood to fall for dark patterns (RQ3). Similar to the dark pattern detection scores, we summed the participants' ratings of their likelihood to be influenced by the proposed designs. We then ran an OLS regression to estimate the strength of association of participants' likelihood to be influenced with awareness (personal perspective), dark pattern detection, acceptability, and demographic data. Linear regression was again chosen because influence outcomes for the totality of the dark pattern examples could not be transformed into binary variables.
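Both the RQ2 and RQ3 analyses fit an ordinary least squares model to a summed score treated as continuous. The paper's models enter several predictors jointly (age, education, use frequency, disposition); purely as a shape illustration, here is the closed form for a single predictor (all names and numbers are ours, not the authors'):

```python
def simple_ols(x, y):
    """Single-predictor OLS: slope = cov(x, y) / var(x), intercept from the means."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)                      # sum of squares of x
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))   # cross-products
    slope = sxy / sxx
    return my - slope * mx, slope                              # (intercept, slope)

# a hypothetical predictor with a perfectly linear outcome:
print(simple_ols([0, 1, 2, 3], [1, 3, 5, 7]))  # (1.0, 2.0)
```

The multiple-predictor case solves the same least-squares criterion over all coefficients at once, which is what a statistics package does for the models described above.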
5 Results

The following section presents the results with regard to the three research questions introduced in Sec. 3, starting with the findings on people's awareness of the influence of manipulative designs online, followed by people's capacity to detect dark patterns. As a final step, both are examined as indicators of people's likelihood to be influenced by manipulative designs. For each part, differences associated with age, educational degree, use frequency of online services, as well as disposition to online manipulation, are reported.
5.1 People's awareness of the influence of online designs on their choices and behaviours

The first research question asked whether people are aware of the influence of designs on their choices and behaviours in online services.
Awareness of influence. Regarding the ratings of the three question pairs in part one of the survey (Sec. 4.1.1), the results reported in Table 2 show that the participants were aware that online designs can influence their choices and behaviours (Me: mean = 1.05, SD 0.78, median = 1, mode = 1). The qualitative analysis of the answers reveals that the participants strongly associated influential online designs with well-known brands such as Amazon (mentioned by 103 participants), Netflix (40), Facebook (38), Instagram (22), eBay (17), Twitter (15), YouTube (13). They acknowledged that online designs may shape their spending behaviour (64 mentions), as well as their content consumption (45) and service choice (23), mainly through personalised content and recommendations (78), as well as special offers (41). Some participants evoked social influence (22) as an effective strategy. Only a small number of participants cited specific design elements like visual appeal (24) or layout (17) as influencing factors. Some of them pointed to a website's or app's ease of use as a factor influencing whether they use it or not (43).
Awareness of potential harm. The participants were uncertain whether manipulative designs online can cause them harm (Me: mean = 0.00, SD 1.10, median = 0, mode = -1), as shown in Table 2. The most frequently cited service category whose influence was considered harmful is social media (mentioned by 24 participants). The most prominent harm identified by participants was harm to themselves (135), both of a psychological and physical nature. This was followed by mentions of financial harm (89), such as debt and unreasonable spending (51). Fewer participants evoked cybersecurity threats (31) or harm to their privacy (16). Some mentioned the dangers related to misleading information (28), and how these might influence people's opinions, values and attitudes (19) and cause damage to society (13).
Worries about manipulative designs. As shown by the results in Table 2, the respondents were undecided and showed a tendency not to worry about being manipulated by online designs (Me: -0.29, SD 1.07, median -1, mode -1): "I am not so personally worried about being manipulated, because I know myself well enough to question things and not get manipulated." (P78). However, regardless of their own age, they evoked apprehension for vulnerable people (mentioned by 17 participants) and specifically for young people (14), the elderly (12), and children (11). For these people, they worried about the influence on spending behaviour (51), leading to financial losses (89). They furthermore expressed worry about the presence of false or misleading information (45), coupled with their understanding that online services only serve pre-filtered information (22), which influences people's opinions (41) and impedes informed choices (20). Such ill-formed decisions might eventually cause harm to society (29) as well as to people's physical and mental health (23). There were also worries about cybersecurity threats (26). Finally, respondents found it worrisome that it is challenging to discern manipulative attempts (23), especially for vulnerable individuals.
Personal versus general perspective. Several comments referring to concerns highlight that the participants were more worried for other people than for themselves: "I consider myself very aware of these sort of things but someone else who has not a lot of internet experience or online shopping or believes whatever they see or are told will follow everything." (P215). The results confirm this impression: the participants rated awareness, harm, and worry significantly higher when referring to people in general, as opposed to themselves (Table 2).
Awareness by individual characteristics. The results displayed in Figure 3 show an inverse correlation between people's age and their awareness of online design's influence on themselves (r=-0.20, n=406, p=0.00). As some participants pointed out: “Being elderly I find it relatively easy to avoid being manipulated by these strategies. Technology has a place in my life but not an important place. I am not easily taken in.” (P250). Participants with higher education showed a slightly higher awareness of the influence of designs on their choices and behaviours (r=0.11, n=406, p=0.03), awareness of potential harm to themselves (r=0.12, n=406, p=0.01), as well as worry about the potential influence on themselves (r=0.11, n=406, p=0.02). Furthermore, the correlations indicate that those who use online services more frequently considered it more likely that manipulative designs influence their behaviour (r=0.25, n=406, p=0.00). Individuals with a higher disposition to be influenced also showed a higher awareness of their own likelihood of being influenced by manipulative designs (r=0.29, n=406, p=0.00). At the same time, they were also more worried about the influence on their choices and behaviour (r=0.14, n=406, p=0.01).
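The coefficients above are plain Pearson correlations between ratings and individual characteristics. As a minimal sketch of how such an r-value is computed (with made-up illustrative numbers, not the survey data; significance testing omitted):

```python
from math import sqrt

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative numbers only (not the study's data): awareness ratings
# (-2..2) tending to decrease with age, as with the reported r = -0.20.
age       = [22, 31, 45, 52, 60, 68, 25, 38, 57, 70]
awareness = [ 2,  1,  1,  0,  0, -1,  2,  1,  0, -1]
r = pearson_r(age, awareness)
```

With these toy numbers, r comes out negative, mirroring the direction (though not the magnitude) of the reported age effect.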
Summary of RQ1. On average, respondents are aware of online design's influence on their behaviour, especially on the type of online content they consume and the digital services they use. However, they are unsure whether they can be harmed personally, even though they can name specific examples (e.g., frustration, anxiety, debt, loss of self-confidence). They are undecided on whether they should worry and are more concerned about other people than themselves.
5.2 People’s ability to detect dark patterns
Research question two sought to investigate whether individuals are able to recognise dark patterns. The results show that, when asked to look for elements that can influence users' choices and behaviour, 59% of the participants were able to identify five or more
dark patterns out of the nine interfaces correctly. One fourth recognised the dark patterns in seven, eight, or all nine interfaces (Figure 4). As can be seen in Table 3, the interfaces including the dark pattern types trick question, pre-selection, loss-gain framing, hidden information and bundled+forced consent were recognised by only half or fewer of the participants, while the majority of the participants correctly identified the interfaces containing a high-demand / limited-time message and confirmshaming.

Figure 3: Correlation matrix for participants' awareness ratings and their individual characteristics; n=406; p < 0.05 with dark background.
Figure 4: Frequency of dark pattern detection
Dark pattern detection and individual characteristics. Using OLS regression analysis to model the inter-relationship between the detection of dark patterns and associated factors (Figure 5), it emerges that younger people could identify a higher number of dark patterns than the older Baby Boomer+ generation, net of education, use frequency, and disposition: Millennials/Gen Y: coef. 0.60 (95%CI: 0.04 to 1.16); Zoomers/Gen Z: coef. 1.09 (95%CI: 0.35 to 1.83). Generation X is neither better nor worse than the older Baby Boomer+ generation, as indicated by the regression results: coef. 0.28 (95%CI: -0.25 to 0.81). Regarding education, participants with a high school degree or lower detected fewer dark patterns (coef. -0.80 (95%CI: -1.26 to -0.33)) compared to participants with a Bachelor's degree or vocational training. However, participants with higher degrees than Bachelor were neither better nor worse at identifying manipulative design strategies (coef. 0.32 (95%CI: -0.24 to 0.88)). This suggests that the Bachelor/vocational training level is a threshold below which recognition rates are lower. The regression analysis also indicates a slight positive correlation between online use frequency and the number of dark patterns detected (coef. 0.04 (95%CI: 0.00 to 0.09)), but no significant correlation between disposition to manipulation and dark pattern detection (coef. 0.03 (95%CI: -0.01 to 0.09)).
Due to the low number of Silent Generation participants in the survey, the reference
category combines the Baby Boomer generation (people born between 1946-1964) and
the older “Silent Generation” (people born between 1928-1945).
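The generation effects above come from an OLS model with dummy-coded generations against the Baby Boomer+ reference category. With a single 0/1 dummy, the fitted coefficient is exactly the difference in group means, which this sketch illustrates (made-up numbers, not the survey data; robust standard errors and the other covariates are omitted):

```python
def ols_slope(y, x):
    """OLS slope of y on a single regressor x (here a 0/1 dummy), with intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Illustrative: number of dark patterns detected (0-9) for a Gen Z
# dummy (1) versus the Baby Boomer+ reference group (0).
detected = [7, 8, 6, 9, 4, 5, 3, 6]
gen_z    = [1, 1, 1, 1, 0, 0, 0, 0]
coef = ols_slope(detected, gen_z)
# With one dummy, coef equals mean(detected | Gen Z) - mean(detected | reference).
```

In the paper's full model, the same logic holds conditionally on the other covariates: each generation coefficient is the adjusted gap relative to the Baby Boomer+ reference.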
DIS 2021, June 28– July 2, 2021, Virtual event, USA Bongard-Blanchy, et al.
            1. Aware of influence     2. Aware of potential harm   3. Worried about
Me          1.05 (SD 0.78)/1/1        0.00 (SD 1.10)/0/-1          -0.29 (SD 1.07)/-1/-1
People      1.30 (SD 0.59)/1/1        0.66 (SD 0.97)/1/1           0.60 (SD 1.01)/1/1
Sign test   (n=115, x>=98, p=0.5)     (n=183, x>=175, p=0.5)       (n=233, x>=222, p=0.5)
            =0.0000                   =0.0000                      =0.0000

Table 2: Mean/median/mode scores of participants' ratings of 1) their awareness of the influence and 2) the potential harm caused by manipulative designs online, and 3) their degree of worry, with regard to themselves and people in general; values range from -2 (strongly disagree) to 2 (strongly agree); the last row shows the p values for the two-sided sign tests between Me and People ratings.
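The last row of Table 2 reports exact two-sided sign tests on the paired Me/People ratings. A minimal sketch of that test (illustrative ratings, not the study's raw data):

```python
from math import comb

def sign_test(me, people):
    """Exact two-sided paired sign test, as used for the Me vs People ratings.

    Ties (equal ratings) are dropped; under H0, positive and negative
    differences are equally likely, so the count of positive differences
    follows Binomial(n, 0.5).
    """
    diffs = [p - m for m, p in zip(me, people) if p != m]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    p_ge = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n  # P(X >= k)
    p_le = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n  # P(X <= k)
    return n, k, min(1.0, 2 * min(p_ge, p_le))

# Illustrative -2..2 ratings (not the study's data): "people in general"
# is rated above "me" for almost every participant.
me     = [1, 0, 1, -1, 0, 1, 0, -1, 1, 0]
people = [2, 1, 1,  0, 1, 2, 1,  0, 2, 1]
n, k, p = sign_test(me, people)
```

With these toy numbers, nine of the ten pairs rate "people in general" higher and one pair is a tie, giving p = 2/512 ≈ 0.004.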
                                          Dark pattern detected
Dark pattern name                         no    partly  yes    Influential             Acceptable
a) Trick question / Pre-selection         51%   9%      40%    -0.14 (SD 1.22)/0/1     -0.73 (SD 1.12)/-1/-1
b) Loss-gain framing / Confirmshaming     27%   19%     53%    -0.03 (SD 1.26)/0/1     -0.79 (SD 1.06)/-1/-1
c) Pre-selection / Loss-gain framing      51%   11%     38%    0.00 (SD 1.20)/0/1      -0.50 (SD 1.08)/-1/-1
d) Hidden information / Trick question    64%   20%     16%    0.36 (SD 1.21)/1/1      -1.17 (SD 1.06)/-2/-2
e) Bundled / Forced consent               50%   36%     14%    0.36 (SD 1.15)/1/1      -0.64 (SD 1.06)/-1/-1
f) High-demand / Limited-time message     5%    11%     84%    0.39 (SD 1.25)/1/1      -0.39 (SD 1.11)/0/-1
g) Confirmshaming                         27%   2%      71%    -0.85 (SD 1.15)/-1/-2   -0.22 (SD 1.10)/0/1
h) Hidden information / False hierarchy   42%   4%      54%    -0.49 (SD 1.28)/-1/-1   -1.00 (SD 1.09)/-1/-2
i) Auto-play                              40%   11%     49%    0.72 (SD 1.05)/1/1      0.74 (SD 0.87)/1/1

Table 3: Dark pattern detection percentages for the 9 dark pattern interfaces in the survey, and mean/median/mode scores of participants' evaluation of their likeliness to be influenced by the dark pattern and of the acceptability of the strategy (from -2 = strongly disagree to 2 = strongly agree).
Figure 5: Plot from linear regression with robust standard errors for dark pattern detection; n=406, Adj. R²=0.097.
Dark pattern detection and awareness. The correlation matrix (Figure 3) shows a positive correlation between recognition of dark patterns and awareness of manipulative designs' influence (r=0.13,
n=406, p=0.01). However, many participants were surprised that
they were unable to recognise certain manipulative designs:
"I like to think I am pretty ’switched on’ when it comes to avoiding
being manipulated online and this highlighted to me how much I sign
away at the click of the button!"(P150)
"There were a few elements that I missed, but were obvious when
pointed out. It shows how easy it is to be manipulated, even when one
thinks they are aware."(P358)
Summary of RQ2. We conclude that people are able to recognise dark patterns, but there is variation across dark pattern types. Younger age (<40 y.), as well as education levels above a high-school degree, are positively correlated with this ability.
5.3 People's likelihood to be influenced by dark patterns
The third research question sought to investigate if higher awareness, as well as a higher capability to detect manipulative designs, make people less likely to be influenced. We ran an OLS regression analysis to model the inter-relationship between the participants' self-reported influence-likelihood rating and the associated factors (Figure 6).

Likelihood of being influenced and the capacity of dark pattern detection. The data shows a slight inverse correlation between participants' dark pattern detection capability and their influence-likelihood rating (coef. -0.58 (95%CI: -0.90 to -0.26)). This indicates that people who recognise manipulative designs more easily consider themselves slightly less likely to be influenced by them.
Likelihood of being inuenced and dark pattern acceptability. In
all tested designs, participants were on average uncertain whether
the dark patterns would inuence their behaviour and whether they
nd them acceptable (Table 3). Those who considered these designs
more acceptable also reported being slightly more inuenced in
their behaviour (coef. 0.16 (95%CI: 0.04 to 0.28)). Interestingly, a
design’s admissibility was not necessarily related to its inuence
strength. For example, about half of the participants recognised
both (h) hidden information / false hierarchy and (i) auto-play. After
receiving an explanation about what could be considered manipula-
tive in both interfaces, the participants tended to nd (i) auto-play
more inuential than (h) hidden information / false hierarchy. How-
ever, they deemed (i) auto-play more acceptable than (h) hidden
information / false hierarchy.
Figure 6: Plot from linear regression with robust standard errors for likelihood of being influenced by dark patterns; n=406, Adj. R²=0.3039, BIC=2723.14; AW = awareness, WOR = worry.
Likelihood of being inuenced and awareness. Participant com-
ments at the closure of the survey reect that awareness is not a
signicant predictor for participants’ likelihood to be inuenced
by manipulative designs.
"I think I’m aware of most manipulative practices but there are
certain applications like video streaming and booking accommodation
where I am denitely manipulated, even when I am aware of it. It’s
"I feel I am quite aware of some of the subtleties of advertising and suggestion but there were elements I hadn't even considered may be unconsciously influencing my choices." (P222)

Indeed, only respondents who strongly believed that online designs can influence them also deemed it likely to be influenced by the designs in the tested interfaces (coef. 2.52 (95%CI: 0.24 to 4.81)). Conversely, people who strongly disagreed with being worried about the influence of online designs on themselves also considered it unlikely to fall for the designs in the tested interfaces (coef. -3.16 (95%CI: -5.77 to -0.54)).
Likelihood of being inuenced and individual characteristics. The
regression analysis shows no signicant correlation between age,
education, or online use frequency and the self-reported likeli-
hood to be inuenced by manipulative designs. However, there is
a positive correlation between participants’ disposition to online
manipulation and their ratings concerning the inuence likelihood
(coef. 0.51 (95%CI: 0.36 to 0.66)), which hints at a certain degree of
inability to resist manipulative designs.
Summary RQ3. We conclude that people who recognise manipu-
lative designs with more ease report, on average, a lower likelihood
of being inuenced by them. However, whether people are very
aware of online manipulative attempts or not makes, on average,
no dierence in terms of their likelihood to be inuenced by such
We discuss the results in light of the interventions that could be
put in place to counteract dark patterns. Interventions can aim to
(i.e., intervention scope) a) raise awareness of the existence and
the risks of dark patterns, b) facilitate their detection, c) bolster the
resistance towards them, or d) eliminate them from online services.
Interventions can act on the user or the environment (i.e., inter-
vention measures): educational interventions favour users’ agency,
regulatory interventions tend to protect the user, technical and
design interventions are situated in-between. This distinction can
serve to identify the actors (e.g., design practitioners, researchers,
educators, regulators) that should implement the interventions and
devise appropriate evaluation indicators. The resulting matrix is
shown in Figure 7.
6.1 Raising awareness
The results indicate that people are generally cognizant that digital services can exert a detrimental influence on their users, but fail to understand how manipulative designs can concretely harm them. Individuals' lack of sufficient concern does not impact their ability to spot dark patterns. However, it may impact their motivation to counter them, like taking a few extra steps to select the less privacy-invasive option in consent dialogues. Moreover, people are more worried about the danger represented by dark patterns for other people than for themselves, thus confirming previous assumptions [ ]. Warnings [ ] are a design intervention that can make threats salient and concrete (e.g., about financial losses following mindless purchasing decisions) and counterbalance the tendency to underestimate online threats due to hyperbolic discounting and optimism bias. However, warnings become rapidly ineffective as users get habituated (i.e., warning fatigue [ ]) and need to mutate continuously to continue capturing users' attention [8].
6.2 Facilitating detection
The study results on dark pattern detection show significant variations across the proposed designs. A majority of users recognised confirmshaming and the high-demand / limited-time message (similar to [ ]), whilst dark patterns based on deception strategies (e.g., trick question, loss-gain framing and hidden information), together with the pre-selection nudge and forced consent, were scarcely recognised. Although such findings only concern a specific implementation of the dark pattern and cannot be generalised to the category, they may suggest that certain dark patterns are intrinsically more difficult to spot. For instance, the omission of information is a shrewd deceptive strategy. It requires users to have a correct mental model of expectations, coupled with high cognitive activation [ ], to notice the absence of certain elements. In our facial recognition example, which was based on loss-gain framing, many respondents
were simply unaware of the imbalance in the presentation of the arguments ("I don't think this one is manipulative, it's just explaining the benefits of using face recognition, and the possible drawbacks of not using it" (P169)) and of the entailed risks ("Turning on Facial Recognition, is it good or bad??" (P141)). This probably explains why this dark pattern was rarely identified. When respondents mentioned possible risks, their answers revealed wrong mental models about the drawbacks of facial recognition, like installation of malware, targeted advertisement, or unlawful surveillance. In such cases, educational measures such as training on cause-and-effect data privacy scenarios can successfully sharpen manipulation detection abilities. As for the poor detection of pre-selection and forced consent, a plausible explanation is that users have grown accustomed to such designs. However, a dedicated study could demonstrate which design attributes [ ] make certain dark patterns harder to spot.

Figure 7: Intervention spaces for counteracting dark patterns.
It would also be useful to investigate folk models about dark patterns: mental models that are not necessarily accurate in the real world and lead to erroneous decision-making [ ]. Complementarily, it should be further researched which attributes trigger users' scepticism towards interfaces and activate a more elaborate mode of thought (i.e., counterfactual thinking [ ]) that disposes them to recognise potential manipulation attempts. For instance, respondents in M. Bhoot et al. [44] indicated sudden interruptions and excessive ads as elements activating scepticism. However, such research also found that users hardly notice a manipulation attempt when the interface is appealing. This result is in line with previous work demonstrating that people base their online trust judgements on cues, such as visual ones [ ]. On this note, what has been learnt in anti-phishing research about the cues that evoke distrust in professional-looking e-mails can be of use to determine how to activate the "critical persuasion insight" [ , p. 114]. Many comments of our respondents hinted that activities like "spot the dark pattern" can serve as an eye-opener: "This has been a great survey and it has certainly made me more aware of certain things that I have not noticed in the past. I will be keeping an eye out for such things going forward" (P214). Similar gamified experiences integrated into major digital services (e.g., a Facebook game) could strengthen the motivation to learn how to spot dark patterns in real settings, without the cognitive cost of transferring skills learnt in training to the context of digital services.
Concerning technical interventions, algorithms and applications that automatically identify, flag, and even classify potentially illegal practices at large scale should be developed on the model of [ ], to expedite watchdogs' supervising tasks and provide proof to consumer advocates. Such tools need a large pool of reliable data to carry out the recognition and categorisation of manipulative attempts that are challenging even for humans. To this end, we are currently assembling a corpus of dark pattern interfaces published on Reddit12 and Twitter13 by social media users.
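Such automated flagging could start from simple heuristics before moving to trained classifiers. A deliberately naive sketch (the cue phrases are hypothetical; real tooling of the kind the text envisions would rely on ML models trained on labelled interface corpora such as the one described above):

```python
# Toy rule-based flagger for two dark-pattern cues discussed in the paper:
# confirmshaming wording and high-demand / limited-time pressure.
# Purely illustrative phrase lists, not a validated detector.
CUES = {
    "confirmshaming": ["no thanks, i hate", "i don't want to save"],
    "urgency": ["only 1 left", "offer ends in", "in high demand"],
}

def flag_dark_patterns(text):
    """Return the sorted list of cue categories found in an interface text."""
    lowered = text.lower()
    return sorted(
        label
        for label, phrases in CUES.items()
        if any(phrase in lowered for phrase in phrases)
    )

hits = flag_dark_patterns("Hurry! Only 1 left in stock. No thanks, I hate saving money.")
```

A keyword pass like this could pre-filter a large crawl for human review, but would miss exactly the deceptive patterns (hidden information, loss-gain framing) that the survey showed are hardest even for people to spot.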
6.3 Bolstering resistance
Even though there are no signicant correlations in milder rat-
ings, those respondents who declared to be very likely inuenced
also showed a higher awareness of this possibility and greater con-
cern. This hints that awareness of one’s own vulnerability does
not automatically trigger better self-defence against manipulative
inuences. Design interventions can enhance users’ appraisal of
the eort it takes to cope with certain dark patterns – for example,
indicating the time it takes to unsubscribe when it is an overly com-
plex procedure (see e.g., Amazon Prime [
]). Reframing the costs
of falling prey to dark patterns in personally relevant terms, as pro-
posed by Moser et al
. [53]
to contrast impulse buying, may also be
See e.g., a game where users need to navigate
ambiguous options and distrust obvious buttons in cookie dialogues.
12 and
13The tweets containing hashtags like #darkpattern
Aware but manipulated DIS 2021, June 28– July 2, 2021, Virtual event, USA
considered – for instance, by converting the time spent on innite
scrolling into other pleasurable activities. User research can deter-
mine what is valuable for users, as it emerges from our respondents
(e.g., P278: "It [is] such a waste of precious time, that can be used in
reading, personal time, ecercise [sic] some more benetial [sic] for the
individual".) Friction designs can disrupt automatic behaviour with
positive eects by introducing small obstacles that create a brief
moment of reection and stimulate more mindful interactions [
Already in use to induce more secure online behaviour [
] and
proposed to counter irrational spending behaviour [
], friction
designs are now widespread on streaming services (e.g., YouTube,
Netix) to counter binge-watching. Similar nudges could oppose
innite scrolling, defaults, and mindless consent to data sharing
and extensive online tracking.
The study participants who recognised more dark patterns also reported a lower likelihood of being influenced by them. This suggests that the ability to recognise a threat is intimately related to the ability to protect oneself [ ]. The disclaimer that manipulative elements may be contained in the interfaces had the effect of activating participants' counterfactual thinking and encouraged a more reflective way of processing information. An educational intervention like long-term boosts can build manipulation-protection abilities and empower people to apply them without having to resort to deliberate thinking in the long run [ ], like the procedural rules proposed in Graßl et al. [29]: "every time I encounter a cookie consent dialogue, I search for the 'refuse all' button".
However, the cost of resisting dark patterns varies depending on whether they employ coercive, nudging, or deceptive strategies, and on their specific implementations. Coercive dark patterns (e.g., forced consent) are inescapable: if someone desires to use a service that integrates a coercive design, they do not have the possibility of avoiding it (i.e., 'take-it-or-leave-it'). Dark patterns preventing individuals from accomplishing a task are similarly daunting, since opposing them would come with a high cognitive cost. For instance, only a (motivated) minority of website visitors is willing to take additional steps to adjust their preferences on cookie dialogues [ ]. Nudging strategies may be implemented variously and thereby exert more or less influence on users (e.g., mild vs aggressive dark patterns [ ]). Deceptive strategies, on the other hand, can be resisted only through the activation of counterfactual thinking.
6.4 Eliminating dark patterns from digital services

Dark patterns are present in more than 10% of global shopping websites [ ], in almost 90% of cookie consent dialogues of the top 10,000 websites in the UK [ ], and in more than 95% of the 200 most popular apps [ ]. To respond to such pervasiveness, technical solutions that ease autonomous decision making can be devised, like Do Not Track, the add-on extension Consent-O-Matic, or browser plug-ins that disable other plug-ins, e.g., those that create scarcity messages.
Digital nudges that counteract dark patterns (i.e., bright patterns) by, for example, making the privacy-savvy option more salient, modify the environment where users make choices. There is a rich literature concerning design nudges that enhance privacy decision making [ ], including personalised nudges adapted to individual decision-making styles [ ]. Graßl et al. [29] found, though, that bright patterns nourish users' perception of lack of control, as they act on unreflective behaviour in the same way as dark patterns.

15 …-US/firefox/addon/consent-o-matic/
Coercive and deceptive dark patterns (e.g., forced consent, trick questions), though, cannot be defeated through digital nudges. A complementary design intervention consists of promoting good practices through the publication of design guidelines [ ] and the involvement of companies in problem-solving activities on concrete case studies [ ]. Ethical design tool-kits can be employed to foresee the consequences of certain designs comprehensively. At the same time, persuasive technology heuristics (e.g., [ ]) may be adapted to assess the potential manipulative effects of digital products even before their release. Building on such initiatives and [ ], we plan to develop a standardised transparency impact assessment process for interface design.
However, given the omnipresence of dark patterns on online services, it is somewhat unrealistic to expect businesses to implement such interventions on their own: economic incentives and regulatory interventions should complement other proposed actions. Legal safeguards should apply more stringently, as many dark patterns are unlawful in the EU under consumer law (e.g., omission of information, obstruction to subscription cancellation) [ ] and the data protection regime (e.g., forced consent, loss-gain framing) [ ]. Stiff penalties can furthermore act as a deterrent: in France, for instance, Google has received fines for a total of 150 million euros due to invalid consent design and elicitation [ ]. Empirical research demonstrating the presence, diffusion and effects of manipulative designs might have an impact on legal enforcement: cookie consent dialogues increasingly offer privacy-by-default options as a result of case law (e.g., the landmark case Planet49 [ ]) and, conceivably, of intense academic scrutiny. The threat of more stringent regulations (e.g., the US Social Media Addiction Reduction Technology Act) and public pressure (e.g., derived from the popularity of documentaries like "The social dilemma") may even encourage self-regulation.
6.5 Targeting interventions - older vs younger
Our results highlight that older generations are not only less able to recognise manipulative attempts, but they are also less aware that their choices and behaviour can be influenced. This could be problematic, as perceived vulnerability to harm is a key factor to trigger self-protection [ ]. The combination of lack of awareness and lack of capability makes dark patterns' effects particularly dangerous for older adults, as they struggle to adapt their learned self-protection abilities to evolving (digital) environments [ ], echoing findings about online misinformation [ ]. That said, it is arguably easier to define ad hoc protections addressed to younger populations (e.g., like the ICO's "Age Appropriate Design" code of practice [ ]) than to older ones: how would targeted safeguards for over 40s be enacted and received? Moreover, our findings do not suggest any significant correlation with the likelihood to be influenced, although it would be worthwhile to expand research in this direction. The study results show that an age lower than 40 years and an education level higher than a high school diploma constitute a critical threshold for recognising dark patterns, and could indicate that the other part of the population is noticeably less likely to be aware of manipulative attempts online.
7 Limitations

The choice of a large representative sample of the UK population for this survey has the objective of generalising the findings to the whole population. However, given that Prolific is an online platform, participants are probably more accustomed to online designs than the average (e.g., the distribution of the participants' online use frequency shows a positive skew). Therefore, our results might overestimate what a less tech-savvy UK population is aware of when it comes to manipulative designs online. It would also be interesting to find out whether the study would reach different conclusions in other countries. It should further be mentioned that the participants' likelihood to be influenced by manipulative designs was derived from a self-reported measure and does not necessarily reflect actual behaviour, which makes the measure an approximation and invites further research.

Following the ratings of awareness, harm, and worry in part one of the survey, the participants were invited to cite examples. Whilst we explicitly asked about the influence of manipulative designs, people also cited manipulative content (e.g., fake news), signalling that it is not obvious for people to distinguish form from content. We thus assume that the participants' awareness of and concern about the influence caused by manipulative designs may be lower than indicated by the results. Concerning the dark pattern detection activity, we are aware that explicitly searching for manipulative design elements does not entirely correspond to a real use situation. We sought to counterbalance the effect through the time limit and the allusion that some interfaces would not contain manipulative elements. That said, we also estimate that such settings could correspond to a real-world scenario where people notice something odd and activate counterfactual thinking [6].
8 Conclusion

Manipulative designs are a growing threat in the online environment. Practitioners and researchers from multiple domains (HCI, computer science, law, etc.) currently seek to expose and counteract their influence on user behaviour. Yet, to shield users effectively, it is essential to understand their capabilities when confronted with manipulative designs. This study shows that individuals are aware of manipulative designs' potential influence on their behaviour and rather capable of recognising such designs. While an inverse correlation between dark pattern recognition and participants' likelihood to be influenced was found, the level of awareness did not play a significant role in predicting their ability to resist manipulative designs. This finding implies that raising awareness of the issue is not sufficient to shield users from the influence of dark patterns.
Our discussion presented a palette of interventions (i.e., design, technical, educational, and regulatory measures) meant to heighten people's awareness of manipulative design practices, ease their detection, strengthen people's resistance to them, or root them out. We believe that design measures (like frictions and bright patterns) and technical solutions (like automated dark pattern detection applications) should be further investigated, together with assessment tools, economic incentives and regulatory solutions.

To complement scholars' and authorities' views on the issue, we suggest exploring established dark pattern attributes in combination with the user perspective as part of future work. Only by understanding which (combinations of) attributes are commonly perceived as unrecognisable, irresistible and/or unacceptable by users can we devise appropriate interventions. Moreover, the exploration of user perceptions can help establish what end-users deem legitimate and what they do not, without taking a normative stance. Looking at dark patterns from the user perspective shows that they are a problem with many variables. As such, they require that a variety of actors team up to devise a kaleidoscope of interventions. Designers should be on the front line to help tame the monster they contributed to creating.
This publication is a rst step of the project Decepticon (grant no.
IS/14717072) supported by the Luxembourg National Research Fund
(FNR). We would like to thank the anonymous reviewers of DIS
2021 and CHI 2021 for their helpful comments and all those who
have helped us rene the study design.
Alessandro Acquisti, Idris Adjerid, Rebecca Balebako, Laura Brandimarte, Lor-
rie Faith Cranor, Saranga Komanduri, Pedro Giovanni Leon, Norman Sadeh,
Florian Schaub, Manya Sleeper, et al
2017. Nudges for privacy and security:
Understanding and assisting users’ choices online. ACM Computing Surveys
(CSUR) 50, 3 (2017), 1–41.
Devdatta Akhawe and Adrienne Porter Felt. 2013. Alice in Warningland: A Large-
Scale Field Study of Browser Security Warning Eectiveness. In 22nd USENIX
Security Symposium (USENIX Security 13). USENIX Association, Washington,
D.C., 257–272.
Adam Alter. 2017. Irresistible: The rise of addictive technology and the business of
keeping us hooked. Penguin Publishing Group.
Susanne Barth and Menno DT De Jong. 2017. The privacy paradox–Investigating
discrepancies between expressed privacy concerns and actual online behavior–A
systematic literature review. Telematics and informatics 34, 7 (2017), 1038–1058.
Christoph Bösch, Benjamin Erb, Frank Kargl, Henning Kopp, and Stefan Pfatthe-
icher. 2016. Tales from the dark side: Privacy dark strategies and privacy dark
patterns. Proceedings on Privacy Enhancing Technologies 2016, 4 (2016), 237–254.
David M. Boush, Marian Friestad, and Peter Wright. 2009. Deception In The
Marketplace: The Psychology of Deceptive Persuasion and consumer self-protection
(rst edition ed.). Rouledge.
Nadia M. Brashier and Daniel L. Schacter. 2020. Aging in an Era of Fake News.
Current Directions in Psychological Science 29, 3 (Jun 2020), 316–323. https:
Cristian Bravo-Lillo, Lorrie Cranor, Saranga Komanduri, Stuart Schechter, and
Manya Sleeper. 2014. Harder to ignore? Revisiting pop-up fatigue and approaches
to prevent it. In 10th Symposium On Usable Privacy and Security (
[9] Harry Brignull. 2010. Dark Patterns.
John Brownlee. 2016. Why Dark Patterns Won’t Go Away. https://www. wont-go- away
[11] Régis Chatellier, Georey Delcroix, Estelle Hary, and Camille Girard-Chanudet.
2019. Shaping choices in the digital world. From dark patterns to data protection:
the inuence of ux/ui design on user empowerment. Technical Report. CNIL.
Shruthi Sai Chivukula and Colin M GRAY. 2020. Co-Evolving Towards Evil
Design Outcomes: Mapping Problem and Solution Process Moves. In Proceedings
of the Design Research Society Conference.
Shruthi Sai Chivukula, Colin M Gray, and Jason A Brier. 2019. Analyzing value
discovery in design decisions through ethicography. In Proceedings of the 2019
CHI Conference on Human Factors in Computing Systems. 1–12.
Aware but manipulated DIS 2021, June 28– July 2, 2021, Virtual event, USA
Shruthi Sai Chivukula, Chris Rhys Watkins, Rhea Manocha, Jingle Chen, and
Colin M Gray. 2020. Dimensions of UX Practice that Shape Ethical Awareness. In
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
CNIL. [n.d.]. Données & Design. Co-building user journeys compliant with the
GDPR and respectful of privacy.
CNIL. 2019. Deliberation of the Restricted Committee SAN-2019-001 of 21 January 2019 pronouncing a financial sanction against GOOGLE LLC. https://www.cnil.fr/sites/default/files/atoms/files/san-2019-001.pdf
CNIL. 2020. Cookies: financial penalties of 60 million euros against the company GOOGLE LLC and of 40 million euros against the company GOOGLE IRELAND LIMITED. financial-penalties-60-million-euros-against-company-google-llc-and-40-million-euros-google-ireland
Gregory Conti and Edward Sobiesk. 2010. Malicious interface design: exploiting
the user. In Proceedings of the 19th international conference on World wide web -
WWW ’10. ACM Press, 271.
Anna L. Cox, Sandy J.J. Gould, Marta E. Cecchinato, Ioanna Iacovides, and Ian Renfree. 2016. Design Frictions for Mindful Interactions: The Case for Microboundaries. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16). Association for Computing Machinery.
Linda Di Geronimo, Larissa Braz, Enrico Fregnan, Fabio Palomba, and Alberto Bacchelli. 2020. UI dark patterns and where to find them: a study on mobile applications and user perception. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
Verena Distler, Gabriele Lenzini, Carine Lallemand, and Vincent Koenig. 2020. The Framework of Security-Enhancing Friction: How UX Can Help Users Behave More Securely. In New Security Paradigms Workshop 2020. ACM, 45–58.
[22] EDPB. 2020. Guidelines 05/2020 on consent under Regulation 2016/679.
Trine Falbe, Kim Andersen, and Martin Michael Frederiksen. 2020. The ethical
design handbook. Smashing Media AG.
Forbrukerrådet. 2021. You can log out, but you can never leave. How Amazon manipulates consumers to keep them subscribed to Amazon Prime. https:// 01-14-you-can-log-out-but-you-can-never-leave-final.pdf
Colin M. Gray, Jingle Chen, Shruthi Sai Chivukula, and Liyang Qu. 2020. End User Accounts of Dark Patterns as Felt Manipulation. arXiv:2010.11046 [cs] (Oct 2020).
Colin M. Gray and Shruthi Sai Chivukula. 2019. Ethical mediation in UX practice. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
Colin M. Gray, Shruthi Sai Chivukula, and Ahreum Lee. 2020. What Kind of Work Do "Asshole Designers" Create? Describing Properties of Ethical Concern on Reddit. In Proceedings of the 2020 ACM on Designing Interactive Systems Conference.
Colin M. Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L. Toombs. 2018. The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
Paul Graßl, Hanna Schraffenberger, Frederik Zuiderveen Borgesius, and Moniek Buijzen. 2021. Dark and Bright Patterns in Cookie Consent Requests. Journal of Digital Social Research 3, 11 (Feb 2021), 1–38.
Saul Greenberg, Sebastian Boring, Jo Vermeulen, and Jakub Dostal. 2014. Dark
patterns in proxemic interactions: a critical perspective. In Proceedings of the 2014
conference on Designing interactive systems. 523–532.
Pelle Guldborg Hansen and Andreas Maaløe Jespersen. 2013. Nudge and the ma-
nipulation of choice: A framework for the responsible use of the nudge approach
to behaviour change in public policy. European Journal of Risk Regulation 4, 1
(2013), 3–28.
Ralph Hertwig and Till Grüne-Yanoff. 2017. Nudging and Boosting: Steering or Empowering Good Decisions. Perspectives on Psychological Science 12, 6 (Nov 2017), 973–986.
Soheil Human and Florian Cech. 2020. A Human-centric Perspective on Digital
Consenting: The Case of GAFAM. In Human Centred Intelligent Systems. Springer,
Arushi Jaiswal. 2018. Dark patterns in UX: how designers should be responsible for their actions. patterns-in-ux-design-7009a83b233c
Georgios Kampanos and Siamak F. Shahandashti. 2021. Accept All: The Landscape of Cookie Banners in Greece and the UK. arXiv:2104.05750 [cs] (Apr 2021).
Lexie Kane. 2019. The Attention Economy.
Julie A. Kientz, Eun Kyoung Choe, Brennen Birch, Robert Maharaj, Amanda Fonville, Chelsey Glasson, and Jen Mundt. 2010. Heuristic evaluation of persuasive health technologies. In Proceedings of the ACM international conference on Health informatics - IHI ’10. ACM Press, 555.
[38] UXP2 Lab. 2018. The dark side of UX Design.
C. Lacey and C. Caudwell. 2019. Cuteness as a ‘Dark Pattern’ in Home Robots. In
2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
M. R. Leiser. 2020. “Dark Patterns”: The Case for Regulatory Pluralism. SSRN Repository (Jun 2020).
Tim-Benjamin Lembcke, Nils Engelbrecht, Alfred Benedikt Brendel, Bernd Herrenkind, and Lutz M. Kolbe. 2019. Towards a Unified Understanding of Digital Nudging by Addressing its Analog Roots. In PACIS.
Marco Lippi, Przemysław Pałka, Giuseppe Contissa, Francesca Lagioia, Hans-Wolfgang Micklitz, Giovanni Sartor, and Paolo Torroni. 2019. CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service. Artificial Intelligence and Law 27, 2 (2019), 117–139.
Jamie Luguri and Lior Strahilevitz. 2019. Shining a light on dark patterns. U of
Chicago, Public Law Working Paper 719 (2019).
Aditi M. Bhoot, Mayuri A. Shinde, and Wricha P. Mishra. 2020. Towards the Identification of Dark Patterns: An Analysis Based on End-User Reactions. In IndiaHCI ’20: Proceedings of the 11th Indian Conference on Human-Computer Interaction (IndiaHCI 2020). Association for Computing Machinery, 24–33.
Dominique Machuletz and Rainer Böhme. 2020. Multiple purposes, multiple
problems: A user study of consent dialogs after GDPR. Proceedings on Privacy
Enhancing Technologies 2020, 2 (2020), 481–498.
Maximilian Maier and Rikard Harr. 2020. Dark Design Patterns: An End-user
Perspective. Human Technology 16, 2 (2020), 170–199.
Arunesh Mathur, Gunes Acar, Michael J Friedman, Elena Lucherini, Jonathan
Mayer, Marshini Chetty, and Arvind Narayanan. 2019. Dark patterns at scale:
Findings from a crawl of 11K shopping websites. Proceedings of the ACM on
Human-Computer Interaction 3, CSCW (2019), 1–32.
Arunesh Mathur, Jonathan Mayer, and Mihir Kshirsagar. 2021. What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods. arXiv:2101.04843 [cs] (Jan 2021).
Arunesh Mathur, Angelina Wang, Carsten Schwemmer, Maia Hamin, Brandon M.
Stewart, and Arvind Narayanan. 2020. Manipulative tactics are the norm in
political emails: Evidence from 100K emails from the 2020 U.S. election cycle.
Célestin Matte, Nataliia Bielova, and Cristiana Santos. 2020. Do Cookie Banners
Respect my Choice?: Measuring Legal Compliance of Banners from IAB Europe’s
Transparency and Consent Framework. In 2020 IEEE Symposium on Security and
Privacy (SP). IEEE, 791–809.
Christian Meske and Ireti Amojo. 2020. Ethical Guidelines for the Construction
of Digital Nudges. In 53rd Hawaii International Conference on Systems Sciences
(HICSS). 3928–3937. arXiv: 2003.05249.
Michael Chromik, Malin Eiband, Sarah Theres Völkel, and Daniel Buschek. 2019. Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems. In 2nd Workshop on Explainable Smart Systems at the ACM Conference on Intelligent User Interfaces (IUI’19).
Carol Moser, Sarita Y. Schoenebeck, and Paul Resnick. 2019. Impulse Buying: Design Practices and Consumer Needs. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19. ACM Press, 1–15.
Netherlands Authority for Consumers & Markets. 2020. ACM Guidelines on the Protection of the Online Consumer. Boundaries of Online Persuasion. files/documents/2020-02/acm-guidelines-on-the-protection-of-the-online-consumer.pdf
Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, and Lalana Kagal.
2020. Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating
their inuence. In Proceedings of the 2020 CHI Conference on Human Factors in
Computing Systems. 1–13.
Court of Justice of the European Union. 2019. C-673/17 - Planet49. Judgment of the Court (Grand Chamber) of 1 October 2019. Bundesverband der Verbraucherzentralen und Verbraucherverbände - Verbraucherzentrale Bundesverband e.V. v Planet49 GmbH.
Information Commissioner’s Office. 2020. Age appropriate design: a code of practice for online services. to-data-protection/key-data-protection-themes/age-appropriate-design-a-code-of-practice-for-online-services/
Stigler Committee on Digital Platforms. 2019. “Final Report,” Stigler Center for the Study of the Economy and the State. (Sep 2019). https://research.chicagobooth.edu/stigler/media/news/committee-on-digital-platforms-final-report
Eyal Peer, Serge Egelman, Marian Harbach, Nathan Malkin, Arunesh Mathur, and Alisa Frik. 2020. Nudge Me Right: Personalizing Online Security Nudges to People’s Decision-Making Styles. Computers in Human Behavior 109, August 2020, 106347 (2020).
Forbrukerrådet. 2018. Deceived by design. How tech companies use dark patterns to discourage us from exercising our rights to privacy. Technical Report.
Ronald W. Rogers and Steven Prentice-Dunn. 1997. Protection motivation theory (D. S. Gochman, Ed.). Plenum Press, 113–132.
Cristiana Santos, Nataliia Bielova, and Célestin Matte. 2020. Are cookie banners indeed compliant with the law? Technology and Regulation 2020 (Dec 2020).
Tali Sharot. 2011. The optimism bias. Current biology 21, 23 (2011), R941–R945.
Simon Shaw. 2019. Consumers Are Becoming Wise to Your Nudge. https:
// becoming-wise-to- your-nudge/
Sijun Wang, Sharon E. Beatty, and William Foxx. 2004. Signaling the Trustworthiness of Small Online Retailers. Journal of Interactive Marketing (John Wiley and Sons) 18, 1 (2004), 53–69.
Natasha Singer. 2016. When Websites Won’t Take No for an Answer. The New York Times (May 2016). personaltech/when-websites-wont-take-no-for-an-answer.html
Than Htut Soe, Oda Elise Nordberg, Frode Guribye, and Marija Slavkovik. 2020.
Circumvention by design–dark patterns in cookie consents for online news
outlets. In NordiCHI 2020.
Daniel Susser, Beate Roessler, and Helen Nissenbaum. 2019. Technology, Autonomy, and Manipulation.
Daniel Susser, Beate Roessler, and Helen F. Nissenbaum. 2018. Online Manipula-
tion: Hidden Inuences in a Digital World. Georgetown Law Technology Review 1
(2018), 1–45.
[70] Unknown. [n.d.]. Confirmshaming. https://con
[71] Unknown. 2019. Dark pattern games.
Christine Utz, Martin Degeling, Sascha Fahl, Florian Schaub, and Thorsten Holz. 2019. (Un)informed Consent: Studying GDPR Consent Notices in the Field. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 973–990.
Ari Ezra Waldman. 2020. Cognitive biases, dark patterns, and the ‘privacy
paradox’. Current opinion in psychology 31 (2020), 105–109.
Rick Wash. 2010. Folk Models of Home Computer Security. In Proceedings of the
Sixth Symposium on Usable Privacy and Security (Redmond, Washington, USA)
(SOUPS ’10). Association for Computing Machinery, New York, NY, USA, Article
11, 16 pages.
Christopher Rhys Watkins, Colin M. Gray, Austin L. Toombs, and Paul Parsons. 2020. Tensions in Enacting a Design Philosophy in UX Practice. In Proceedings of the 2020 ACM on Designing Interactive Systems Conference. 2107–2118.
Ryan West. 2008. The psychology of security. Commun. ACM 51, 4 (2008), 34–40.
José P. Zagal, Staffan Björk, and Chris Lewis. 2013. Dark Patterns in the Design of Games. In Proceedings of the 8th International Conference on the Foundations of Digital Games (FDG 2013). 39–46.