"I am Definitely Manipulated, Even When I am Aware of it. It’s
Ridiculous!" - Dark Paerns from the End-User Perspective
Kerstin Bongard-Blanchy
kerstin.bongard-blanchy@uni.lu
University of Luxembourg
Esch sur Alzette, Luxembourg
Arianna Rossi
arianna.rossi@uni.lu
SnT, University of Luxembourg
Luxembourg, Luxembourg
Salvador Rivas
salvador.rivas@uni.lu
University of Luxembourg
Esch sur Alzette, Luxembourg
Sophie Doublet
sophie.doublet@uni.lu
University of Luxembourg
Esch sur Alzette, Luxembourg
Vincent Koenig
vincent.koenig@uni.lu
University of Luxembourg
Esch sur Alzette, Luxembourg
Gabriele Lenzini
gabriele.lenzini@uni.lu
SnT, University of Luxembourg
Luxembourg, Luxembourg
ABSTRACT
Online services pervasively employ manipulative designs (i.e., dark
patterns) to influence users to purchase goods and subscriptions,
spend more time on-site, or mindlessly accept the harvesting of
their personal data. To protect users from the lure of such designs,
we asked: are users aware of the presence of dark patterns? If so, are
they able to resist them? By surveying 406 individuals, we found
that they are generally aware of the influence that manipulative
designs can exert on their online behaviour. However, being aware
does not equip users with the ability to oppose such influence. We
further find that respondents, especially younger ones, often recognise
the "darkness" of certain designs, but remain unsure of the actual
harm they may suffer. Finally, we discuss a set of interventions
(e.g., bright patterns, design frictions, training games, applications
to expedite legal enforcement) in the light of our findings.
CCS CONCEPTS
• Security and privacy → Social aspects of security and privacy;
Usability in security and privacy; • Human-centered computing →
Empirical studies in HCI; Graphical user interfaces.
KEYWORDS
dark patterns, online manipulation, digital nudging, consumer
protection, user experience, user interface
ACM Reference Format:
Kerstin Bongard-Blanchy, Arianna Rossi, Salvador Rivas, Sophie Doublet,
Vincent Koenig, and Gabriele Lenzini. 2021. "I am Definitely Manipulated,
Even When I am Aware of it. It’s Ridiculous!" - Dark Patterns from the End-
User Perspective. In ACM DIS Conference on Designing Interactive Systems,
June 28– July 2, 2021, Virtual event, USA. ACM, New York, NY, USA, 14 pages.
https://doi.org/10.1145/3461778.3462086
1 INTRODUCTION
The pervasiveness of manipulative practices in online services is
increasingly under the limelight. Thanks to information technologies,
manipulative practices can be implemented at low costs, at
large scale, with unprecedented sophistication [48] and high
effectiveness [69] in dynamic, interactive, intrusive, and adaptive
environments [68]. Such online practices endeavour to influence
purchase decisions, nudge people to spend considerable amounts
of time on a service (thus intensifying data collection to fuel the
so-called attention economy [36]), and trick users into accepting
privacy-invasive features, thereby undermining their right to the
protection of their personal information and exposing them to
privacy harms. Online manipulation does not only erode legal
protections, but it also deprives unaware individuals of their capacity
for independent decision-making [68]. Therefore, the phenomenon
is scrutinised by a growing number of practitioners and researchers,
with the aim of exposing it and devising countermeasures.
This article focuses on a specific form of online manipulation:
dark patterns. They are defined as "design choices that benefit an
online service by coercing, steering or deceiving users into making
decisions that, if fully informed and capable of selecting alternatives,
they would not make" [47]. Dark patterns direct user behaviour
towards choices that may offer advantages like ease of use, free
services and immediate gratification. However, such choices may have
an adverse impact on individual welfare (e.g., invasion of privacy,
financial loss, behavioural addiction) and collective welfare (e.g.,
harm to competition, erosion of consumer trust) [48]. Dark patterns
are believed to work because they exploit cognitive biases and
human bounded rationality [5, 73], and a growing body of research
demonstrates the influence of dark patterns on online behaviours.
Although experts voice their apprehension, users' awareness of
dark patterns is still an under-researched topic and has
only recently become the object of dedicated studies [28, 46].
The study presented in this article seeks to fill this gap by
determining whether dark patterns exploit (1) users' lack of awareness
or concern; (2) users' incapability of recognising dark patterns (the
so-called "dark pattern-blindness" [20]); or (3) users' inability to
resist dark patterns, despite their awareness and ability to recognise
them. By investigating the user perspective, we aim to identify
requirements for effective countermeasures that can either act on
the individual or on factors lying outside the individual. If users are
unaware of or unconcerned about dark patterns, one solution consists
in strengthening their motivation to counteract them (e.g., using
warnings to increase the salience of risks). Suppose users are
concerned about the risks deriving from dark patterns but are
nevertheless unable to withstand them. In that case, their ability
to resist needs to be improved (e.g., adding friction designs that
disrupt automatic behaviour), while stronger environmental
protections should also be leveraged (e.g., steep fines against companies
employing dark patterns).
This article makes the following contributions to understanding
users' awareness of manipulative designs online: (1) It reveals that
users are able to recognise dark patterns but are only vaguely
aware of the concrete harm entailed. It furthermore hints that a
higher ability to discern manipulative designs is positively related to
the capacity to self-protect. Moreover, the study shows that people
under 40 and with education beyond a high school diploma
are more likely to recognise dark patterns. (2) The findings guide
designers, educators, developers, and regulators in drafting appropriate
interventions to counteract manipulative designs online, both in
terms of intervention scope and measure.
2 RELATED WORK
Previous work has investigated the presence of dark patterns in
online services [18, 20, 33, 50, 67], even through automated means like
web scraping [35, 47, 55]. Various categories and definitions have
been proposed to characterise the phenomenon in general [9, 28],
but also specifically to account for video games [77], ubiquitous
computing [30], automated systems [52] and home robots [39], or
in the context of data privacy [5, 11] and e-commerce [47]. Examples
of dark pattern implementations have been gathered in online
collections [9, 38, 49, 70, 71] to create knowledge, raise awareness,
propose alternatives and build training corpora for algorithms.
Although there is a consensus that dark patterns can employ and
even combine coercive, deceptive, and nudging strategies, clear
boundaries between (inadmissible) manipulative designs and other
(admissible) designs (e.g., digital nudges helping users reach
praiseworthy goals, like adopting more secure behaviour online) are yet
to be set. Mathur et al. [48] recently sought to bring coherence to
the existing jungle of definitions and attributes by establishing that
some dark patterns modify the decision space in an asymmetric,
restrictive, unequal or covert manner. In contrast, others manipulate
the information flow through deception or the concealment
of information. They also map out the normative considerations
that underpin the problematic nature of dark patterns with respect
to other designs: dark patterns diminish individual and collective
welfare, weaken regulatory objectives and undermine individual
autonomy.
A growing number of studies demonstrates the effect of dark
patterns on online behaviour and strives to find the causes of their
effectiveness, especially of those extorting consent in cookie
dialogues [29, 33, 45, 55, 67, 72]. It has been proposed [5, 73] that
innate human cognitive limitations (e.g., cognitive biases, bounded
rationality) are skilfully exploited by online services to direct users
toward choices they may regret [58]. Examples are the status quo
bias, which benefits from the human tendency to stick with the default
option, and the bandwagon effect, which leverages herd behaviour.
Certain cognitive biases might interfere with risk assessment [4]
and can thereby explain poor decision-making. For example,
hyperbolic discounting causes people to overvalue current rewards (e.g.,
accomplishing a task), while they inadequately discount the cost of
future risks [73] (e.g., privacy invasion). The optimism bias [63]
might make individuals underestimate their disposition to online
manipulation.
Some scholars have investigated whether individuals are able to
identify dark patterns. Di Geronimo et al. [20] introduced the notion
of "dark pattern-blindness" to explain why most respondents (i.e.,
well-educated, of various origins) in their study were not able to
recognise dark patterns in mobile applications. However, when
the study participants were informed of the potential presence of
dark patterns in the context at hand, they became more capable
of spotting them. Luguri and Strahilevitz [43] showed that mild
(i.e., more subtle) dark patterns go unnoticed more easily than
aggressive ones and that less-educated individuals are significantly
more likely to be influenced than more educated subjects. Shaw
[64] noticed how the overuse of scarcity and social proof messages
on travel websites makes consumers ignore them even on other
types of websites. On a similar note, M. Bhoot et al. [44] found that
the ability to identify a dark pattern is correlated with its frequency
of occurrence and the frustration it provokes. If the interface is
appealing, respondents tend to experience less frustration and hardly
notice manipulative attempts. The experimental data shows that
certain design attributes can influence people's capacity to spot
and resist dark patterns.
The attitudes of various stakeholders towards dark patterns have
been explored, too. Design practitioners [9, 10, 23, 34, 66] have been
the first to voice their concerns over these questionable practices.
Similar considerations have been developed by regulators [11, 54]
and consumer organisations [24, 60]. Several studies [12–14, 26–28, 75]
have analysed practitioners' ethical values and their conflict
with other stakeholders' interests. However, only recently has it
been asked whether dark patterns are a source of concern for
end-users. Maier and Harr [46] found a rising awareness and a general
sense of annoyance among Swedish students. They also uncovered
resignation, as their respondents believed it impossible to avoid
online manipulation, and they acknowledged that the benefits (e.g.,
free service) outweigh the negative consequences. In an analogous
study [25], English-speaking and Mandarin-speaking respondents
evoked a general impression of manipulation in digital products.
They were able to identify what makes them grow suspicious, even
though they lacked a specific vocabulary to indicate the source of
that feeling.
In terms of solutions, Graßl et al. [29] used design nudges (so-called
bright patterns) to reverse the direction of dark patterns
and steer users' consent decisions towards the privacy-friendly
option (e.g., pre-selection of the "Do not agree" option). They also
recommended long-term boosts that help users acquire procedural
rules, because the repeated use of analytic thinking converts into
protection heuristics (e.g., every time I encounter a consent request,
I take the time to read the information before making a choice).
Based on a survey among impulse buyers, Moser et al. [53] proposed
friction designs that counteract dark pattern mechanisms in
purchase decisions (e.g., disabling urgency and scarcity messages).
M. Bhoot et al. [44] and Mathur et al. [47] suggested a plug-in
or browser extension that automatically detects dark patterns on
websites and notifies the user. Leiser [40] discussed the regulatory
tools that can be leveraged to prohibit and fine these practices. In
parallel, Maier and Harr [46] assumed that dark patterns diminish
customers' trust in a brand and its credibility in the long term,
leading customers to stop using the service.
3 RESEARCH GAPS AND RESEARCH
QUESTIONS
The spectrum of possible dark pattern design implementations is
vast, ranging from coercive designs that constrain user options to
nudges that subtly play on the visual prominence of one choice over
another. Thus, it is impossible to identify one single intervention
that could free the web from all dark patterns. Drafting appropriate
interventions is hence a design problem in itself. Before working
on solutions, it is however indispensable to understand which
user issue the interventions aim to solve.
Individuals may execute a threat appraisal [61] that makes them
believe that dark patterns do not inflict serious harm. They may
also think that they are invulnerable - or at least less vulnerable
than others, as is customary in online risk appraisal [76]. We
therefore asked:
RQ1 Are users aware of and concerned about the influence of
manipulative interface designs on their behaviour?
Di Geronimo et al. [20] concluded that individuals are subject to
'dark pattern-blindness'. It is also assumed that manipulation is a
hidden influence, while coercion is not [69], and that nudges work
only when people are unaware of the influence that is exerted on
them [41]. This is why we asked:
RQ2 Are users able to recognise manipulative interface designs?
It is further assumed that the transparency (i.e., the visibility)
of an influence is a crucial dimension for its acceptability, because
it gives the opportunity to control the influence [31], for instance
by resisting it. However, transparency may not be sufficient to
counter the influence: resignation, benefits (e.g., free service) [46], the
cognitive costs of opposing dark patterns and other factors might
undermine the ability to resist. This is why we sought to relate
users' awareness of dark patterns, their ability to detect them, and
the influence dark patterns exert on them, by asking:
RQ3 Are users likely to be influenced by manipulative interface
designs despite being aware of, concerned about, and capable
of recognising manipulative interface designs?
Additionally, lower educational levels seem to be correlated with
a greater influence of dark patterns on consumer behaviour [43].
Therefore, we explored whether level of education, age, and use
frequency of online services are significantly associated with the
three research questions.
4 APPROACH
4.1 Study design
To investigate the three research questions, we designed an online
survey on LimeSurvey,1 administered through Prolific.2 People
were first asked about their general mindset concerning manipulative
designs online, followed by ratings of their online behaviour,
before being exposed to specific dark pattern designs (Figure 1). In
addition, demographic data regarding their gender, age, and education
was gathered. All questions were mandatory, except for a final
general feedback field. The three parts of the survey are detailed in
the following.
Figure 1: Sequence of the questions in the survey.
1 https://www.limesurvey.org/
2 https://www.prolific.co/
4.1.1 Part 1: Awareness and concern. The first part of the survey
addressed participants' awareness of and concerns about the potential
influence of online designs. Six statements were displayed in pairs
(one pair per page), opposing a general perspective ("people/others")
and a personal perspective ("my/me"):
• The design of websites or applications can influence [people's/my] choices and behaviours.
• Websites or applications that are designed to manipulate3 users can cause harm to [people/me].
• I am worried about the influence of manipulative websites and applications on [people's/my] choices and behaviours.
Participants were instructed to rate their agreement on a 5-point
Likert scale (from -2 = strongly disagree to 2 = strongly agree). If
they gave an affirmative or undecided answer (0, 1 or 2), they
were furthermore invited to cite examples of experienced influence,
potential harm and related worries after each statement pair.
4.1.2 Part 2: Use frequency of online services and disposition to
manipulation. The second part served to complement the demographic
data. To obtain a proxy of participants' exposure to online
services, participants had to rate the frequency of their engagement
with eight common services (How often do you: play online games /
order products online / use social media / etc.?). As a second indicator,
we sought to gauge the participants' disposition to be influenced
by manipulative designs online. To this end, participants had to
indicate their usual behaviour in eight situations in which web
services commonly employ manipulative strategies (While using
online services: I reserve a service quickly when there are only a few
items left. / I keep the default permissions when I install an app. / etc.).
We randomised the item order for both tasks and kept the phrasing
neutral to avoid the actions being perceived as undesirable behaviour.
3 In the second and third question of part 1, the term "manipulate/manipulative" was
chosen to indicate dark patterns in a commonly understandable manner, while in the
first question "influence" was preferred to avoid negative priming. Similarly, we used
both terms in Part 3: Spot the dark pattern. In certain cases, we deliberately chose the
term manipulation because influence of design can be very widely interpreted and
lead to answers that are not relevant for the research at hand (as the answers to the
first open question illustrate).
4.1.3 Part 3: Spot the dark pattern. The third part served to evaluate
the participants' capability to recognise different dark pattern
types. Ten interfaces of existing online services were displayed in
random order. The interfaces had been redesigned in a uniform style
and freed of any reference to a real brand. One example without
any dark pattern was included as a control condition. The other nine
examples contained dark patterns that impact individual welfare,
causing financial harm, data privacy harm, and time- and
attention-related harm. Within these categories, authors one and two gathered
numerous examples of existing interfaces with dark patterns
from reports [60], online collections4 and personal screenshots. The
interface selection represented a mix of popular and less known
brands of various services, such as e-commerce websites, dating
apps, and social media. Table 1 provides the definitions of the dark
patterns embedded in the interfaces shown in Fig. 2.
4 E.g., https://www.reddit.com/r/darkpatterns/
Figure 2: The interface designs tested in this study.
Each example was displayed for 10 to 40 seconds, depending on
its textual complexity. The participants were asked if they noticed
any design element that might influence their behaviour. It was
made explicit beforehand that not all examples contained such
elements. This indication and the time constraint served to limit
excessive searching that does not occur in a regular use context.
Once the image disappeared, the participants saw a thumbnail of
the interface and a text field. They had to describe the manipulative
element (i.e., the means of the influence) and the presumable intention
of the service (i.e., its ends) in employing that element. After going through
all ten interfaces, the participants were given an explanation about
the contained dark pattern(s). The explanations also pointed to
potential benefits of these designs for users (e.g., ease of use). To
conclude, the participants had to rate on a 5-point Likert scale (from
-2 = strongly disagree to 2 = strongly agree) whether they believed it
likely that they would be influenced by the displayed designs and whether
they considered the strategy employed by the online service acceptable.
4.2 Participants
The survey collected responses from 413 participants. The data of
seven individuals who gave gibberish answers were excluded from
the analysis, leaving a sample size of 406. Prolific allowed us to gather
a representative sample of the UK population in terms of age, gender
and ethnic origin.5 Since this option was only available for the UK
and the US, the former was selected to address participants living in
a uniformly regulated digital ecosystem. The demographics of the
participants were as follows: 193 male, 200 female, 13 non-disclosed.
Their age ranged from 18 to 81 years (mean 45.2, SD 15.5): Silent
Generation (75-92 years) = 3, Baby Boomers (56-74 years) = 130,
Generation X (40-55 years) = 112, Generation Y / Millennials (24-39
years) = 119, Generation Z / Zoomers (<24 years) = 42. Concerning
the level of education, 106 had a high school diploma or lower,
236 vocational training or a Bachelor's degree, and 64 were
postgraduates. Several iterations with 16 pre-test participants served
to enhance the comprehensibility of the questions and to reduce
the duration to max. 30 minutes to avoid participant fatigue. The
survey was published and completed on Prolific on July 7, 2020. All
participants were compensated with £3.75, a rate indicated as fair
by Prolific.6
5 https://researcher-help.prolific.co/hc/en-gb/articles/360019238413
6 https://researcher-help.prolific.co/hc/en-gb/articles/360009223533-What-is-your-pricing-
4.3 Ethical and Legal Considerations
The study adheres to the University of Luxembourg's research
ethics guidelines and the European Federation of Psychologists'
Associations' code of ethics.7 In addition, the authorisation of the
University's Ethics Review Board was obtained prior to the study.
The survey gathered answers anonymously, and the questions did
not inquire about information that would allow the identification
of participants.
7 http://ethics.efpa.eu/meta-and-model-code/meta-code/
4.4 Data analysis
4.4.1 Awareness of and concern about the influence of manipulative
online designs (RQ1). First, we calculated the mean, median and
mode scores for the awareness ratings. We then computed a
two-sided sign test to verify if the delta between the ratings referring to
the participants (i.e., personal perspective) and their ratings referring
to people in general (i.e., general perspective) was significant.
Bivariate Pearson correlations were furthermore used to analyse
how the personal awareness ratings correlate with the demographic
data. The qualitative answers on awareness, harm, and worry were
coded in MAXQDA8 through an inductive approach. Researcher
one coded 10 per cent (41 participants) of the sample and developed
a set of codes. Researcher two coded the same set with the possibility
to add codes. Non-agreement cases were discussed and codes
adapted. The same procedure was repeated with another set of 41
participants. Since the inter-coder agreement reached 0.81 (Kappa
Brennan & Prediger), researcher one finalised the coding for the
whole data set. The codes included use cases, influence objectives,
influence types, harm types, concern types and types of victims.
8 https://www.maxqda.com/
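Since the Brennan & Prediger variant of kappa is less common than Cohen's, a minimal sketch follows (our illustration in Python, not the authors' code; they worked in MAXQDA and Stata, and the coder labels below are hypothetical). It fixes chance agreement at 1/k for a codebook with k categories:

    # Brennan & Prediger kappa: (p_o - 1/k) / (1 - 1/k), where p_o is the
    # observed proportion of agreement and 1/k the fixed chance agreement.
    def brennan_prediger_kappa(codes_a, codes_b, k):
        """codes_a, codes_b: parallel code assignments from two coders;
        k: number of categories in the codebook."""
        assert len(codes_a) == len(codes_b) and k > 1
        p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)
        return (p_o - 1.0 / k) / (1.0 - 1.0 / k)

    # Hypothetical toy data with a 4-code codebook: 9 of 10 items agree
    coder1 = ["harm", "worry", "use", "harm", "victim", "use", "worry", "harm", "use", "harm"]
    coder2 = ["harm", "worry", "use", "use", "victim", "use", "worry", "harm", "use", "harm"]
    print(round(brennan_prediger_kappa(coder1, coder2, k=4), 2))  # (0.9 - 0.25) / 0.75 = 0.87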
4.4.2 Dark pattern detection (RQ2). The open answers to each
example in the third part of the survey were coded through a
deductive approach, by assigning a score depending on whether the
participant identified the manipulative design element(s) correctly
(no = 0 / partly = 0.5 / yes = 1). Each example showed one main dark
pattern, identified by the authors following the sources where the
examples were obtained. Further plausible manipulative design
elements found by the respondents were inductively included in
the pool of correct or partially correct answers. Researchers one
and two coded the answers of 10 per cent of the sample and developed
the codebook in a shared document. Non-agreement cases
were discussed and the codebook consequently adapted. The same
procedure was repeated with another random set of 41 participants
by researchers one and four. The inter-rater agreement reached
a kappa of 0.77 (Kappa Brennan & Prediger). Given the substantial
level of agreement, researcher one finalised the coding for the
whole data set. The quantitative data analysis was undertaken in
Stata9 (v.16.1). The dark pattern detection scores for each participant
were summed (ranging from 0 to 9). Since it is not possible
to draw a distinct line between high and low detection scores, we
could not transform detection outcomes into binary variables. An
OLS regression was hence chosen to control for significant differences
deriving from age, educational level, use frequency of online
services, and disposition to be influenced by online designs.
9 https://www.stata.com/
Type | Definition
High-demand message | Indicating that a product is in high demand and likely to sell out soon [47]
Limited-time message | Indicating that a deal will expire soon without specifying a deadline [47]
Confirmshaming | Using shame to steer users towards making a certain choice [47]
Trick question | Using confusing language to steer users towards making a certain choice [47]
Loss-gain framing | A selective disclosure of information that positively frames the consequences of an action, while omitting the entailed risks [6]
Pre-selection | An option is selected by default prior to user interaction, even though it is against her interest or may have unintended consequences [28]
False hierarchy | Visual or interactive prominence of one option over others, whereas available choices should be evenly leveled rather than hierarchical [28]
Hidden information | Disguising relevant information (options, actions) as irrelevant [28]
Auto-play | Automatically loading one video when the previous ends [3]
Bundled consent | Gathering consent for multiple settings through a single action (our own definition, but see [62])
Forced consent | Coercing users into accepting fixed legal terms in exchange for access to the service [11]
Table 1: Definitions of the dark patterns selected for this study.
4.4.3 Likelihood to fall for dark patterns (RQ3). Similar to the dark
pattern detection scores, we summed the participants' ratings of
their likelihood to be influenced by the proposed designs. We then
ran an OLS regression to estimate the strength of association of
participants' likelihood to be influenced with awareness (personal
perspective), dark pattern detection, acceptability, and demographic
data. Linear regression was again chosen because influence
outcomes for the totality of the dark pattern examples could not be
transformed into binary variables.
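To make the modelling concrete, here is a minimal sketch of the detection-score regression in Python with statsmodels (the authors worked in Stata, so this is our reconstruction; the DataFrame df and all column names are hypothetical). The HC1 covariance choice mirrors Stata's default "robust" option; the RQ3 model is analogous, with the summed influence-likelihood rating as outcome and awareness, detection, acceptability, and demographics as regressors:

    import statsmodels.formula.api as smf  # assumes a pandas DataFrame `df`

    # Hypothetical columns of df: detection (summed score, 0-9),
    # generation (reference = Baby Boomer+), education (reference =
    # Bachelor's degree / vocational training), use_freq, disposition.
    rq2_model = smf.ols(
        "detection ~ C(generation, Treatment(reference='BoomerPlus'))"
        " + C(education, Treatment(reference='Bachelor_vocational'))"
        " + use_freq + disposition",
        data=df,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    print(rq2_model.summary())  # coefficients with 95% confidence intervals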
5 RESULTS
The following section presents the results with regard to the three
research questions introduced in Sec. 3, starting with the findings
on people's awareness of the influence of manipulative designs
online, followed by people's capacity to detect dark patterns. As a
final step, both are examined as indicators of people's likelihood
to be influenced by manipulative designs. For each part, differences
associated with age, educational degree, use frequency of
online services, as well as disposition to online manipulation, are
addressed.
5.1 People's awareness of the influence of
online designs on their choices and
behaviour
The first research question asked whether people are aware of the
influence of designs on their choices and behaviours in online services.
Awareness of influence. Regarding the ratings of the three question
pairs in part one of the survey (Sec. 4.1.1), the results reported
in Table 2 show that the participants were aware that online designs
can influence their choices and behaviours (Me: mean = 1.05, SD
0.78, median = 1, mode = 1). The qualitative analysis of the answers
reveals that the participants strongly associated influential online
designs with well-known brands such as Amazon (mentioned by
103 participants), Netflix (40), Facebook (38), Instagram (22), eBay
(17), Twitter (15), YouTube (13). They acknowledged that online
designs may shape their spending behaviour (64 mentions), as well
as their content consumption (45) and service choice (23), mainly
through personalised contents and recommendations (78), as well
as special offers (41). Some participants evoked social influence (22)
as an effective strategy. Only a small number of participants cited
specific design elements like visual appeal (24) or layout (17) as
influencing factors. Some of them pointed to a website's or app's
ease of use as a factor influencing whether they use it or not (43).
Awareness of potential harm. The participants were uncertain
whether manipulative designs online can cause them harm (Me: mean =
0.00, SD 1.10, median = 0, mode = -1), as shown in Table 2. The most
frequently cited service category whose influence was considered
harmful is social media (mentioned by 24 participants). The most
prominent harm identified by participants was harm to themselves
(135), both of a psychological and physical nature. This was followed
by mentions of financial harm (89), such as debt and unreasonable
spending (51). Fewer participants evoked cybersecurity threats (31)
or harm to their privacy (16). Some mentioned the dangers related
to misleading information (28), and how these might influence
people's opinions, values and attitudes (19) and cause damage to
society (13).
Worries about manipulative designs. As shown by the results in
Table 2, the respondents were undecided and showed a tendency
not to worry about being manipulated by online designs (Me: -0.29,
SD 1.07, median -1, mode -1): "I am not so personally worried about
being manipulated, because I know myself well enough to question
things and not get manipulated." (P78). However, regardless of their
own age, they evoked apprehension for vulnerable people (mentioned
by 17 participants) and specifically for young people (14),
the elderly (12), and children (11). For these people, they worried
about the influence on spending behaviour (51), leading to financial
losses (89). They furthermore expressed worry about the presence
of false or misleading information (45), coupled with their
understanding that online services only serve pre-filtered information
(22), which influences people's opinions (41) and impedes informed
choices (20). Such ill-formed decisions might eventually cause harm
to society (29) as well as to people's physical and mental health (23).
There were also worries about cybersecurity threats (26). Finally,
respondents found it worrisome that it is challenging to discern
manipulative attempts (23), especially for vulnerable individuals.
Personal versus general perspective. Several comments referring
to concerns highlight that the participants were more worried for
other people than for themselves: "I consider myself very aware of these
sort of things but someone else who has not a lot of internet experience
or online shopping or believes whatever they see or are told will follow
everything." (P215). The results confirm this impression: the participants
rated awareness, harm, and worry significantly higher when
referring to people in general, as opposed to themselves (Table 2).
Awareness by individual characteristics. The results displayed
in Figure 3 show an inverse correlation between people's age and
their awareness of online designs' influence on themselves (r=-0.20,
n=406, p=0.00). As some participants pointed out: "Being elderly
I find it relatively easy to avoid being manipulated by these strategies.
Technology has a place in my life but not an important place. I
am not easily taken in." (P250). Participants with higher education
showed a slightly higher awareness of the influence of designs on
their choices and behaviours (r=0.11, n=406, p=0.03), awareness of
potential harm to themselves (r=0.12, n=406, p=0.01), as well as
worry about the potential influence on themselves (r=0.11, n=406,
p=0.02). Furthermore, the correlations indicate that those who use
online services more frequently considered it more likely that
manipulative designs influence their behaviour (r=0.25, n=406, p=0.00).
Individuals with a higher disposition to be influenced also showed
a higher awareness of their own likelihood of being influenced by
manipulative designs (r=0.29, n=406, p=0.00). At the same time,
they were also more worried about the influence on their choices
and behaviour (r=0.14, n=406, p=0.01).
Summary of RQ1. On average, respondents are aware of online
designs' influence on their behaviour, especially on the type of
online content they consume and the digital services they use.
However, they are unsure whether they can be harmed personally, even
though they can name specific examples (e.g., frustration, anxiety,
debt, loss of self-confidence). They are undecided on whether they
should worry and are more concerned about other people than
themselves.
5.2 People’s ability to detect dark patterns
Research question two sought to investigate whether individuals
are able to recognise dark patterns. The results show that, when
asked to look for elements that can influence users' choices and
behaviour, 59% of the participants correctly identified the dark
patterns in five or more of the nine interfaces. One fourth recognised
the dark patterns in seven, eight, or all nine interfaces (Figure 4).
As can be seen in Table 3, the interfaces including the dark pattern
types trick question, pre-selection, loss-gain framing, hidden
information and bundled+forced consent were only recognised by
half or fewer of the participants, while the majority of the participants
correctly identified the interfaces containing a high-demand /
limited-time message and confirmshaming.
Figure 3: Correlation matrix for participants' awareness ratings and
their individual characteristics; n=406, p < 0.05 with dark background.
Figure 4: Frequency of dark pattern detection.
Dark pattern detection and individual characteristics. Using OLS
regression analysis to model the inter-relationship between the
detection of dark patterns and associated factors (Figure 5), it emerges
that younger people could identify a higher number of dark patterns
than the older Baby Boomer+ generation10, net of education, use
frequency, and disposition: Millennials/Gen Y: coef. .60 (95%CI: 0.04
to 1.16); Zoomers/Gen Z: coef. 1.09 (95%CI: 0.35 to 1.83). Generation
X is neither better nor worse than the older Baby Boomer+ generation,
as indicated by the regression results: coef. 0.28 (95%CI: -0.25 to
0.81). Regarding education, participants with a high school degree
or lower detected fewer dark patterns (coef. -0.80 (95%CI: -1.26 to
-0.33)) compared to participants with a Bachelor's degree or
vocational training. However, participants with degrees higher than a
Bachelor's were neither better nor worse at identifying manipulative
design strategies (coef. 0.32 (95%CI: -0.24 to 0.88)). This suggests
that the Bachelor/vocational training level is a threshold below
which recognition rates are lower. The regression analysis also
indicates a slight positive correlation between online use frequency
and the number of dark patterns detected (coef. 0.04 (95%CI: 0.00 to
0.09)), but no significant correlation between disposition to manipulation
and dark pattern detection (coef. 0.03 (95%CI: -0.01 to 0.09)).
10 Due to the low number of Silent Generation participants in the survey, the reference
category combines the Baby Boomer generation (people born between 1946-1964) and
the older "Silent Generation" (people born between 1928-1945).
          | 1. Aware of influence | 2. Aware of potential harm | 3. Worried about
Me        | 1.05 (SD 0.78)/1/1 | 0.00 (SD 1.10)/0/-1 | -0.29 (SD 1.07)/-1/-1
People    | 1.30 (SD 0.59)/1/1 | 0.66 (SD 0.97)/1/1 | 0.60 (SD 1.01)/1/1
Sign test | (n=115, x>=98, p=0.5)=0.0000 | (n=183, x>=175, p=0.5)=0.0000 | (n=233, x>=222, p=0.5)=0.0000
Table 2: Mean/median/mode scores of participants' rating of 1) their
awareness of the influence and 2) the potential harm caused by
manipulative designs online, and 3) their degree of worry, with regard
to themselves and people in general; values ranging from -2 = strongly
disagree to 2 = strongly agree; the last row shows the p values for the
two-sided sign tests between Me and People ratings.
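As a plausibility check (our sketch, not part of the paper): reading the first column of the last row as n=115 participants with a non-zero Me/People delta, of whom x>=98 rated "People" higher, the two-sided sign-test p-value follows from an exact binomial test:

    from scipy.stats import binomtest

    # Two-sided sign test for the "aware of influence" pair in Table 2:
    # 98 of 115 non-tied participants rated "People" above "Me".
    res = binomtest(k=98, n=115, p=0.5, alternative="two-sided")
    print(f"p = {res.pvalue:.4f}")  # p = 0.0000, matching the table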
Dark pattern name | Detected (no / partly / yes) | Influential | Acceptable
a) Trick question / Pre-selection | 51% / 9% / 40% | -0.14 (SD 1.22)/0/1 | -0.73 (SD 1.12)/-1/-1
b) Loss-gain framing / Confirmshaming | 27% / 19% / 53% | -0.03 (SD 1.26)/0/1 | -0.79 (SD 1.06)/-1/-1
c) Pre-selection / Loss-gain framing | 51% / 11% / 38% | 0.00 (SD 1.20)/0/1 | -0.50 (SD 1.08)/-1/-1
d) Hidden information / Trick question | 64% / 20% / 16% | 0.36 (SD 1.21)/1/1 | -1.17 (SD 1.06)/-2/-2
e) Bundled / Forced consent | 50% / 36% / 14% | 0.36 (SD 1.15)/1/1 | -0.64 (SD 1.06)/-1/-1
f) High-demand / Limited-time message | 5% / 11% / 84% | 0.39 (SD 1.25)/1/1 | -0.39 (SD 1.11)/0/-1
g) Confirmshaming | 27% / 2% / 71% | -0.85 (SD 1.15)/-1/-2 | -0.22 (SD 1.10)/0/1
h) Hidden information / False hierarchy | 42% / 4% / 54% | -0.49 (SD 1.28)/-1/-1 | -1.00 (SD 1.09)/-1/-2
i) Auto-play | 40% / 11% / 49% | 0.72 (SD 1.05)/1/1 | 0.74 (SD 0.87)/1/1
Table 3: Dark pattern detection percentages for the 9 dark pattern
interfaces in the survey and mean/median/mode scores of participants'
evaluation of their likeliness to be influenced by the dark pattern and
the acceptability of the strategy (from -2 = strongly disagree to 2 =
strongly agree).
Figure 5: Plot from linear regression with robust standard
errors for dark pattern detection; n=406, Adj R² = 0.097,
BIC = 1754.781.
Dark pattern detection and awareness. The correlation matrix
(Figure 3) shows a positive correlation between the recognition of dark
patterns and awareness of manipulative designs' influence (r=0.13,
n=406, p=0.01). However, many participants were surprised that
they were unable to recognise certain manipulative designs:
"I like to think I am pretty 'switched on' when it comes to avoiding
being manipulated online and this highlighted to me how much I sign
away at the click of the button!" (P150)
"There were a few elements that I missed, but were obvious when
pointed out. It shows how easy it is to be manipulated, even when one
thinks they are aware." (P358)
Summary of RQ2. We conclude that people are able to recognise
dark patterns, but there is variation across dark pattern types.
Younger age (<40 y.) as well as education levels above a high school
degree are positively correlated with this ability.
5.3 People's likelihood to be influenced by
dark patterns
The third research question sought to investigate whether higher
awareness, as well as a higher capability to detect manipulative designs,
makes people less likely to be influenced. We ran an OLS regression
analysis to model the inter-relationship between the participants'
self-reported influence-likelihood rating and the associated factors
(Figure 6).
Likelihood of being influenced and the capacity of dark pattern
detection. The data shows a slight inverse correlation between
participants' dark pattern detection capability and their influence-likelihood
rating (coef. -0.58 (95%CI: -0.90 to -0.26)). This indicates
that people who recognise manipulative designs more easily consider
themselves slightly less likely to be influenced by them.
Likelihood of being influenced and dark pattern acceptability. In
all tested designs, participants were on average uncertain whether
the dark patterns would influence their behaviour and whether they
found them acceptable (Table 3). Those who considered these designs
more acceptable also reported being slightly more influenced in
their behaviour (coef. 0.16 (95%CI: 0.04 to 0.28)). Interestingly, a
design's admissibility was not necessarily related to its influence
strength. For example, about half of the participants recognised
both (h) hidden information / false hierarchy and (i) auto-play. After
receiving an explanation about what could be considered manipulative
in both interfaces, the participants tended to find (i) auto-play
more influential than (h) hidden information / false hierarchy. However,
they deemed (i) auto-play more acceptable than (h) hidden
information / false hierarchy.
Figure 6: Plot from linear regression with robust standard
errors for the likelihood of being influenced by dark patterns;
n=406, Adj R² = 0.3039, BIC = 2723.14; AW = awareness, WOR =
worry.
Likelihood of being influenced and awareness. Participant comments
at the closure of the survey reflect that awareness is not a
significant predictor of participants' likelihood to be influenced
by manipulative designs.
"I think I'm aware of most manipulative practices but there are
certain applications like video streaming and booking accommodation
where I am definitely manipulated, even when I am aware of it. It's
ridiculous!" (P139)
"I feel I am quite aware of some of the subtleties of advertising and
suggestion but there were elements I hadn't even considered may be
unconsciously influencing my choices." (P222)
Indeed, only respondents who strongly believed that online designs
can influence them also deemed it likely that they would be influenced
by the designs in the tested interfaces (coef. 2.52 (95%CI: 0.24 to 4.81)).
Conversely, people who strongly disagreed with being worried
about the influence of online designs on themselves also considered it
unlikely that they would fall for the designs in the tested interfaces (coef.
-3.16 (95%CI: -5.77 to -0.54)).
Likelihood of being influenced and individual characteristics. The
regression analysis shows no significant correlation between age,
education, or online use frequency and the self-reported likelihood
to be influenced by manipulative designs. However, there is
a positive correlation between participants' disposition to online
manipulation and their ratings concerning the influence likelihood
(coef. 0.51 (95%CI: 0.36 to 0.66)), which hints at a certain degree of
inability to resist manipulative designs.
Summary of RQ3. We conclude that people who recognise manipulative
designs with more ease report, on average, a lower likelihood
of being influenced by them. However, whether people are very
aware of online manipulative attempts or not makes, on average,
no difference in terms of their likelihood to be influenced by such
designs.
6 DISCUSSION
We discuss the results in light of the interventions that could be
put in place to counteract dark patterns. Interventions can aim to
(i.e., intervention scope): a) raise awareness of the existence and
the risks of dark patterns, b) facilitate their detection, c) bolster
resistance towards them, or d) eliminate them from online services.
Interventions can act on the user or on the environment (i.e.,
intervention measures): educational interventions favour users' agency,
regulatory interventions tend to protect the user, and technical and
design interventions are situated in between. This distinction can
serve to identify the actors (e.g., design practitioners, researchers,
educators, regulators) that should implement the interventions and
devise appropriate evaluation indicators. The resulting matrix is
shown in Figure 7.
6.1 Raising awareness
The results indicate that people are generally cognizant that digital
services can exert a detrimental influence on their users, but fail to
understand how manipulative designs can concretely harm them.
Individuals' lack of sufficient concern does not impact their ability
to spot dark patterns. However, it may impact their motivation
to counter them, such as taking a few extra steps to select the less
privacy-invasive option in consent dialogues. Moreover, people are
more worried about the danger represented by dark patterns for
other people than for themselves, thus confirming previous
assumptions [76]. Warnings [29] are a design intervention that can make
threats salient and concrete (e.g., about financial losses following
mindless purchasing decisions) and counterbalance the tendency
to underestimate online threats due to hyperbolic discounting and
optimism bias. However, warnings rapidly become ineffective as
users get habituated (i.e., warning fatigue [2]) and need to mutate
continuously to continue capturing users' attention [8].
Figure 7: Intervention spaces for counteracting dark patterns.
6.2 Facilitating detection
The study results on dark pattern detection show significant variations
across the proposed designs. A majority of users recognised
confirmshaming and the high-demand / limited-time message (similar
to [64]), whilst dark patterns based on deception strategies (e.g.,
trick question, loss-gain framing and hidden information), together
with the pre-selection nudge and forced consent, were scarcely
recognised. Although such findings only concern a specific
implementation of the dark pattern and cannot be generalised to the
category, they may suggest that certain dark patterns are intrinsically
more difficult to spot. For instance, the omission of information is a
shrewd deceptive strategy. It requires users to have a correct mental
model of expectations, coupled with high cognitive activation [6],
to notice the absence of certain elements. In our facial recognition
example, which was based on loss-gain framing, many respondents
were simply unaware of the imbalance in the presentation of the
arguments ("I don't think this one is manipulative, it's just explaining
the benefits of using face recognition, and the possible drawbacks
of not using it" (P169)) and of the entailed risks ("Turning on Facial
Recognition, is it good or bad??" (P141)). This probably explains
why this dark pattern was rarely identified. When respondents
mentioned possible risks, their answers revealed wrong mental
models about the drawbacks of facial recognition, like the installation
of malware, targeted advertising, or unlawful surveillance. In such
cases, educational measures such as training on cause-and-effect
data privacy scenarios can act successfully to sharpen manipulation
detection abilities. As for the poor detection
of pre-selection and forced consent, a plausible explanation is that
users have grown accustomed to such designs. However, a dedicated
study could demonstrate which design attributes [48] make
certain dark patterns harder to spot.
It would also be useful to investigate folk models about dark
patterns: mental models that are not necessarily accurate in the
real world and lead to erroneous decision-making [74].
Complementarily, it should be further researched which attributes trigger
users' scepticism towards interfaces and activate a more elaborate
mode of thought (i.e., counterfactual thinking [6]) that disposes
them to recognise potential manipulation attempts. For instance,
respondents in M. Bhoot et al. [44] indicated sudden interruptions
and excessive ads as elements activating scepticism. However, such
research also found that users hardly notice a manipulation attempt
when the interface is appealing. This result is in line with previous
work demonstrating that people base their online trust judgements
on cues, such as visual ones [65]. On this note, what has been
learnt in anti-phishing research about the cues that evoke distrust
in professional-looking e-mails can be of use to determine how
to activate the "critical persuasion insight" [6, p. 114]. Many comments
of our respondents hinted that activities like "spot the dark
pattern" can serve as an eye-opener: "This has been a great survey
and it has certainly made me more aware of certain things that I have
not noticed in the past. I will be keeping an eye out for such things
going forward" (P214). Similar gamified experiences11 integrated
into major digital services (e.g., a Facebook game) could strengthen
the motivation to learn how to spot dark patterns in real settings,
without the cognitive cost of transferring skills learnt in training
to the context of digital services.
Concerning technical interventions, algorithms and applications
that automatically identify, flag, and even classify potentially illegal
practices at large scale should be developed on the model of [42, 47,
55], to expedite watchdogs' supervising tasks and provide proof to
consumer advocates. Such tools need a large pool of reliable data
to carry out the recognition and categorisation of manipulative
attempts that are challenging even for humans. To this end, we are
currently assembling a corpus of dark pattern interfaces published
on Reddit12 and Twitter13 by social media users.
6.3 Bolstering resistance
Even though there are no significant correlations for milder ratings,
those respondents who declared being very likely to be influenced
also showed a higher awareness of this possibility and greater
concern. This hints that awareness of one's own vulnerability does
not automatically trigger better self-defence against manipulative
influences. Design interventions can enhance users' appraisal of
the effort it takes to cope with certain dark patterns – for example,
indicating the time it takes to unsubscribe when it is an overly complex
procedure (see e.g., Amazon Prime [24]). Reframing the costs
of falling prey to dark patterns in personally relevant terms, as proposed
by Moser et al. [53] to counter impulse buying, may also be
considered – for instance, by converting the time spent on infinite
scrolling into other pleasurable activities. User research can determine
what is valuable for users, as emerges from our respondents
(e.g., P278: "It [is] such a waste of precious time, that can be used in
reading, personal time, ecercise [sic] some more benefitial [sic] for the
individual".) Friction designs can disrupt automatic behaviour with
positive effects by introducing small obstacles that create a brief
moment of reflection and stimulate more mindful interactions [19].
Already in use to induce more secure online behaviour [21] and
proposed to counter irrational spending behaviour [53], friction
designs are now widespread on streaming services (e.g., YouTube,
Netflix) to counter binge-watching. Similar nudges could oppose
infinite scrolling, defaults, and mindless consent to data sharing
and extensive online tracking.
11 See e.g. https://cookieconsentspeed.run/, a game where users need to navigate
ambiguous options and distrust obvious buttons in cookie dialogues.
12 https://www.reddit.com/r/assholedesign/ and https://www.reddit.com/r/darkpatterns/
13 Tweets containing hashtags like #darkpattern
The study participants who recognised more dark patterns also
reported a lower likelihood of being influenced by them. This suggests
that the ability to recognise a threat is intimately related to
the ability to protect oneself [61]. The disclaimer that manipulative
elements may be contained in the interfaces had the effect of
activating participants' counterfactual thinking and encouraged a
more reflective way of processing information. An educational
intervention like long-term boosts can build manipulation-protection
abilities and empower people to apply them without having to resort
to deliberate thinking in the long run [32], like the procedural
rules proposed in Graßl et al. [29]: "every time I encounter a cookie
consent dialogue, I search for the 'refuse all' button".
However, the cost of resisting dark patterns varies depending
on whether they employ coercive, nudging, or deceptive strategies,
and on their specific implementations. Coercive dark patterns (e.g.,
forced consent) are inescapable: if someone desires to use a service
that integrates a coercive design, they do not have the possibility
of avoiding it (i.e., 'take-it-or-leave-it'). Dark patterns preventing
individuals from accomplishing a task are similarly daunting, since
opposing them would come at a high cognitive cost. For instance,
only a (motivated) minority of website visitors is willing to take
additional steps to adjust their preferences in cookie dialogues [72].
Nudging strategies may be implemented variously and thereby
exert more or less influence on users (e.g., mild vs aggressive dark
patterns [43]). Deceptive strategies, on the other hand, can be resisted
only through the activation of counterfactual thinking.
6.4 Eliminating dark patterns from digital
services
Dark patterns are present in more than 10% of global shopping
websites [47], in almost 90% of cookie consent dialogues of the
top 10,000 websites in the UK [55] and more than 95% of the 200
most popular apps [20]. To respond to such pervasiveness, technical
solutions that ease autonomous decision making can be devised,
like Do Not Track,14 the add-on extension Consent-O-Matic15 or
browser plug-ins that disable other plug-ins, e.g., those that create
scarcity messages.
Digital nudges that counteract dark patterns (i.e., bright patterns)
by, for example, making the privacy-savvy option more salient,
modify the environment where users make choices. There is a rich
literature concerning design nudges that enhance privacy decision
making [1], including personalised nudges adapted to individual
decision-making styles [59]. Graßl et al. [29] found that bright patterns
nourish users' perception of lack of control, though, as they
act on unreflective behaviour in the same way as dark patterns.
14 https://allaboutdnt.com/
15 https://addons.mozilla.org/en-US/firefox/addon/consent-o-matic/
Coercive and deceptive dark patterns (e.g., forced consent, trick
questions), though, cannot be defeated through digital nudges. A
complementary design intervention consists of promoting good
practices through the publication of design guidelines [54, 57] and
the involvement of companies in problem-solving activities on concrete
case studies [15]. Ethical design tool-kits16 can be employed
to foresee the consequences of certain designs comprehensively.
At the same time, persuasive technology heuristics (e.g., [37]) may
be adapted to assess the potential manipulative effects of digital
products even before their release. Building on such initiatives and
[48, 51], we plan to develop a standardised transparency impact
assessment process for interface design.
16 E.g., https://ethicalos.org.
However, given the omnipresence of dark patterns on online
services, it is somewhat unrealistic to expect businesses to implement
such interventions on their own: economic incentives and
regulatory interventions should complement the other proposed
actions. Legal safeguards should apply more stringently, as many
dark patterns are unlawful in the EU under consumer law (e.g.,
omission of information, obstruction of subscription cancellation)
[24, 40, 54] and the data protection regime (e.g., forced consent,
loss-gain framing) [22, 60]. Stiff penalties can furthermore act as a
deterrent: in France, for instance, Google has received fines totalling
€150 million due to invalid consent design and elicitation
[16, 17]. Empirical research demonstrating the presence, diffusion
and effects of manipulative designs might have an impact on legal
enforcement: cookie consent dialogues increasingly offer privacy-by-default
options as a result of case law (e.g., the landmark case
Planet49 [56]) and, conceivably, of intense academic scrutiny. The
threat of more stringent regulations (e.g., the US Social Media Addiction
Reduction Technology Act) and public pressure (e.g., derived
from the popularity of documentaries like "The Social Dilemma")
may even encourage self-regulation.
6.5 Targeting interventions - older vs younger
generations
Our results highlight that older generations are not only less able to
recognise manipulative attempts, but are also less aware that
their choices and behaviour can be influenced. This could be
problematic, as perceived vulnerability to harm is a key factor in
triggering self-protection [61]. The combination of lack of awareness and
lack of capability makes dark patterns' effects particularly dangerous
for older adults, as they struggle to adapt their learned self-protection
abilities to evolving (digital) environments [6], echoing findings
about online misinformation [7]. That said, it is arguably easier to
define ad hoc protections addressed to younger populations (e.g.,
the ICO's "Age Appropriate Design" code of practice [57])
than to older ones: how would targeted safeguards for over-40s
be enacted and received? Moreover, our findings do not suggest
any significant correlation of age with the likelihood to be influenced,
although it would be worthwhile to expand research in this direction.
The study results show that an age lower than 40 years and
an education level higher than a high school diploma constitute a
critical threshold for recognising dark patterns, and could indicate
that the rest of the population is considerably less likely to be
aware of manipulative attempts online.
7 LIMITATIONS
The choice of a large representative sample of the UK population for this survey was made with the objective of generalising the findings to the whole population. However, given that Prolific is an online platform, participants are probably more accustomed to online designs than the average (e.g., the distribution of the participants' online use frequency shows a positive skew). Therefore, our results might overestimate what a less tech-savvy UK population is aware of when it comes to manipulative designs online. It would also be interesting to find out whether the study would reach different conclusions in other countries. It should further be mentioned that the participants' likelihood to be influenced by manipulative designs was derived from a self-reported measure and does not necessarily reflect actual behaviour, which makes the measure an approximation and invites further research.
Following the ratings of awareness, harm, and worry in part one of the survey, the participants were invited to cite examples. Whilst we explicitly asked about the influence of manipulative designs, people also cited manipulative content (e.g., fake news), signalling that it is not obvious for people to distinguish form from content. We thus assume that the participants' awareness of and concern about the influence caused by manipulative designs may be lower than indicated by the results. Concerning the dark pattern detection activity, we are aware that explicitly searching for manipulative design elements does not entirely correspond to a real use situation. We sought to counterbalance this effect through the time limit and the allusion that some interfaces would not contain manipulative elements. That said, we also estimate that such settings could correspond to a real-world scenario where people notice something odd and activate counterfactual thinking [6].
8 CONCLUSIONS
Manipulative designs are a growing threat in the online environment. Practitioners and researchers from multiple domains (HCI, computer science, law, etc.) currently seek to expose and counteract their influence on user behaviour. Yet, to shield users effectively, it is essential to understand their capabilities when confronted with manipulative designs. This study shows that individuals are aware of manipulative designs' potential influence on their behaviour and rather capable of recognising such designs. While we found an inverse correlation between dark pattern recognition and participants' likelihood to be influenced, the level of awareness did not play a significant role in predicting their ability to resist manipulative designs. This finding implies that raising awareness of the issue is not sufficient to shield users from the influence of dark patterns.
Our discussion presented a palette of interventions (i.e., design, technical, educational, and regulatory measures) meant to heighten people's awareness of manipulative design practices, ease the detection of such practices, strengthen people's resistance to them, or root them out. We believe that design measures (like frictions and bright patterns) and technical solutions (like automated dark pattern detection applications) should be further investigated, together with assessment tools, economic incentives, and regulatory solutions.
To complement scholars' and authorities' views on the issue, we suggest exploring established dark pattern attributes in combination with the user perspective as part of future work. Only by understanding which (combinations of) attributes users commonly perceive as unrecognisable, irresistible, and/or unacceptable can we devise appropriate interventions. Moreover, the exploration of user perceptions can help establish what end-users deem legitimate and what they do not, without taking a normative stance. Looking at dark patterns from the user perspective shows that they are a problem with many variables. As such, they require a variety of actors to team up and devise a kaleidoscope of interventions. Designers should be on the front line to help tame the monster they contributed to creating.
ACKNOWLEDGMENTS
This publication is a first step of the project Decepticon (grant no. IS/14717072) supported by the Luxembourg National Research Fund (FNR). We would like to thank the anonymous reviewers of DIS 2021 and CHI 2021 for their helpful comments and all those who have helped us refine the study design.
REFERENCES
[1] Alessandro Acquisti, Idris Adjerid, Rebecca Balebako, Laura Brandimarte, Lorrie Faith Cranor, Saranga Komanduri, Pedro Giovanni Leon, Norman Sadeh, Florian Schaub, Manya Sleeper, et al. 2017. Nudges for privacy and security: Understanding and assisting users' choices online. ACM Computing Surveys (CSUR) 50, 3 (2017), 1–41.
[2] Devdatta Akhawe and Adrienne Porter Felt. 2013. Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness. In 22nd USENIX Security Symposium (USENIX Security 13). USENIX Association, Washington, D.C., 257–272. https://www.usenix.org/conference/usenixsecurity13/technical-sessions/presentation/akhawe
[3] Adam Alter. 2017. Irresistible: The rise of addictive technology and the business of keeping us hooked. Penguin Publishing Group.
[4] Susanne Barth and Menno D.T. De Jong. 2017. The privacy paradox – Investigating discrepancies between expressed privacy concerns and actual online behavior – A systematic literature review. Telematics and Informatics 34, 7 (2017), 1038–1058.
[5] Christoph Bösch, Benjamin Erb, Frank Kargl, Henning Kopp, and Stefan Pfattheicher. 2016. Tales from the dark side: Privacy dark strategies and privacy dark patterns. Proceedings on Privacy Enhancing Technologies 2016, 4 (2016), 237–254.
[6] David M. Boush, Marian Friestad, and Peter Wright. 2009. Deception in the Marketplace: The Psychology of Deceptive Persuasion and Consumer Self-Protection (first ed.). Routledge.
[7] Nadia M. Brashier and Daniel L. Schacter. 2020. Aging in an Era of Fake News. Current Directions in Psychological Science 29, 3 (Jun 2020), 316–323. https://doi.org/10.1177/0963721420915872
[8] Cristian Bravo-Lillo, Lorrie Cranor, Saranga Komanduri, Stuart Schechter, and Manya Sleeper. 2014. Harder to ignore? Revisiting pop-up fatigue and approaches to prevent it. In 10th Symposium On Usable Privacy and Security (SOUPS 2014). 105–111.
[9] Harry Brignull. 2010. Dark Patterns. https://darkpatterns.org/
[10] John Brownlee. 2016. Why Dark Patterns Won't Go Away. https://www.fastcompany.com/3060553/why-dark-patterns-wont-go-away
[11] Régis Chatellier, Geoffrey Delcroix, Estelle Hary, and Camille Girard-Chanudet. 2019. Shaping choices in the digital world. From dark patterns to data protection: the influence of UX/UI design on user empowerment. Technical Report. CNIL.
[12] Shruthi Sai Chivukula and Colin M. Gray. 2020. Co-Evolving Towards Evil Design Outcomes: Mapping Problem and Solution Process Moves. In Proceedings of the Design Research Society Conference.
[13] Shruthi Sai Chivukula, Colin M. Gray, and Jason A. Brier. 2019. Analyzing value discovery in design decisions through ethicography. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
[14] Shruthi Sai Chivukula, Chris Rhys Watkins, Rhea Manocha, Jingle Chen, and Colin M. Gray. 2020. Dimensions of UX Practice that Shape Ethical Awareness. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[15] CNIL. [n.d.]. Données & Design. Co-building user journeys compliant with the GDPR and respectful of privacy. https://design.cnil.fr/
[16] CNIL. 2019. Deliberation of the Restricted Committee SAN-2019-001 of 21 January 2019 pronouncing a financial sanction against GOOGLE LLC. https://www.cnil.fr/sites/default/files/atoms/files/san-2019-001.pdf
[17] CNIL. 2020. Cookies: financial penalties of 60 million euros against the company GOOGLE LLC and of 40 million euros against the company GOOGLE IRELAND LIMITED. https://www.cnil.fr/en/cookies-financial-penalties-60-million-euros-against-company-google-llc-and-40-million-euros-google-ireland
[18] Gregory Conti and Edward Sobiesk. 2010. Malicious interface design: exploiting the user. In Proceedings of the 19th International Conference on World Wide Web (WWW '10). ACM Press, 271. https://doi.org/10.1145/1772690.1772719
[19] Anna L. Cox, Sandy J.J. Gould, Marta E. Cecchinato, Ioanna Iacovides, and Ian Renfree. 2016. Design Frictions for Mindful Interactions: The Case for Microboundaries. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). Association for Computing Machinery, 1389–1397. https://doi.org/10.1145/2851581.2892410
[20] Linda Di Geronimo, Larissa Braz, Enrico Fregnan, Fabio Palomba, and Alberto Bacchelli. 2020. UI dark patterns and where to find them: a study on mobile applications and user perception. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[21] Verena Distler, Gabriele Lenzini, Carine Lallemand, and Vincent Koenig. 2020. The Framework of Security-Enhancing Friction: How UX Can Help Users Behave More Securely. In New Security Paradigms Workshop 2020. ACM, 45–58. https://doi.org/10.1145/3442167.3442173
[22] EDPB. 2020. Guidelines 05/2020 on consent under Regulation 2016/679.
[23] Trine Falbe, Kim Andersen, and Martin Michael Frederiksen. 2020. The Ethical Design Handbook. Smashing Media AG.
[24] Forbrukerrådet. 2021. You can log out, but you can never leave. How Amazon manipulates consumers to keep them subscribed to Amazon Prime. https://fil.forbrukerradet.no/wp-content/uploads/2021/01/2021-01-14-you-can-log-out-but-you-can-never-leave-final.pdf
[25] Colin M. Gray, Jingle Chen, Shruthi Sai Chivukula, and Liyang Qu. 2020. End User Accounts of Dark Patterns as Felt Manipulation. arXiv:2010.11046 [cs] (Oct 2020). http://arxiv.org/abs/2010.11046
[26] Colin M. Gray and Shruthi Sai Chivukula. 2019. Ethical mediation in UX practice. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–11.
[27] Colin M. Gray, Shruthi Sai Chivukula, and Ahreum Lee. 2020. What Kind of Work Do "Asshole Designers" Create? Describing Properties of Ethical Concern on Reddit. In Proceedings of the 2020 ACM on Designing Interactive Systems Conference. 61–73.
[28] Colin M. Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L. Toombs. 2018. The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
[29] Paul Graßl, Hanna Schraffenberger, Frederik Zuiderveen Borgesius, and Moniek Buijzen. 2021. Dark and Bright Patterns in Cookie Consent Requests. Journal of Digital Social Research 3, 1 (Feb 2021), 1–38. https://doi.org/10.33621/jdsr.v3i1.54
[30] Saul Greenberg, Sebastian Boring, Jo Vermeulen, and Jakub Dostal. 2014. Dark patterns in proxemic interactions: a critical perspective. In Proceedings of the 2014 Conference on Designing Interactive Systems. 523–532.
[31] Pelle Guldborg Hansen and Andreas Maaløe Jespersen. 2013. Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy. European Journal of Risk Regulation 4, 1 (2013), 3–28.
[32] Ralph Hertwig and Till Grüne-Yanoff. 2017. Nudging and Boosting: Steering or Empowering Good Decisions. Perspectives on Psychological Science 12, 6 (Nov 2017), 973–986. https://doi.org/10.1177/1745691617702496
[33] Soheil Human and Florian Cech. 2020. A Human-centric Perspective on Digital Consenting: The Case of GAFAM. In Human Centred Intelligent Systems. Springer, 139–159.
[34] Arushi Jaiswal. 2018. Dark patterns in UX: how designers should be responsible for their actions. https://uxdesign.cc/dark-patterns-in-ux-design-7009a83b233c
[35] Georgios Kampanos and Siamak F. Shahandashti. 2021. Accept All: The Landscape of Cookie Banners in Greece and the UK. arXiv:2104.05750 [cs] (Apr 2021). http://arxiv.org/abs/2104.05750
[36] Lexie Kane. 2019. The Attention Economy. https://www.nngroup.com/articles/attention-economy/
[37] Julie A. Kientz, Eun Kyoung Choe, Brennen Birch, Robert Maharaj, Amanda Fonville, Chelsey Glasson, and Jen Mundt. 2010. Heuristic evaluation of persuasive health technologies. In Proceedings of the ACM International Conference on Health Informatics (IHI '10). ACM Press, 555. https://doi.org/10.1145/1882992.1883084
[38] UXP2 Lab. 2018. The dark side of UX Design. https://darkpatterns.uxp2.com/
[39] C. Lacey and C. Caudwell. 2019. Cuteness as a 'Dark Pattern' in Home Robots. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 374–381. https://doi.org/10.1109/HRI.2019.8673274
[40] M. R. Leiser. 2020. "Dark Patterns": The Case for Regulatory Pluralism. SSRN Repository (Jun 2020). https://doi.org/10.2139/ssrn.3625637
[41] Tim-Benjamin Lembcke, Nils Engelbrecht, Alfred Benedikt Brendel, Bernd Herrenkind, and Lutz M. Kolbe. 2019. Towards a Unified Understanding of Digital Nudging by Addressing its Analog Roots. In PACIS.
[42] Marco Lippi, Przemysław Pałka, Giuseppe Contissa, Francesca Lagioia, Hans-Wolfgang Micklitz, Giovanni Sartor, and Paolo Torroni. 2019. CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service. Artificial Intelligence and Law 27, 2 (2019), 117–139.
[43] Jamie Luguri and Lior Strahilevitz. 2019. Shining a light on dark patterns. U of Chicago, Public Law Working Paper 719 (2019).
[44] Aditi M. Bhoot, Mayuri A. Shinde, and Wricha P. Mishra. 2020. Towards the Identification of Dark Patterns: An Analysis Based on End-User Reactions. In Proceedings of the 11th Indian Conference on Human-Computer Interaction (IndiaHCI '20). Association for Computing Machinery, 24–33. https://doi.org/10.1145/3429290.3429293
[45] Dominique Machuletz and Rainer Böhme. 2020. Multiple purposes, multiple problems: A user study of consent dialogs after GDPR. Proceedings on Privacy Enhancing Technologies 2020, 2 (2020), 481–498.
[46] Maximilian Maier and Rikard Harr. 2020. Dark Design Patterns: An End-user Perspective. Human Technology 16, 2 (2020), 170–199.
[47] Arunesh Mathur, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan. 2019. Dark patterns at scale: Findings from a crawl of 11K shopping websites. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–32.
[48] Arunesh Mathur, Jonathan Mayer, and Mihir Kshirsagar. 2021. What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods. arXiv:2101.04843 [cs] (Jan 2021). https://doi.org/10.1145/3411764.3445610
[49] Arunesh Mathur, Angelina Wang, Carsten Schwemmer, Maia Hamin, Brandon M. Stewart, and Arvind Narayanan. 2020. Manipulative tactics are the norm in political emails: Evidence from 100K emails from the 2020 U.S. election cycle. https://electionemails2020.org
[50] Célestin Matte, Nataliia Bielova, and Cristiana Santos. 2020. Do Cookie Banners Respect my Choice? Measuring Legal Compliance of Banners from IAB Europe's Transparency and Consent Framework. In 2020 IEEE Symposium on Security and Privacy (SP). IEEE, 791–809.
[51] Christian Meske and Ireti Amojo. 2020. Ethical Guidelines for the Construction of Digital Nudges. In 53rd Hawaii International Conference on System Sciences (HICSS). 3928–3937. http://arxiv.org/abs/2003.05249
[52] Michael Chromik, Malin Eiband, Sarah Theres Völkel, and Daniel Buschek. 2019. Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems. In 2nd Workshop on Explainable Smart Systems at the ACM Conference on Intelligent User Interfaces (IUI '19).
[53] Carol Moser, Sarita Y. Schoenebeck, and Paul Resnick. 2019. Impulse Buying: Design Practices and Consumer Needs. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM Press, 1–15. https://doi.org/10.1145/3290605.3300472
[54] Netherlands Authority for Consumers & Markets. 2020. ACM Guidelines on the Protection of the Online Consumer. Boundaries of Online Persuasion. https://www.acm.nl/sites/default/files/documents/2020-02/acm-guidelines-on-the-protection-of-the-online-consumer.pdf
[55] Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, and Lalana Kagal. 2020. Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[56] Court of Justice of the European Union. 2019. C-673/17 – Planet49. Judgment of the Court (Grand Chamber) of 1 October 2019. Bundesverband der Verbraucherzentralen und Verbraucherverbände – Verbraucherzentrale Bundesverband e.V. v Planet49 GmbH. http://curia.europa.eu/juris/liste.jsf?num=C-673/17
[57] Information Commissioner's Office. 2020. Age appropriate design: a code of practice for online services. https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/age-appropriate-design-a-code-of-practice-for-online-services/
[58] Stigler Committee on Digital Platforms. 2019. Final Report. Stigler Center for the Study of the Economy and the State (Sep 2019). https://research.chicagobooth.edu/stigler/media/news/committee-on-digital-platforms-final-report
[59] Eyal Peer, Serge Egelman, Marian Harbach, Nathan Malkin, Arunesh Mathur, and Alisa Frik. 2020. Nudge Me Right: Personalizing Online Security Nudges to People's Decision-Making Styles. Computers in Human Behavior 109 (August 2020), 106347. https://doi.org/10.1016/j.chb.2020.106347
[60] Forbrukerrådet. 2018. Deceived by design. How tech companies use dark patterns to discourage us from exercising our rights to privacy. Technical Report.
[61] Ronald W. Rogers and Steven Prentice-Dunn. 1997. Protection motivation theory (D. S. Gochman, Ed.). Plenum Press, 113–132.
[62] Cristiana Santos, Nataliia Bielova, and Célestin Matte. 2020. Are cookie banners indeed compliant with the law? Technology and Regulation 2020 (Dec 2020), 91–135. https://doi.org/10.26116/techreg.2020.009
[63] Tali Sharot. 2011. The optimism bias. Current Biology 21, 23 (2011), R941–R945.
[64] Simon Shaw. 2019. Consumers Are Becoming Wise to Your Nudge. https://behavioralscientist.org/consumers-are-becoming-wise-to-your-nudge/
[65] Sijun Wang, Sharon E. Beatty, and William Foxx. 2004. Signaling the Trustworthiness of Small Online Retailers. Journal of Interactive Marketing 18, 1 (2004), 53–69. https://doi.org/10.1002/dir.10071
[66] Natasha Singer. 2016. When Websites Won't Take No for an Answer. The New York Times (May 2016). https://www.nytimes.com/2016/05/15/technology/personaltech/when-websites-wont-take-no-for-an-answer.html
[67] Than Htut Soe, Oda Elise Nordberg, Frode Guribye, and Marija Slavkovik. 2020. Circumvention by design – dark patterns in cookie consents for online news outlets. In NordiCHI 2020.
[68] Daniel Susser, Beate Roessler, and Helen Nissenbaum. 2019. Technology, Autonomy, and Manipulation. https://papers.ssrn.com/abstract=3420747
[69] Daniel Susser, Beate Roessler, and Helen F. Nissenbaum. 2018. Online Manipulation: Hidden Influences in a Digital World. Georgetown Law Technology Review 1 (2018), 1–45.
[70] Unknown. [n.d.]. Confirmshaming. https://confirmshaming.tumblr.com/
[71] Unknown. 2019. Dark pattern games. https://www.darkpattern.games/
[72] Christine Utz, Martin Degeling, Sascha Fahl, Florian Schaub, and Thorsten Holz. 2019. (Un)informed Consent: Studying GDPR Consent Notices in the Field. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 973–990.
[73] Ari Ezra Waldman. 2020. Cognitive biases, dark patterns, and the 'privacy paradox'. Current Opinion in Psychology 31 (2020), 105–109.
[74] Rick Wash. 2010. Folk Models of Home Computer Security. In Proceedings of the Sixth Symposium on Usable Privacy and Security (SOUPS '10). Association for Computing Machinery, New York, NY, USA, Article 11, 16 pages. https://doi.org/10.1145/1837110.1837125
[75] Christopher Rhys Watkins, Colin M. Gray, Austin L. Toombs, and Paul Parsons. 2020. Tensions in Enacting a Design Philosophy in UX Practice. In Proceedings of the 2020 ACM on Designing Interactive Systems Conference. 2107–2118.
[76] Ryan West. 2008. The psychology of security. Commun. ACM 51, 4 (2008), 34–40.
[77] José P. Zagal, Staffan Björk, and Chris Lewis. 2013. Dark Patterns in the Design of Games. In Proceedings of the 8th International Conference on the Foundations of Digital Games (FDG 2013). 39–46.