
Scaring and Bullying People into Security Won't Work

Authors: Angela Sasse, University College London

Abstract

Users will pay attention to reliable and credible indicators of a risk they want to avoid. More accurate detection and better security tools are necessary to regain users' attention and respect.
Usable security and privacy research began more than 15 years ago. In 1999, Alma Whitten and J.D. Tygar explained "Why Johnny Can't Encrypt,"1 and Anne Adams and I pleaded that, even though they don't always comply with security policies, "Users Are Not the Enemy."2 Today, there are several specialist conferences and workshops; publications on usable security and privacy are featured in top usability conferences, such as the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), and top security conferences, such as the IEEE Symposium on Security and Privacy.
An ongoing topic in usable security research is security warnings. Security experts despair that the vast majority of users ignore warnings—they just "swat" them, as they do with most dialog boxes. Over the past six years, continuous efforts have focused on changing this behavior and getting users to pay more attention. SSL certificate warnings are a key example: all browser providers have evolved their warnings in an attempt to get users to take them more seriously. For instance, Mozilla Firefox increased the number of dialog boxes and clicks users must wade through to proceed with the connection, even though it might not be secure. However, this has made little difference to the many users who decide to ignore the warnings and proceed. But creating more elaborate warnings to guide users toward secure behavior is not necessarily the best course of action, as it doesn't align with the principles of user-centered design.
Refining Warnings
At ACM CHI 2015, two studies reported on efforts to make more users heed warnings. Adrienne Porter Felt and her colleagues at Google designed a new SSL warning for Google Chrome, applying recommendations from current usable security research: keep warnings brief, use simple language to describe the specific risk, and illustrate the potential consequences of proceeding.3 The authors hypothesized that if users understand the risks associated with a warning, they will heed rather than ignore it.

They tested these improved warnings in a series of mini surveys and found a modest but significant (12 percent) improvement in the number of participants who correctly identified the potential risks of proceeding, but no significant improvement in the number of participants who correctly identified the data at risk. In addition, compared to existing browser SSL warnings, there was no improvement in the number of participants who thought the warning was likely to be a false positive.
Felt and her colleagues reasoned that if they couldn't improve users' understanding, they might still be able to guide users toward secure choices. They applied what they called opinionated design to make it harder for participants to circumvent warnings, and visual design techniques to make the secure course of action look more attractive. In a field study, this technique led to a 30 percent increase in the number of participants who didn't proceed upon seeing the warning. The authors concluded that it's difficult to improve user comprehension of online risks with simple, brief, nontechnical, and specific warnings, yet they urge fellow researchers to keep trying to develop such warnings. In the meantime, they advise designers to use opinionated design to deter users from proceeding in the face of warnings by making them harder to circumvent and emphasizing the risks associated with doing so.
In the second paper, Bonnie Anderson and her colleagues examined 25 participants' brain responses to warnings using a functional magnetic resonance imaging (fMRI) scanner.4 Previous studies using eye tracking showed that users habituate: the first time around, a warning catches their attention, but after repeated showings, it does not. Anderson and her colleagues found that the brain mirrors this habituation: when encountering a warning for the first time, participants' visual processing center in the superior parietal lobes showed elevated activation levels, but these disappeared with repeated showings of the warning.
The authors hypothesized that varying a warning's appearance, such as its size, color, and text ordering, should prevent habituation and keep participants paying attention. They found that participants indeed showed sustained activation levels when encountering these polymorphic warnings; participants' attention only decreased on average after the 13th variation of the same warning. They concluded that users can't help but habituate, and designers should combat this by creating warnings that force users to pay attention.
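To make the polymorphic idea concrete, here is a minimal sketch (my illustration, not the authors' implementation) that randomizes a few presentation attributes (size, color, and the order of the message lines) each time the same underlying warning is shown, so that repeated exposures do not look identical. The message text, attribute values, and the WarningVariant structure are all hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class WarningVariant:
    """One rendering of the same underlying warning message."""
    font_size_pt: int
    background_color: str
    lines: list

# Hypothetical message content; the study varied appearance, not meaning.
MESSAGE_LINES = [
    "This connection is not private.",
    "Attackers may be able to see the data you send to this site.",
    "Go back to safety, or proceed only if you understand the risk.",
]

def polymorphic_variant(rng: random.Random) -> WarningVariant:
    """Return a variant that differs in size, color, and line ordering."""
    lines = MESSAGE_LINES[:]
    rng.shuffle(lines)  # vary text ordering
    return WarningVariant(
        font_size_pt=rng.choice([12, 14, 16, 18]),                          # vary size
        background_color=rng.choice(["#ffdddd", "#ffffdd", "#ddffdd", "#ddddff"]),  # vary color
        lines=lines,
    )

if __name__ == "__main__":
    rng = random.Random(42)
    for i in range(3):
        print(f"Showing #{i + 1}:", polymorphic_variant(rng))
```

Whether such variation actually sustains attention is what the fMRI measurements were designed to test; the sketch only shows the mechanism of varying appearance across exposures.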
Usability: When Does "Guiding" Become "Bullying"?
Both teams' work was motivated by an honorable intention—to help users choose the secure option. But as a security researcher with a usability background and many years of studying user behavior in the lab as well as in real-world settings, I am concerned by the suggestion that we should use design techniques to force users to keep paying attention and push them toward what we deem the secure—and hence better—option. It is a paternalistic, technology-centered perspective that assumes the security experts' solution is the correct way to manage a specific threat.
In the case of SSL, the authors recommended counteracting people's habituation response and keeping their attention focused on security. However, habituation is an evolved response that increases human efficiency in day-to-day interactions with the environment: we stop paying attention to signals we've deemed irrelevant. Crying wolf too often leads to alarm or alert fatigue; this has been demonstrated over many decades in industries such as construction and mining and, most recently, with the rapid increase of monitoring equipment in hospitals.
In 2013, the US Joint Commission issued an alert about the widespread phenomenon of alarm fatigue.5 The main problem was desensitization to alarms, which led to staff missing critical events. An increase in workload and decrease in patient satisfaction were also noted.
Eminent software engineer and usability expert Alan Cooper identified the use of warnings in software as a problem more than a decade ago.6 He pointed out that warnings should be reserved for genuine exceptions—events software developers couldn't reasonably anticipate and make provisions for. Perhaps on their legal advisors' suggestion, most developers have ignored Cooper's recommendation, and the increasing need for security has led to a marked further increase in the number of dialog boxes or warnings that users have to "swat" today.
Strategies such as opinionated design and forcibly attracting users' attention do not align with usability. As Cooper pointed out, usability's overall guiding principle is to support users in reaching their primary goals as efficiently as possible. Security that routinely diverts the attention and disrupts the activities of users in pursuit of these goals is thus the antithesis of a user-centered approach.
And where, in practical terms, would this approach lead us? A colleague with whom I discussed the studies commented: "Even with this polymorphic approach, users stop paying attention after 13 warning messages. I suppose the next step is to administer significant electrical shocks to users as they receive the warning messages, so that they are literally jolted into paying attention." (The colleague kindly allowed me to use the quote, but wishes to remain anonymous.) Scaring, tricking, and bullying users into secure behaviors is not usable security.
Cost versus Benefit
In 2009, Turing Award and von Neumann Medal winner Butler Lampson pointed out that7

[t]hings are so bad for usable security that we need to give up on perfection and focus on essentials. The root cause of the problem is economics: we don't know the costs either of getting security or of not having it, so users quite rationally don't care much about it. … To fix this we need to measure the cost of security, and especially the time users spend on it.
Lampson's observations haven't been heeded. User time and effort are rarely at the forefront of usable security studies; the focus is on whether users choose the behavior that researchers claim to be desirable because it's more secure. Even if users' interaction time with specific security mechanisms, such as a longer password, is measured, the cumulative longer-term effect of draining time from individual and organizational productivity isn't considered.
Over the past few years, researchers have declared the task of recalling and entering 15- or 20-character complex passwords "usable" because participants in Mechanical Turk studies were able to do so. But being able to do something a couple of times in the artificial constraints of such studies doesn't mean the vast majority of users could—or would want to—do so regularly in pursuit of their everyday goals.
Factors such as fatigue as well as habituation affect performance. In real-world environments, authentication fatigue isn't hard to detect: users reorganize their primary tasks to minimize exposure to secondary security tasks, stop using devices and services with onerous security, and don't pursue innovative ideas because they can't face any more "battles with security" that they anticipate on the path to realizing those ideas.8 It's been disheartening to see that, in many organizations, users who circumvent security measures to remain productive are still seen as the root of the problem—"the enemy"2—and that the answer is to educate or threaten them into the behavior security experts demand—rather than considering the possibility that security needs to be redesigned.
A good example is the currently popular notion that sending phishing messages to a company's employees, and directing them to pages about the dangers of clicking links, is a good way to get their attention and make them less likely to click in the future. Telling employees not to click on links can work in businesses in which there's no need to click embedded links. But if legitimate business tasks contain embedded links, employees can't stop to examine and ponder every link they encounter without compromising productivity.
In addition, being tricked by a company's own security staff is a negative, adversarial experience that undermines the trust relationship between the organization and employees. Security experts who aim to make security work by "fixing" human shortcomings are ignoring key lessons from human factors and economics.
In modern, busy work environments, users will continue to circumvent security tasks that have a high workload and disrupt primary activities because they substantially decrease productivity. No amount of security education—a further distraction from primary tasks—will change that. Rather, any security measure should pass a cost–benefit test: Is it easy and quick to do, and does it offer a good level of protection?
Cormac Herley calculated that the economic cost of the time users spend on standard security measures such as passwords, antiphishing tools, and certificate warnings is billions of dollars in the US alone—and this when the security benefits of complying with the security advice are dubious.9 SSL warnings have an overwhelming false-positive rate—close to 100 percent for many years9—so users developed alarm fatigue and learned to ignore them. In addition, longer (12- to 15-character) passwords, which are associated with a very real cost in recall and entry time and increased failure rates—especially on the now widely used touchscreens—offer no improvement in security.10
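Herley's point is, at heart, simple arithmetic: a small time cost per user, multiplied across an entire online population, can dwarf the losses the security measure prevents. The sketch below makes that shape visible; every number in it is an assumption of mine for illustration, not a figure from Herley's paper.

```python
# Back-of-the-envelope cost-benefit comparison (all numbers are illustrative assumptions).
users = 180e6                  # assumed number of online adults in one country
seconds_per_day = 60           # assumed time per user per day spent on a security chore
hourly_value = 15.0            # assumed value of an hour of user time, in dollars

annual_time_cost = users * (seconds_per_day / 3600) * hourly_value * 365
annual_prevented_losses = 0.1e9  # assumed direct losses the chore would prevent, in dollars

print(f"User time cost per year:   ${annual_time_cost / 1e9:.1f} billion")
print(f"Losses prevented per year: ${annual_prevented_losses / 1e9:.1f} billion")
print(f"Cost exceeds benefit by roughly {annual_time_cost / annual_prevented_losses:.0f}x")
```

With assumptions in this range, the value of the user time consumed exceeds the losses prevented by about two orders of magnitude, which is exactly the kind of mismatch the cost–benefit test above is meant to catch.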
Fitting the Task to the Human
The security-centered view assumes that users want to avoid risk and harm altogether. However, many users choose to accept some risks in pursuit of goals that are important to them. Security experts assume that users who don't choose the secure option are making a mistake, and thus preventing mistakes and educating users are the way forward.
However, a combination of usability and economics insights leads to a different way of thinking about usable security:

- Usable security starts by recognizing users' security goals, rather than by imposing security experts' views on users.
- Usable security acknowledges that users are focused on their primary goals—for example, banking, shopping, or social networking. Rather than disrupting these primary tasks and creating a huge workload for users, security tasks should cause minimum friction.
- Security experts must acknowledge and support human capabilities and limitations. Rather than trying to "fix the human," experts should design technology and security mechanisms that don't burden and disrupt users.
Techniques from the human factors field can maximize performance while ensuring safety and security. A key principle is designing technology that fits users' physical and mental abilities—fitting the task to the human. Rarely should we fit the human to the task, because this requires significant organizational investment in terms of behavior change through education and training. Security education and training are only worthwhile if the behavior fits with primary tasks. An organization could train its employees to become memory artists, enabling them to juggle a large number of changing PINs and passwords. But then employees would need time for routines and exercises that reinforce memory and recall.
Changing security policies and implementing mechanisms that enable employees to cope without training are more efficient. For instance, Michelle Steves and Mary Theofanos recommend a shift from explicit to implicit authentication8; in most environments, there are other ways to recognize legitimate users, including device and location information or behavioral biometrics, without disrupting users' workflow. They also point out that infrequent authentication requires different mechanisms that complement the workings of human memory—something Adams and I recommended after our first study 15 years ago2—but this rarely occurs in practice.
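As an illustration of what a shift toward implicit authentication might look like in practice, the sketch below combines a few contextual signals (a recognized device, a familiar location, and a behavioral similarity score) into a single confidence estimate and only falls back to an explicit credential prompt when that estimate is low. The signals, weights, and threshold are hypothetical; Steves and Theofanos describe the goal of reducing explicit authentication, not this particular scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    known_device: bool      # device previously enrolled by this user
    usual_location: bool    # coarse location matches the user's normal pattern
    behavior_score: float   # 0.0-1.0 similarity of typing/interaction to the user's profile

def implicit_confidence(ctx: SessionContext) -> float:
    """Combine contextual signals into one confidence score (weights are assumptions)."""
    score = 0.0
    score += 0.4 if ctx.known_device else 0.0
    score += 0.2 if ctx.usual_location else 0.0
    score += 0.4 * ctx.behavior_score
    return score

def needs_explicit_login(ctx: SessionContext, threshold: float = 0.7) -> bool:
    """Interrupt the user with a password or MFA prompt only when confidence is low."""
    return implicit_confidence(ctx) < threshold

if __name__ == "__main__":
    ctx = SessionContext(known_device=True, usual_location=True, behavior_score=0.8)
    # Confidence is 0.92, above the threshold, so the user's workflow is not disrupted.
    print("Prompt for credentials:", needs_explicit_login(ctx))
```

The design choice that matters here is not the specific weights but the default: the user is recognized silently in the common case, and explicit authentication becomes the exception rather than the routine.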
Users will pay attention to reliable and credible indicators of risks they want to avoid. Security mechanisms with a high false-positive rate undermine the credibility of security and train users to ignore them. We need more accurate detection and better security tools if we are to regain users' attention and respect, rather than scare, trick, and bully them into complying with security measures that obstruct human endeavor.
References
1. A. Whitten and J.D. Tygar, "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0," Proc. 8th USENIX Security Symp., vol. 9, 1999, p. 14.
2. A. Adams and M.A. Sasse, "Users Are Not the Enemy," Comm. ACM, vol. 42, no. 12, 1999, pp. 40–46.
3. A. Porter Felt et al., "Improving SSL Warnings: Comprehension and Adherence," Proc. Conf. Human Factors in Computing Systems (CHI), 2015; https://adrifelt.github.io/sslinterstitial-chi.pdf.
4. B.B. Anderson et al., "How Polymorphic Warnings Reduce Habituation in the Brain—Insights from an fMRI Study," Proc. Conf. Human Factors in Computing Systems (CHI), 2015; http://neurosecurity.byu.edu/media/Anderson_et_al._CHI_2015.pdf.
5. "Medical Device Alarm Safety in Hospitals," Sentinel Event Alert, no. 50, 8 Apr. 2013; www.pwrnewmedia.com/2013/joint_commission/medical_alarm_safety/downloads/SEA_50_alarms.pdf.
6. A. Cooper, The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, Sams–Pearson, 2004.
7. B. Lampson, "Usable Security: How to Get It," Comm. ACM, vol. 52, no. 11, 2009, pp. 25–27.
8. M.P. Steves and M.F. Theofanos, Report: Authentication Diary Study, tech. report NISTIR 7983, Nat'l Inst. Standards and Technology, 2014.
9. C. Herley, "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users," Proc. 2009 Workshop on New Security Paradigms, 2009, pp. 133–144.
10. D. Florencio, C. Herley, and P.C. van Oorschot, "An Administrator's Guide to Internet Password Research," Proc. USENIX Conf. Large Installation System Administration (LISA), 2014, pp. 35–52.
Angela Sasse is a professor of human-centered technology at University College London. Contact her at a.sasse@cs.ucl.ac.uk.