
Scaring and Bullying People into Security Won't Work

Authors: Angela Sasse, University College London

Abstract

Users will pay attention to reliable and credible indicators of a risk they want to avoid. More accurate detection and better security tools are necessary to regain users' attention and respect.
SECURITY & PRIVACY ECONOMICS
Editors: Michael Lesk, lesk@acm.org | Jeffrey MacKie-Mason, jmm@umich.edu
Usable security and privacy research began more than 15 years ago. In 1999, Alma Whitten and J.D. Tygar explained "Why Johnny Can't Encrypt,"1 and Anne Adams and I pleaded that, even though they don't always comply with security policies, "Users Are Not the Enemy."2 Today, there are several specialist conferences and workshops, and publications on usable security and privacy are featured in top usability conferences, such as the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), and top security conferences, such as the IEEE Symposium on Security and Privacy.
An ongoing topic in usable security research is security warnings. Security experts despair that the vast majority of users ignore warnings—they just "swat" them, as they do with most dialog boxes. Over the past six years, continuous efforts have focused on changing this behavior and getting users to pay more attention. SSL certificate warnings are a key example: all browser providers have evolved their warnings in an attempt to get users to take them more seriously. For instance, Mozilla Firefox increased the number of dialog boxes and clicks users must wade through to proceed with the connection, even though it might not be secure. However, this has made little difference to the many users who decide to ignore the warnings and proceed. But creating more elaborate warnings to guide users toward secure behavior is not necessarily the best course of action, as it doesn't align with the principles of user-centered design.
Refining Warnings

At ACM CHI 2015, two studies reported on efforts to make more users heed warnings. Adrienne Porter Felt and her colleagues at Google designed a new SSL warning for Google Chrome, applying recommendations from current usable security research: keep warnings brief, use simple language to describe the specific risk, and illustrate the potential consequences of proceeding.3 The authors hypothesized that if users understand the risks associated with a warning, they will heed rather than ignore it.

They tested these improved warnings in a series of mini surveys and found a modest but significant (12 percent) improvement in the number of participants who correctly identified the potential risks of proceeding, but no significant improvement in the number of participants who correctly identified the data at risk. In addition, compared to existing browser SSL warnings, there was no improvement in the number of participants who thought the warning was likely to be a false positive.
Felt and her colleagues reasoned that if they couldn't improve users' understanding, they might still be able to guide users toward secure choices. They applied what they called opinionated design to make it harder for participants to circumvent warnings, and visual design techniques to make the secure course of action look more attractive. In a field study, this technique led to a 30 percent increase in the number of participants who didn't proceed upon seeing the warning. The authors concluded that it's difficult to improve user comprehension of online risks with simple, brief, nontechnical, and specific warnings, yet they urge fellow researchers to keep trying to develop such warnings. In the meantime, they advise designers to use opinionated design to deter users from proceeding in the face of warnings by making them harder to circumvent and emphasizing the risks associated with doing so.
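Opinionated design is essentially choice architecture: the warning makes the safe action the visually dominant default and demotes the unsafe action behind an extra disclosure step. The following is a minimal sketch of that structure, not the actual Chrome implementation; the class names, labels, and click counts are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WarningAction:
    label: str
    clicks_required: int  # interactions needed before the action takes effect
    prominent: bool       # rendered as the large, colored default button?

@dataclass
class Interstitial:
    title: str
    risk_summary: str
    actions: List[WarningAction] = field(default_factory=list)

    def render(self) -> str:
        # The safe choice is the visually dominant default; the unsafe choice
        # only appears after the user expands an "Advanced" disclosure.
        lines = [self.title, self.risk_summary, ""]
        for action in sorted(self.actions, key=lambda a: not a.prominent):
            marker = "[PRIMARY]" if action.prominent else "(hidden behind 'Advanced')"
            lines.append(f"{marker} {action.label} - {action.clicks_required} click(s)")
        return "\n".join(lines)

# Hypothetical SSL interstitial in the spirit of opinionated design.
ssl_warning = Interstitial(
    title="Your connection is not private",
    risk_summary="An attacker might be trying to intercept the data you send to this site.",
    actions=[
        WarningAction("Back to safety", clicks_required=1, prominent=True),
        WarningAction("Proceed anyway (not recommended)", clicks_required=3, prominent=False),
    ],
)

print(ssl_warning.render())
```

The point is not the rendering itself but the asymmetry: returning to safety takes a single click and is always the visually dominant choice, while proceeding takes several deliberate steps.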
In the second paper, Bonnie Anderson and her colleagues examined 25 participants' brain responses to warnings using a functional magnetic resonance imaging (fMRI) scanner.4 Previous studies using eye tracking showed that users habituate: the first time around, a warning catches their attention, but after repeated showings, it does not. Anderson and her colleagues found that the brain mirrors this habituation: when encountering a warning for the first time, participants' visual processing center in the superior parietal lobes showed elevated activation levels, but these disappeared with repeated showings of the warning.

The authors hypothesized that varying a warning's appearance, such as its size, color, and text ordering, should prevent habituation and keep participants paying attention. They found that participants indeed showed sustained activation levels when encountering these polymorphic warnings; participants' attention only decreased on average after the 13th variation of the same warning. They concluded that users can't help but habituate, and designers should combat this by creating warnings that force users to pay attention.
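The polymorphic idea is simply that successive presentations of the same warning vary along surface dimensions such as size, color, and the ordering of the text elements, while the content stays constant. A minimal sketch of such a variant generator might look like the following; the dimensions and values are chosen purely for illustration and are not those used in the study.

```python
import random

# Surface dimensions that can vary without changing the warning's content.
SIZES = ["small", "medium", "large"]
COLORS = ["red", "orange", "yellow"]
TEXT_BLOCKS = ["risk description", "consequences of proceeding", "recommended action"]

def polymorphic_variants(seed: int = 0):
    """Yield an endless stream of visually distinct renderings of one warning."""
    rng = random.Random(seed)
    while True:
        order = TEXT_BLOCKS[:]
        rng.shuffle(order)  # vary the ordering of the text elements
        yield {
            "size": rng.choice(SIZES),    # vary the dialog size
            "color": rng.choice(COLORS),  # vary the accent color
            "layout": order,
        }

# Show the first few variants a user would encounter.
variants = polymorphic_variants()
for i in range(5):
    print(i + 1, next(variants))
```

Even with this kind of surface variation, the study found that attention decayed on average after the 13th variant of the same warning.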
Usability: When Does "Guiding" Become "Bullying"?

Both teams' work was motivated by an honorable intention—to help users choose the secure option. But as a security researcher with a usability background and many years of studying user behavior in the lab as well as in real-world settings, I am concerned by the suggestion that we should use design techniques to force users to keep paying attention and push them toward what we deem the secure—and hence better—option. It is a paternalistic, technology-centered perspective that assumes the security experts' solution is the correct way to manage a specific threat.
In the case of SSL, the authors recommended counteracting people's habituation response and keeping their attention focused on security. However, habituation is an evolved response that increases human efficiency in day-to-day interactions with the environment: we stop paying attention to signals we've deemed irrelevant. Crying wolf too often leads to alarm or alert fatigue; this has been demonstrated over many decades in industries such as construction and mining and, most recently, with the rapid increase of monitoring equipment in hospitals.

In 2013, the US Joint Commission issued an alert about the widespread phenomenon of alarm fatigue.5 The main problem was desensitization to alarms, which led to staff missing critical events. An increase in workload and decrease in patient satisfaction were also noted.
Eminent software engineer and usability expert Alan Cooper identified the use of warnings in software as a problem more than a decade ago.6 He pointed out that warnings should be reserved for genuine exceptions—events software developers couldn't reasonably anticipate and make provisions for. Perhaps on their legal advisors' suggestion, most developers have ignored Cooper's recommendation, and the increasing need for security has led to a marked further increase in the number of dialog boxes or warnings that users have to "swat" today.
Strategies such as opinionated design and forcibly attracting users' attention do not align with usability. As Cooper pointed out, usability's overall guiding principle is to support users in reaching their primary goals as efficiently as possible. Security that routinely diverts the attention and disrupts the activities of users in pursuit of these goals is thus the antithesis of a user-centered approach.

And where, in practical terms, would this approach lead us? A colleague with whom I discussed the studies commented: "Even with this polymorphic approach, users stop paying attention after 13 warning messages. I suppose the next step is to administer significant electrical shocks to users as they receive the warning messages, so that they are literally jolted into paying attention." (The colleague kindly allowed me to use the quote, but wishes to remain anonymous.) Scaring, tricking, and bullying users into secure behaviors is not usable security.
Cost versus Benefit

In 2009, Turing Award and von Neumann Medal winner Butler Lampson pointed out that7

[t]hings are so bad for usable security that we need to give up on perfection and focus on essentials. The root cause of the problem is economics: we don't know the costs either of getting security or of not having it, so users quite rationally don't care much about it. … To fix this we need to measure the cost of security, and especially the time users spend on it.
Lampson's observations haven't been heeded. User time and effort are rarely at the forefront of usable security studies; the focus is on whether users choose the behavior that researchers claim to be desirable because it's more secure. Even if users' interaction time with specific security mechanisms, such as a longer password, is measured, the cumulative longer-term effect of draining time from individual and organizational productivity isn't considered.

Over the past few years, researchers have declared the task of recalling and entering 15- or 20-character complex passwords "usable" because participants in Mechanical Turk studies were able to do so. But being able to do something a couple of times in the artificial constraints of such studies doesn't mean the vast majority of users could—or would want to—do so regularly in pursuit of their everyday goals.
Factors such as fatigue as well as habituation affect performance. In real-world environments, authentication fatigue isn't hard to detect: users reorganize their primary tasks to minimize exposure to secondary security tasks, stop using devices and services with onerous security, and don't pursue innovative ideas because they can't face any more "battles with security" that they anticipate on the path to realizing those ideas.8 It's been disheartening to see that, in many organizations, users who circumvent security measures to remain productive are still seen as the root of the problem—"the enemy"2—and that the answer is to educate or threaten them into the behavior security experts demand, rather than considering the possibility that security needs to be redesigned.
A good example is the currently popular notion that sending phishing messages to a company's employees, and directing them to pages about the dangers of clicking links, is a good way to get their attention and make them less likely to click in the future. Telling employees not to click on links can work in businesses in which there's no need to click embedded links. But if legitimate business tasks contain embedded links, employees can't examine and ponder every link they encounter without compromising productivity.

In addition, being tricked by a company's own security staff is a negative, adversarial experience that undermines the trust relationship between the organization and employees. Security experts who aim to make security work by "fixing" human shortcomings are ignoring key lessons from human factors and economics.
In modern, busy work environments, users will continue to circumvent security tasks that have a high workload and disrupt primary activities because they substantially decrease productivity. No amount of security education—a further distraction from primary tasks—will change that. Rather, any security measure should pass a cost–benefit test: Is it easy and quick to do, and does it offer a good level of protection?
Cormac Herley calculated that the economic cost of the time users spend on standard security measures such as passwords, antiphishing tools, and certificate warnings is billions of dollars in the US alone—and this when the security benefits of complying with the security advice are dubious.9 SSL warnings have an overwhelming false-positive rate—close to 100 percent for many years9—so users developed alarm fatigue and learned to ignore them. In addition, longer (12- to 15-character) passwords, which are associated with a very real cost in recall and entry time and increased failure rates—especially on the now widely used touchscreens—offer no improvement in security.10
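To make that cost–benefit test concrete, here is a back-of-the-envelope calculation in the spirit of Herley's analysis. Every input below (user population, interruptions per day, seconds per event, hourly value of time, false-positive rate) is an illustrative assumption, not a figure from the cited papers.

```python
# Back-of-the-envelope cost-benefit sketch in the spirit of Herley's analysis.
# Every input below is an illustrative assumption, not a figure from the cited papers.

users                 = 200_000_000  # online adults in one country (assumed)
interruptions_per_day = 5            # passwords, warnings, and prompts per user (assumed)
seconds_per_event     = 10           # time each interruption costs (assumed)
hourly_value          = 25.0         # value of an hour of user time, in dollars (assumed)
false_positive_rate   = 0.99         # share of warnings with no real attack behind them (assumed)

hours_per_year = users * interruptions_per_day * 365 * seconds_per_event / 3600
cost_of_attention = hours_per_year * hourly_value

print(f"User time spent on security tasks : {hours_per_year:,.0f} hours/year")
print(f"Economic value of that time       : ${cost_of_attention:,.0f}/year")
print(f"Of which spent on false alarms    : ${cost_of_attention * false_positive_rate:,.0f}/year")

# For a measure to pass the cost-benefit test, the harm it actually prevents
# must exceed this attention cost, and that benefit is bounded by the annual
# losses of the small fraction of users who would otherwise become victims.
```

With these assumed inputs the attention cost alone runs to tens of billions of dollars per year, which is why a high false-positive rate makes the trade so hard to justify.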
Fitting the Task to the Human

The security-centered view assumes that users want to avoid risk and harm altogether. However, many users choose to accept some risks in pursuit of goals that are important to them. Security experts assume that users who don't choose the secure option are making a mistake, and thus preventing mistakes and educating users are the way forward.

However, a combination of usability and economics insights leads to a different way of thinking about usable security:

- Usable security starts by recognizing users' security goals, rather than by imposing security experts' views on users.
- Usable security acknowledges that users are focused on their primary goals—for example, banking, shopping, or social networking. Rather than disrupting these primary tasks and creating a huge workload for users, security tasks should cause minimum friction.
- Security experts must acknowledge and support human capabilities and limitations. Rather than trying to "fix the human," experts should design technology and security mechanisms that don't burden and disrupt users.

Techniques from the human factors field can maximize performance while ensuring safety and security. A key principle is designing technology that fits users' physical and mental abilities—fitting the task to the human. Rarely should we fit the human to the task, because this requires significant organizational investment in terms of behavior change through education and training. Security education and training are only worthwhile if the behavior fits with primary tasks.
An organization could train its employees to become memory artists, enabling them to juggle a large number of changing PINs and passwords. But then employees would need time for routines and exercises that reinforce memory and recall. Changing security policies and implementing mechanisms that enable employees to cope without training are more efficient. For instance, Michelle Steves and Mary Theofanos recommend a shift from explicit to implicit authentication8; in most environments, there are other ways to recognize legitimate users, including device and location information or behavioral biometrics, without disrupting users' workflow. They also point out that infrequent authentication requires different mechanisms that complement the workings of human memory—something Adams and I recommended after our first study 15 years ago2—but this rarely occurs in practice.
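One common way to realize implicit authentication is risk-based scoring: passive signals such as a recognized device, a plausible location, and familiar behavioral patterns are combined into a confidence score, and an explicit challenge is triggered only when that score falls below a threshold. The sketch below illustrates the idea; the specific signals, weights, and threshold are assumptions for illustration and are not drawn from the NIST report.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    known_device: bool        # device previously enrolled by this user
    usual_location: bool      # location matches the user's normal pattern
    typing_similarity: float  # 0.0-1.0 score from a behavioral-biometric model
    usual_hours: bool         # activity falls within the user's normal work hours

def risk_score(s: SessionSignals) -> float:
    """Combine passive signals into a confidence score between 0 and 1."""
    score = 0.0
    score += 0.4 if s.known_device else 0.0
    score += 0.2 if s.usual_location else 0.0
    score += 0.3 * s.typing_similarity
    score += 0.1 if s.usual_hours else 0.0
    return score

def authenticate(s: SessionSignals, threshold: float = 0.7) -> str:
    """Let the user work uninterrupted unless confidence drops below the threshold."""
    if risk_score(s) >= threshold:
        return "continue silently (implicit authentication)"
    return "escalate to an explicit challenge (password, second factor)"

print(authenticate(SessionSignals(True, True, 0.9, True)))     # typical day at the office
print(authenticate(SessionSignals(False, False, 0.3, False)))  # unfamiliar device and place
```

In practice the signals would come from device enrollment, geolocation, and a trained behavioral model rather than hard-coded weights, but the structure is the same: passive confidence by default, with an explicit challenge only as the fallback.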
Users will pay attention to reliable and credible indicators of risks they want to avoid. Security mechanisms with a high false-positive rate undermine the credibility of security and train users to ignore them. We need more accurate detection and better security tools if we are to regain users' attention and respect, rather than scare, trick, and bully them into complying with security measures that obstruct human endeavor.
References
1. A. Whitten and J.D. Tygar, "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0," Proc. 8th USENIX Security Symp., vol. 9, 1999, p. 14.
2. A. Adams and M.A. Sasse, "Users Are Not the Enemy," Comm. ACM, vol. 42, no. 12, 1999, pp. 40–46.
3. A. Porter Felt et al., "Improving SSL Warnings: Comprehension and Adherence," Proc. Conf. Human Factors in Computing Systems (CHI), 2015; https://adrifelt.github.io/sslinterstitial-chi.pdf.
4. B.B. Anderson et al., "How Polymorphic Warnings Reduce Habituation in the Brain—Insights from an fMRI Study," Proc. Conf. Human Factors in Computing Systems (CHI), 2015; http://neurosecurity.byu.edu/media/Anderson_et_al._CHI_2015.pdf.
5. "Medical Device Alarm Safety in Hospitals," Sentinel Event Alert, no. 50, 8 Apr. 2013; www.pwrnewmedia.com/2013/joint_commission/medical_alarm_safety/downloads/SEA_50_alarms.pdf.
6. A. Cooper, The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, Sams–Pearson, 2004.
7. B. Lampson, "Usable Security: How to Get It," Comm. ACM, vol. 52, no. 11, 2009, pp. 25–27.
8. M.P. Steves and M.F. Theofanos, Report: Authentication Diary Study, tech. report NISTIR 7983, Nat'l Inst. Standards and Technology, 2014.
9. C. Herley, "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users," Proc. 2009 Workshop on New Security Paradigms, 2009, pp. 133–144.
10. D. Florencio, C. Herley, and P.C. van Oorschot, "An Administrator's Guide to Internet Password Research," Proc. USENIX Conf. Large Installation System Administration, 2014, pp. 35–52.
Angela Sasse is a professor of human-centered technology at University College London. Contact her at a.sasse@cs.ucl.ac.uk.