SECURITY & PRIVACY ECONOMICS
Editors: Michael Lesk, lesk@acm.org | Jeffrey MacKie-Mason, jmm@umich.edu

Scaring and Bullying People into Security Won't Work
Angela Sasse | University College London
Usable security and privacy research began more than 15 years ago. In 1999, Alma Whitten and J.D. Tygar explained "Why Johnny Can't Encrypt,"1 and Anne Adams and I pleaded that, even though they don't always comply with security policies, "Users Are Not the Enemy."2 Today, there are several specialist conferences and workshops, and publications on usable security and privacy are featured in top usability conferences, such as the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), and top security conferences, such as the IEEE Symposium on Security and Privacy.
An ongoing topic in usable security research is security warnings. Security experts despair that the vast majority of users ignore warnings—they just "swat" them, as they do with most dialog boxes. Over the past six years, continuous efforts have focused on changing this behavior and getting users to pay more attention. SSL certificate warnings are a key example: all browser providers have evolved their warnings in an attempt to get users to take them more seriously. For instance, Mozilla Firefox increased the number of dialog boxes and clicks users must wade through to proceed with the connection, even though it might not be secure. However, this has made little difference to the many users who decide to ignore the warnings and proceed. But creating more elaborate warnings to guide users toward secure behavior is not necessarily the best course of action, as it doesn't align with the principles of user-centered design.
Refining Warnings
At ACM CHI 2015, two studies reported on efforts to make more users heed warnings. Adrienne Porter Felt and her colleagues at Google designed a new SSL warning for Google Chrome, applying recommendations from current usable security research: keep warnings brief, use simple language to describe the specific risk, and illustrate the potential consequences of proceeding.3 The authors hypothesized that if users understand the risks associated with a warning, they will heed rather than ignore it.

They tested these improved warnings in a series of mini surveys and found a modest but significant (12 percent) improvement in the number of participants who correctly identified the potential risks of proceeding, but no significant improvement in the number of participants who correctly identified the data at risk. In addition, compared to existing browser SSL warnings, there was no improvement in the number of participants who thought the warning was likely to be a false positive.
Felt and her colleagues reasoned that if they couldn't improve users' understanding, they might still be able to guide users toward secure choices. They applied what they called opinionated design to make it harder for participants to circumvent warnings, and visual design techniques to make the secure course of action look more attractive. In a field study, this technique led to a 30 percent increase in the number of participants who didn't proceed upon seeing the warning. The authors concluded that it's difficult to improve user comprehension of online risks with simple, brief, nontechnical, and specific warnings, yet they urge fellow researchers to keep trying to develop such warnings. In the meantime, they advise designers to use opinionated design to deter users from proceeding in the face of warnings by making them harder to circumvent and emphasizing the risks associated with doing so.
In the second paper, Bonnie Anderson and her colleagues examined 25 participants' brain responses to warnings using a functional magnetic resonance imaging (fMRI) scanner.4 Previous studies using eye tracking showed that users habituate: the first time around, a warning catches their attention, but after repeated showings, it does not. Anderson and her colleagues found that the brain mirrors this habituation: when encountering a warning for the first time, participants' visual processing center in the superior parietal lobes showed elevated activation levels, but these disappeared with repeated showings of the warning.
The authors hypothesized that varying a warning's appearance, such as its size, color, and text ordering, should prevent habituation and keep participants paying attention. They found that participants indeed showed sustained activation levels when encountering these polymorphic warnings; participants' attention only decreased, on average, after the 13th variation of the same warning. They concluded that users can't help but habituate, and designers should combat this by creating warnings that force users to pay attention.
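To make the idea concrete, the sketch below shows how a polymorphic warning could be generated by randomly varying its size, color, and text ordering while keeping the content constant. It is illustrative only; the attribute names and values are my assumptions, not details taken from Anderson and her colleagues' study.

```python
import random

# Presentation attributes to vary between showings (illustrative values only).
SIZES = ["small", "medium", "large"]
COLORS = ["#d93025", "#f29900", "#1a73e8"]
MESSAGE_LINES = [
    "This site's security certificate could not be verified.",
    "Attackers may be able to see or change the data you send.",
    "Going back to safety is the recommended choice.",
]

def polymorphic_warning(rng=random):
    """Return one randomly varied rendering of the same warning content."""
    lines = list(MESSAGE_LINES)
    rng.shuffle(lines)                       # vary text ordering
    return {
        "size": rng.choice(SIZES),           # vary dialog size
        "accent_color": rng.choice(COLORS),  # vary color
        "body": "\n".join(lines),            # same information, reordered
    }

if __name__ == "__main__":
    # Each showing of the warning uses a freshly varied appearance,
    # which is what delays habituation in the study's design.
    for _ in range(3):
        print(polymorphic_warning())
```

Whether forcing attention in this way is desirable at all is, of course, exactly the question raised next.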
Usability: When Does "Guiding" Become "Bullying"?
Both teams' work was motivated by an honorable intention—to help users choose the secure option. But as a security researcher with a usability background and many years of studying user behavior in the lab as well as in real-world settings, I am concerned by the suggestion that we should use design techniques to force users to keep paying attention and push them toward what we deem the secure—and hence better—option. It is a paternalistic, technology-centered perspective that assumes the security experts' solution is the correct way to manage a specific threat.
In the case of SSL, the authors recommended counteracting people's habituation response and keeping their attention focused on security. However, habituation is an evolved response that increases human efficiency in day-to-day interactions with the environment: we stop paying attention to signals we've deemed irrelevant. Crying wolf too often leads to alarm or alert fatigue; this has been demonstrated over many decades in industries such as construction and mining and, most recently, with the rapid increase of monitoring equipment in hospitals.
In 2013, the US Joint Commission issued an alert about the widespread phenomenon of alarm fatigue.5 The main problem was desensitization to alarms, which led to staff missing critical events. An increase in workload and a decrease in patient satisfaction were also noted.
Eminent software engineer and usability expert Alan Cooper identified the use of warnings in software as a problem more than a decade ago.6 He pointed out that warnings should be reserved for genuine exceptions—events software developers couldn't reasonably anticipate and make provisions for. Perhaps on their legal advisors' suggestion, most developers have ignored Cooper's recommendation, and the increasing need for security has led to a marked further increase in the number of dialog boxes or warnings that users have to "swat" today.
Strategies such as opinionated design and forcibly attracting users' attention do not align with usability. As Cooper pointed out, usability's overall guiding principle is to support users in reaching their primary goals as efficiently as possible. Security that routinely diverts the attention and disrupts the activities of users in pursuit of these goals is thus the antithesis of a user-centered approach.
And where, in practical terms, would this approach lead us? A colleague with whom I discussed the studies commented: "Even with this polymorphic approach, users stop paying attention after 13 warning messages. I suppose the next step is to administer significant electrical shocks to users as they receive the warning messages, so that they are literally jolted into paying attention." (The colleague kindly allowed me to use the quote, but wishes to remain anonymous.) Scaring, tricking, and bullying users into secure behaviors is not usable security.
Cost versus Benefit
In 2009, Turing Award and von Neumann Medal winner Butler Lampson pointed out that7

[t]hings are so bad for usable security that we need to give up on perfection and focus on essentials. The root cause of the problem is economics: we don't know the costs either of getting security or of not having it, so users quite rationally don't care much about it. … To fix this we need to measure the cost of security, and especially the time users spend on it.
Lampson's observations haven't been heeded. User time and effort are rarely at the forefront of usable security studies; the focus is on whether users choose the behavior that researchers claim to be desirable because it's more secure. Even if users' interaction time with specific security mechanisms, such as a longer password, is measured, the cumulative longer-term effect of draining time from individual and organizational productivity isn't considered.
Over the past few years, researchers have declared the task of recalling and entering 15- or 20-character complex passwords "usable" because participants in Mechanical Turk studies were able to do so. But being able to do something a couple of times in the artificial constraints of such studies doesn't mean the vast majority of users could—or would want to—do so regularly in pursuit of their everyday goals.
Factors such as fatigue as well as habituation affect performance. In real-world environments, authentication fatigue isn't hard to detect: users reorganize their primary tasks to minimize exposure to secondary security tasks, stop using devices and services with onerous security, and don't pursue innovative ideas because they can't face any more "battles with security" that they anticipate on the path to realizing those ideas.8 It's been disheartening to see that, in many organizations, users who circumvent security measures to remain productive are still seen as the root of the problem—"the enemy"2—and that the answer is to educate or threaten them into the behavior security experts demand, rather than considering the possibility that security needs to be redesigned.
A good example is the currently popular notion that sending phishing messages to a company's employees, and directing them to pages about the dangers of clicking links, is a good way to get their attention and make them less likely to click in the future. Telling employees not to click on links can work in businesses in which there's no need to click embedded links. But if legitimate business tasks contain embedded links, employees can't stop to examine and ponder every link they encounter without compromising productivity.
In addition, being tricked by a company's own security staff is a negative, adversarial experience that undermines the trust relationship between the organization and employees. Security experts who aim to make security work by "fixing" human shortcomings are ignoring key lessons from human factors and economics.
In modern, busy work environments, users will continue to circumvent security tasks that have a high workload and disrupt primary activities, because they substantially decrease productivity. No amount of security education—a further distraction from primary tasks—will change that. Rather, any security measure should pass a cost–benefit test: Is it easy and quick to do, and does it offer a good level of protection?
Cormac Herley calculated that the economic cost of the time users spend on standard security measures such as passwords, antiphishing tools, and certificate warnings is billions of dollars in the US alone—and this when the security benefits of complying with the security advice are dubious.9 SSL warnings have had an overwhelming false-positive rate—close to 100 percent for many years9—so users developed alarm fatigue and learned to ignore them. In addition, longer (12- to 15-character) passwords are associated with a very real cost in recall and entry time and increased failure rates—especially on the now widely used touchscreens—yet offer no improvement in security.10
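The order of magnitude is easy to check with a back-of-the-envelope calculation in the spirit of Herley's analysis; the figures below are assumptions chosen for illustration, not numbers taken from his paper.

```python
# Back-of-the-envelope estimate of the aggregate cost of user time spent on
# security chores. All inputs are assumed values, for illustration only.
online_users = 180e6        # assumed number of US online users
seconds_per_day = 15        # assumed daily time on warnings, passwords, etc.
value_per_hour = 20.0       # assumed value of an hour of user time, in dollars

hours_per_year = online_users * seconds_per_day * 365 / 3600
annual_cost = hours_per_year * value_per_hour
print(f"~${annual_cost / 1e9:.1f} billion per year")  # ~$5.5 billion per year
```

Even with modest assumptions, a few seconds of user attention per day aggregates to billions of dollars a year, which is why the benefit side of the ledger has to be real.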
Fitting the Task to the Human
The security-centered view assumes that users want to avoid risk and harm altogether. However, many users choose to accept some risks in pursuit of goals that are important to them. Security experts assume that users who don't choose the secure option are making a mistake, and thus that preventing mistakes and educating users are the way forward. A combination of usability and economics insights, however, leads to a different way of thinking about usable security:
■ Usable security starts by recognizing users' security goals, rather than by imposing security experts' views on users.
■ Usable security acknowledges that users are focused on their primary goals—for example, banking, shopping, or social networking. Rather than disrupting these primary tasks and creating a huge workload for users, security tasks should cause minimum friction.
■ Security experts must acknowledge and support human capabilities and limitations. Rather than trying to "fix the human," experts should design technology and security mechanisms that don't burden and disrupt users.
Techniques from the human factors field can maximize performance while ensuring safety and security. A key principle is designing technology that fits users' physical and mental abilities—fitting the task to the human. Rarely should we fit the human to the task, because this requires significant organizational investment in terms of behavior change through education and training. Security education and training are only worthwhile if the behavior fits with primary tasks.
An organization could train its employees to become memory artists, enabling them to juggle a large number of changing PINs and passwords. But then employees would need time for routines and exercises that reinforce memory and recall. Changing security policies and implementing mechanisms that enable employees to cope without training are more efficient. For instance, Michelle Steves and Mary Theofanos recommend a shift from explicit to implicit authentication8; in most environments, there are other ways to recognize legitimate users, including device and location information or behavioral biometrics, without disrupting users' workflow. They also point out that infrequent authentication requires different mechanisms that complement the workings of human memory—something Adams and I recommended after our first study 15 years ago2—but this rarely occurs in practice.
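As a rough illustration of what implicit authentication can look like, the sketch below combines device, location, and behavioral signals into a risk score and only falls back to an explicit login when the score is high. The signal names, weights, and threshold are my assumptions for illustration, not Steves and Theofanos's design.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    known_device: bool          # device seen before for this account
    usual_location: bool        # request comes from a familiar network/region
    typing_rhythm_match: float  # 0..1 similarity to a stored behavioral profile

def risk_score(s: SessionSignals) -> float:
    """Combine implicit signals into a 0..1 risk estimate (weights are assumed)."""
    score = 0.0
    if not s.known_device:
        score += 0.4
    if not s.usual_location:
        score += 0.3
    score += 0.3 * (1.0 - s.typing_rhythm_match)
    return score

def needs_explicit_login(s: SessionSignals, threshold: float = 0.5) -> bool:
    """Only interrupt the user with a password/2FA prompt when risk is high."""
    return risk_score(s) >= threshold

# A familiar device in a familiar place passes silently; an unusual one does not.
print(needs_explicit_login(SessionSignals(True, True, 0.9)))    # False
print(needs_explicit_login(SessionSignals(False, False, 0.2)))  # True
```

The point is not the particular weights but the division of labor: the system absorbs the routine recognition work, and the user is interrupted only when the implicit signals genuinely look wrong.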
Users will pay attention to reliable and credible indicators of risks they want to avoid. Security mechanisms with a high false-positive rate undermine the credibility of security and train users to ignore them. We need more accurate detection and better security tools if we are to regain users' attention and respect, rather than scare, trick, and bully them into complying with security measures that obstruct human endeavor.
References
1. A. Whitten and J.D. Tygar, "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0," Proc. 8th Conf. USENIX Security Symp., vol. 9, 1999, p. 14.
2. A. Adams and M.A. Sasse, "Users Are Not the Enemy," Comm. ACM, vol. 42, no. 12, 1999, pp. 40–46.
3. A. Porter Felt et al., "Improving SSL Warnings: Comprehension and Adherence," Proc. Conf. Human Factors in Computing Systems, 2015; https://adrifelt.github.io/sslinterstitial-chi.pdf.
4. B.B. Anderson et al., "How Polymorphic Warnings Reduce Habituation in the Brain—Insights from an fMRI Study," Proc. Conf. Human Factors in Computing Systems, 2015; http://neurosecurity.byu.edu/media/Anderson_et_al._CHI_2015.pdf.
5. "Medical Device Alarm Safety in Hospitals," Sentinel Event Alert, no. 50, 8 Apr. 2013; www.pwrnewmedia.com/2013/joint_commission/medical_alarm_safety/downloads/SEA_50_alarms.pdf.
6. A. Cooper, The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, Sams–Pearson, 2004.
7. B. Lampson, "Usable Security: How to Get It," Comm. ACM, vol. 52, no. 11, 2009, pp. 25–27.
8. M.P. Steves and M.F. Theofanos, Report: Authentication Diary Study, tech. report NISTIR 7983, Nat'l Inst. Standards and Technology, 2014.
9. C. Herley, "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users," Proc. 2009 Workshop on New Security Paradigms, 2009, pp. 133–144.
10. D. Florencio, C. Herley, and P.C. van Oorschot, "An Administrator's Guide to Internet Password Research," Proc. USENIX Conf. Large Installation System Administration, 2014, pp. 35–52.
Angela Sasse is a professor of human-centered technology at University College London. Contact her at a.sasse@cs.ucl.ac.uk.