Ethics in Security Vulnerability Research

IEEE Security & Privacy 8(2):67–72, March/April 2010
DOI: 10.1109/MSP.2010.67
Abstract
Debate has arisen in the scholarly community, as well as among policymakers and business entities, regarding the role of vulnerability researchers and security practitioners as sentinels of information security adequacy. The exact definition of vulnerability research and who counts as a "vulnerability researcher" is a subject of debate in the academic and business communities. For purposes of this article, we presume that vulnerability researchers are driven by a desire to prevent information security harms and engage in responsible disclosure upon discovery of a security vulnerability. Yet provided that these researchers and practitioners do not themselves engage in conduct that causes harm, their conduct doesn't necessarily run afoul of ethical and legal considerations. We advocate crafting a code of conduct for vulnerability researchers and practitioners, including the implementation of procedural safeguards to ensure minimization of harm.
Basic Training
Editors: Richard Ford, rford@se.fit.edu
Deborah Frincke, deborah.frincke@pnl.gov
Andrea M. Matwyshyn, University of Pennsylvania
Ang Cui, Angelos D. Keromytis, and Salvatore J. Stolfo, Columbia University
Debate has arisen in the scholarly community, as well as among policymakers and business entities, regarding the role of vulnerability researchers and security practitioners as sentinels of information security adequacy. (The exact definition of vulnerability research and who counts as a "vulnerability researcher" is a subject of debate in the academic and business communities. For purposes of this article, we presume that vulnerability researchers are driven by a desire to prevent information security harms and engage in responsible disclosure upon discovery of a security vulnerability.) Yet provided that these researchers and practitioners do not themselves engage in conduct that causes harm, their conduct doesn't necessarily run afoul of ethical and legal considerations. We advocate crafting a code of conduct for vulnerability researchers and practitioners, including the implementation of procedural safeguards to ensure minimization of harm.
Why Vulnerability Research Matters
The computer and network technologies that we've come to depend on in every part of our life are imperfect. During the past decade, the practice of finding and exploiting such imperfections has matured into a highly lucrative industry. To combat this escalating threat, security researchers find themselves in a perpetual race to identify and eliminate vulnerabilities before attackers can exploit them and to educate and train practitioners to test for known vulnerabilities in deployed systems. While nefarious parties operate in secrecy without fear of law, ethics, or public scrutiny, legitimate security researchers operate in the open and are subject to these constraints. Hence, researchers are sometimes hesitant to explore important information security issues owing to concern about their ethical and legal implications.

Provided that vulnerability research is done ethically, researchers perform an important social function: they provide information that closes the information gap between the creators, operators, or exploiters of vulnerable products and the third parties who will likely be harmed because of them. A culture war is currently under way in the cybersecurity industry and research communities regarding the value of investment in vulnerability analysis of products and operations. On one hand, many data security champions argue that maintaining best practices in information security, which includes diligent analysis of products for vulnerabilities and flaws, is "the right thing to do" both for the system operator and society as a whole. Yet skeptics (including some security professionals) argue that short-term expenditures on such "nonessential" items as analysis should be curtailed, and that the results of any analyses should be kept secret. The return on investment in security isn't visible in the short term, and, therefore, detractors feel empowered to ignore the well-known long-term costs of vulnerability, which include negative effects on the value of intangible assets and goodwill. They argue that investment in security is squandering corporate assets that could be better utilized to generate strong short-run returns for shareholders.

Unfortunately, corporate information security skeptics currently have a firm hold inside many enterprises. In particular, empirical data indicates that companies aren't successfully anticipating and managing information risk. For example, in the 2008 PricewaterhouseCoopers annual information security survey of more than 7,000 respondents—comprising CEOs, CFOs, CIOs, CSOs, vice presidents, and directors of IT and information security from 119 countries—at least three of 10 respondents couldn't answer basic questions about their organizations' information security practices. Thirty-five percent didn't know how many security incidents occurred in the past year; 44 percent didn't know what types of security incidents presented the greatest threats to the company; 42 percent couldn't identify the source of security incidents—whether the attack was most likely to have originated from employees (either current or former), customers, partners or suppliers, hackers, or others; and, finally, 67 percent said their organization didn't audit or monitor compliance with the corporate information security policy. According to this annual longitudinal research, many company leaders lack a well-rounded view of their information security compliance activities: "business and IT executives may not have a full picture of compliance lapses ... Fewer than half of all respondents say their organization audits and monitors user compliance with security policies (43 percent) and only 44 percent conduct compliance testing" (www.pwc.com/extweb/insights.nsf/docid/0E50FD887E3DC70F852574DB005DE509/$File/Safeguarding_the_new_currency.pdf).
Rampant data breaches of millions of records in 2009 further speak for themselves, demonstrating widespread inadequacies in corporate information handling (www.privacyrights.org/ar/ChronDataBreaches.htm). Meanwhile, each of those breached records is attached to a company or a consumer potentially harmed by the disclosure.

It's indisputable that lax information security and vulnerable products erode commercial trust and impose costs on third parties—business partners, shareholders, consumers, and the economic system as a whole. The reason for this arises from the nature of information risk: its impact is inherently transitive. This means that if a company fails to secure another company's information, the negative effects to the shared data are similar to those that would have occurred if the original company had been breached itself (for example, banks affected by data breaches have argued that they can't continue to absorb the downstream costs of other companies' information security mistakes [1]). In practice, this means that negative financial externalities are imposed on individuals and companies not responsible for the data loss. Furthermore, information stolen about individual consumers is sometimes used for identity theft. Harms to social institutions also occur. The social security system, for example, has been threatened in part due to rampant social security number vulnerability [2]. Similarly, the integrity of social structures, such as law enforcement and the criminal justice system, is negatively affected by information crime. For instance, identity thieves sometimes identify themselves using a victim's personal information when charged with a crime.

The proper calculus with respect to information security adequacy should turn on the simple ethical question: "Have we verified that our products and operations don't cause avoidable harm to others?" This duty not to harm can be operationalized in information security practices in at least two ways. First, it involves timely, fair, and accurate disclosure of the existence of security vulnerabilities that put consumers, business partners, and the social system at risk, thereby enabling these affected parties to mitigate their exposure to information risk. Second, it involves due care in research and development, as well as auditing and updating information security practices to stay in step with the state of the art. To date, neither of these practices is a universal norm of corporate conduct. Further, current legal regimes aren't robust; the law is currently inadequate to enforce this duty not to harm [3].

An impactful information gap exists, which vulnerability researchers help to close. Without this intermediation, it's unlikely that meaningful improvements in information security will occur in a timely manner. Meanwhile, the consequences of widespread vulnerability carry heavy social costs.
Vulnerability Research: Neither Unethical nor Illegal
Increasingly, ethics scholars are recognizing the existence of core ethics standards that apply to all commercial activities. They point to factors such as acting honestly and in good faith, warn against conflicts of interest, require the exercise of due care, and emphasize fairness and just results. (In "Confronting Morality in Markets," Thomas Dunfee and N.E. Bowie argue that morality is expressed within markets and could result in pressures on organizations to respond [4].) Perhaps the most basic of benign moral concerns in ethics is the duty to avoid knowingly or recklessly harming others—that is, the duty not to harm.

Some critics of vulnerability research assert that it's inherently unethical, presumably because it involves testing systems and analyzing products created and maintained by someone other than the researcher (http://searchsecurity.techtarget.com/magazineFeature/0,296894,sid14_gci1313268,00.html). If we apply the ethics principle of the duty not to harm, however, a strong argument exists that at least a portion of vulnerability research is ethical and, in fact, ethically desirable. Provided that vulnerability research is technologically nondisruptive, doesn't damage the functionality of the products and systems it tests or otherwise harm third parties, the ethical duty not to harm appears to be met. Additionally, the goal of some vulnerability research is explicitly to prevent or mitigate harm occurring to third parties because of vulnerable products and operations whose creator has failed to disclose the danger. As such, we can argue that the ethical duty not to harm might even mandate vulnerability research in some cases: the community of researchers possessing special skills to protect society from vulnerable products could have a moral obligation to use these skills not only for personal gain but also for the benefit of society as a whole. (For some ethicists, corporations might have ethical obligations to exercise unique competencies for societal good [5].)
Perhaps the most superficially potent objections vulnerability research skeptics have raised involve the law. First, critics assert that vulnerability research is unnecessary and, second, that all such research is, by definition, "illegal" because it violates state and federal computer intrusion statutes or intellectual property rights. On the point of vulnerability research being superfluous, critics state that acting responsibly for a business entity in the area of information security simply means complying with the law and that the law defines what constitutes good business practices. This objection fundamentally misunderstands the relationship between responsible corporate conduct and legal regulation. Law is merely a floor of conduct, not a marker of best practices or ethical conduct. Leading ethicists have explicitly rejected the idea that law and business ethics necessarily converge [6]. Furthermore, although both US and international regulators are beginning to take action in the realm of information security regulation, legally speaking, the field is still in its infancy. To date, the information security legal regime adopted in the US to address issues of vulnerability is an imperfect patchwork of state and federal laws, widely critiqued in legal scholarship [7]; it's also barely a decade old, doctrinally inconsistent, and in a state of flux [3]. A need for timely, fair, and accurate disclosure of the existence of information security problems arises from the ethical duty not to harm, regardless of the state of the law. By the time disclosure is legally mandated, irreparable harm has usually occurred. In fact, we can view the law as creating negative incentives for correcting security vulnerabilities: because contract law has allowed technology producers to disclaim essentially all liability associated with their products, there are limited financial incentives for these producers to disclose the existence of vulnerabilities and fix products promptly so as to avoid lawsuits. Vulnerability research fills an information void the law doesn't adequately address.

Although it's likely that a court would construe some forms of vulnerability research to be in violation of state or federal computer intrusion statutes, it's equally likely that some forms of this research would be deemed legally permissible. Even intellectual property rights have recognized limits at which concerns of consumer harm exist. In fact, Congress has encouraged vulnerability research in certain instances—for example, in the Digital Millennium Copyright Act, Congress explicitly protects research designed to test the privacy-invading potential and security implications of particular digital rights management technology [8]. Furthermore, the exact construction of the Computer Fraud and Abuse Act, the leading federal computer intrusion statute, is a subject of much debate and dissension, even among federal appellate courts. (For example, critics have analyzed decisions of the 7th Circuit [9] and the 9th Circuit [10] as standing in direct contradiction of each other with regard to whether an employee who accesses employer files and uses that information for his own purposes has committed a violation of the Computer Fraud and Abuse Act.) Its interpretation and meaning are far from clear. It's not obvious, for example, which forms of vulnerability research are prohibited by law. In the absence of clear legal guidance, however, it's essential that the community of vulnerability researchers commence a dialogue on self-regulation, best practices, and the boundaries of ethical conduct.
Crafting Norms of Vulnerability Research
Using a case study from research conducted at Columbia University, we propose several possible "best practices" in vulnerability research that we believe should be incorporated into a vulnerability researchers' code of conduct. Research demonstrates that the existence of corporate codes of conduct addressing ethical behavior is significantly related to whether employees behave ethically [11]. In particular, codes that clearly stipulate standards for information security conduct and sanctions for data mishandling are likely to generate more ethical conduct [12]. Because inappropriately done vulnerability research can cause significant harm to systems and the people who rely on them, this type of research should be undertaken with care.

Our work at Columbia University looked at vulnerabilities in routers and other embedded networked devices as they are deployed across the Internet rather than strictly confined to an isolated laboratory. Such embedded networked devices have become a ubiquitous fixture in the modern home and office as well as in the global communication infrastructure. Devices like routers, NAS appliances, home entertainment appliances, Wi-Fi access points, webcams, voice-over-IP appliances, print servers, and video conferencing units reside on the same networks as our personal computers and enterprise servers and together form our world-wide communication infrastructure. Widely deployed and often misconfigured, they constitute highly attractive targets for exploitation.

We conducted a vulnerability assessment of embedded network devices within the world's largest ISPs and civilian networks spanning North America, Europe, and Asia. Our goal was to identify the degree of vulnerability of the overall networking infrastructure and, having devised some potential defenses, to determine their practicality and feasibility as a reactive defense. To give a sense of the problem's scale, we provide some quantitative data. In our vulnerability assessment, we scanned 486 million IP addresses, looking for a trivial vulnerability: embedded systems with a default password setting to their telnet or Web server interface. Out of the 3 million Web servers and 2.8 million telnet servers discovered, 102,896 embedded devices were openly accessible with default administrative credentials (username and password). Some of these devices were routers or devices managing or controlling the connectivity of hundreds (or thousands) of other devices. Other unprotected devices such as video conferencing units, IP telephony devices, and networked monitoring systems can be exploited to extract vast amounts of highly sensitive textual, audio, and visual data.
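
To make the notion of a "trivial vulnerability" concrete, the sketch below shows the kind of check involved: attempting a handful of factory-default credentials against a single device's Web management interface. It is a minimal illustration, not the assessment tooling the study used; the target address, URL path, and credential list are hypothetical, and such a probe should only ever be run against devices you own or are explicitly authorized to test.

```python
# Minimal sketch: test whether a device you are AUTHORIZED to assess still
# accepts factory-default credentials on its Web management interface.
# The target, path, and credential pairs below are illustrative placeholders.
import base64
import urllib.error
import urllib.request

TARGET = "http://192.0.2.1/"  # documentation address; replace with an authorized test device
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def accepts_default_credentials(url: str, creds) -> list[tuple[str, str]]:
    """Return the credential pairs the device accepts (HTTP 200 behind Basic auth)."""
    accepted = []
    for user, password in creds:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
        try:
            with urllib.request.urlopen(request, timeout=5) as response:
                if response.status == 200:
                    accepted.append((user, password))
        except (urllib.error.HTTPError, urllib.error.URLError):
            # A 401/403 or a connection failure means this pair (or the probe) was rejected.
            continue
    return accepted

if __name__ == "__main__":
    hits = accepts_default_credentials(TARGET, DEFAULT_CREDS)
    print("default credentials accepted:", hits or "none")
```

The study's assessment aggregated the outcomes of checks of this kind across millions of addresses; no individual device was reconfigured or otherwise altered.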
In trying to devise defenses for such devices, however, we're forced to acknowledge and think about the ethical questions this technical reality raises: such devices constitute "network plumbing," which few people want to think about or spend much time tinkering with, except when it visibly fails. Even with the active support of router manufacturers, expecting that users would update their embedded systems (which aren't as straightforward to update as the typical desktop or laptop operating system) isn't an optimal strategy from the standpoint of minimizing harm. In fact, anecdotal evidence suggests that publicizing the vulnerabilities we knew about in the form of a vendor-approved or even vendor-supplied software patch would likely cause more damage—such a move would attract the attention of previously unaware attackers and create a sense of urgency in exploiting these vulnerabilities before they "disappear." Reactive defenses, on the other hand, could sidestep these issues by hardening those systems without any action by the device owners, in response to a detected attack (whether against a specific device or the network as a whole).

However, this entire line of research raises a slew of ethical, moral, and even legal questions. Is our vulnerability assessment of the Internet (or a large fraction of it) ethical? Is our disclosure of the assessment and its results ethical? What of the contemplated defenses? Although proactively deploying our defenses without owners' consent across the Internet would likely be viewed as unethical, there's also a reasonable expectation that in the event of a major cybersecurity incident, an organization such as the US Department of Homeland Security would choose to employ such means to defend the critical infrastructure. Where, then, do qualified security professionals lie on this spectrum? What about someone who discovers a weakness in such an attack and rapidly develops and deploys a countermeasure that uses the same attack vector to install itself on still-vulnerable systems? Crude attempts along these lines could be seen in the CodeRed/CodeGreen engagement and Li0n/Cheese worms in 2001, and the Santy/anti-Santy Web worms in 2004.
Based on our experiences and discussions conducting this research, we propose the following best practices for ethical vulnerability research.
Disclose Intent and Research
As a first step, the research's intent should be publicly announced, including details about the methods involved in acquiring data or testing devices or products for vulnerabilities. Open communication of this information can be easily accomplished through a well-publicized Web site, such as Columbia University's home router vulnerability assessment Web page at www.hacktory.cs.columbia.edu.
Seek Legal Counsel Prior to Starting Research
Active debate is under way in the courts and legal academic community regarding the appropriate construction of computer intrusion and intellectual property statutes, and researchers should consult advice of counsel prior to commencing a project whenever practicable. Possible sources of inexpensive legal advice for researchers include intellectual property law clinics operated by law schools, university general counsel, law firm pro bono initiatives, and nonprofit organizations concerned about issues of civil liberty and consumer protection such as the Electronic Frontier Foundation.
Be Proactive about Data Protection
At every stage of the research, the team must be informed about the nature of the data and the need to safeguard it. There are several important considerations.

• All members of the research team should receive, review, and preferably sign a "best data practices policy" that states the rules of research conduct and information handling that the principal investigator sets forth for the lab or project at hand. Because graduate students and other individuals working on the research team might be unfamiliar with the data collection, handling, and use practices the principal investigator expects, obtaining the entire team's agreement on the rules of data protection for the project prior to starting will help prevent misunderstandings and careless errors. In the unlikely event that a team member engages in unethical behavior, the existence of this policy demonstrates that the unethical conduct was indeed a transgression, even if the conduct falls in an ethical gray area.
• Access to any sensitive data used as part of or obtained during the research should be carefully safeguarded, with access limited on a "need-to-know" basis only. Certainly, security practitioners must safeguard the customer's confidential information about its own security posture.
• Finally, data should be anonymized to the greatest extent possible in accordance with current capabilities (a minimal sketch of one approach follows this list).
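
As one illustration of the anonymization point above, the sketch below pseudonymizes device IP addresses with a keyed hash and coarsens them to their enclosing network before storage, so that analyses of prevalence can proceed without retaining raw addresses. It is a minimal example under assumptions of our own choosing (the key handling, field names, and /24 truncation are illustrative, not the study's practices), and keyed pseudonymization remains reversible by whoever holds the key, so the key itself must be protected or discarded.

```python
# Minimal sketch: pseudonymize IPv4 scan records before storage so raw
# addresses are not retained. Key handling, field names, and the /24
# truncation are illustrative choices, not the Columbia study's practices.
import hashlib
import hmac
import ipaddress
import os

# In practice the key would be provisioned and protected by the PI, not generated ad hoc.
SECRET_KEY = os.urandom(32)

def pseudonymize_ip(ip: str) -> str:
    """Return a stable keyed pseudonym for an address (not reversible without the key)."""
    digest = hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()
    return digest[:16]

def coarsen_ip(ip: str) -> str:
    """Drop the host portion, keeping only the enclosing /24 network."""
    network = ipaddress.ip_network(f"{ip}/24", strict=False)
    return str(network)

def anonymize_record(record: dict) -> dict:
    """Replace the raw address in a scan record with pseudonymous and coarsened forms."""
    ip = record.pop("ip")  # the raw address is removed from the stored record
    return {
        **record,
        "ip_pseudonym": pseudonymize_ip(ip),
        "ip_prefix": coarsen_ip(ip),
    }

if __name__ == "__main__":
    sample = {"ip": "192.0.2.17", "service": "telnet", "default_credentials": True}
    print(anonymize_record(sample))
```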
Further Knowledge in the Field
Basic research should further the state of knowledge in the field of information security. General results of a scientific or technical nature revealing the scale and scope of significant vulnerabilities should be widely published in the scientific literature; publications should reveal sufficient detail to help other researchers devise new vulnerability assessment methods.
Report Serious Vulnerabilities
Any significant findings of harmful vulnerabilities should be reported, directly or indirectly, to the people who can best correct the problems. The optimal channels for this disclosure are currently a matter of debate in both the legal and information security communities. At present, each principal investigator should assess the unique facts and circumstances of the vulnerability and apply the duty not to harm. In other words, an ethical vulnerability researcher will determine which methods of notification are most likely to limit harm to third parties, in particular, users of the vulnerable product and those who rely on its use. In short, the goal of an ethical security researcher's disclosure is always to minimize harm, to reinforce through conduct the goals of the research stated prior to commencement, and to improve the overall state of information security.
Prepare the Next Generation of Professionals and Researchers
To better train the next generation of security professionals and researchers to combat information security harms, the computer science curriculum should include penetration testing and real-world exploitation techniques. Building and auditing systems effectively to minimize harm requires knowledge parity in practical skills between malicious actors and the security champions who seek to protect innocent third parties.
The technical considerations of any security professional's basic training are quite challenging. The practice of security professionals is perhaps far more complex when we consider the moral and ethical challenges that confront each of us when we apply our knowledge and skills to protect the systems on which we depend.
References
1. S. Gaudin, "Banks Hit T.J. Maxx Owner with Class-Action Lawsuit," Information Week, 25 Apr. 2007; www.informationweek.com/news/internet/showArticle.jhtml?articleID=199201456.
2. A.M. Matwyshyn, "Material Vulnerabilities: Data Privacy, Corporate Information Security and Securities Regulation," Berkeley Business Law J., vol. 3, 2005, pp. 129–203.
3. A.M. Matwyshyn, "Technoconsen(t)sus," Wash. Univ. Law Rev., vol. 85, 2007, pp. 529–574.
4. N.E. Bowie and T.W. Dunfee, "Confronting Morality in Markets," J. Business Ethics, vol. 38, no. 4, 2002, pp. 381–393.
5. T.W. Dunfee, "Do Firms with Unique Competencies for Rescuing Victims of Human Catastrophes Have Special Obligations?" Business Ethics Q., vol. 16, no. 2, 2006, pp. 185–210.
6. T.W. Dunfee, "The World is Flat in the Twenty-First Century: A Response to Hasnas," Business Ethics Q., vol. 17, no. 3, 2007, pp. 427–431.
7. P.M. Schwartz, "Notifications of Data Security Breaches," Michigan Law Rev., vol. 105, 2007, pp. 913–971.
8. Digital Millennium Copyright Act, US Code, Title 17, section 1201(i)–(j).
9. Int'l Airport Centers, LLC et al. v. Jacob Citrin, Federal Reporter, 3rd Series, vol. 440, 2006, p. 419 (US Court of Appeals for the 7th Circuit).
10. LVRC Holdings v. Brekka, Federal Reporter, 3rd Series, vol. 581, 2009, pp. 1127, 1137 (US Court of Appeals for the 9th Circuit).
11. R.C. Ford and W.D. Richardson, "Ethical Decision Making: A Review of the Empirical Literature," J. Business Ethics, vol. 13, 1994, pp. 205–221.
12. M.S. Schwartz, T.W. Dunfee, and M.J. Kline, "Tone at the Top: An Ethics Code for Directors?" J. Business Ethics, vol. 58, no. 1, 2005, pp. 79–100.
Andrea M. Matwyshyn is an assistant professor of Legal Studies and Business Ethics at the Wharton School at the University of Pennsylvania. Her research focuses on the intersection of information security and privacy regulation, corporate law, and technology policy. She is the editor of Harboring Data: Information, Security, Law, and the Corporation (Stanford Press, 2009). Contact her at amatwysh@wharton.upenn.edu.

Ang Cui is a graduate research assistant with the Department of Computer Science at Columbia University. His research interests include next-generation botnets and the defense and exploitation of routers and other network embedded devices. Contact him at ang@cs.columbia.edu.

Angelos D. Keromytis is an associate professor with the Department of Computer Science at Columbia University and director of the Network Security Lab. His research interests revolve around most aspects of security, with particular interest in systems and software security, cryptography, and access control. Keromytis has a PhD in computer science from the University of Pennsylvania. He's a senior member of the ACM and IEEE. Contact him at angelos@cs.columbia.edu.

Salvatore J. Stolfo is a professor of computer science at Columbia University. His research interests include computer security, intrusion detection, machine learning, and parallel computing. Stolfo has a PhD in computer science from New York University's Courant Institute. Contact him at sal@cs.columbia.edu.
    The authors review the empirical literature in order to assess which variables are postulated as influencing ethical beliefs and decision making. The variables are divided into those unique to the individual decision maker and those considered situational in nature. Variables related to an individual decision maker examined in this review are nationality, religion, sex, age, education, employment, and personality. Situation specific variables examined in this review are referent groups, rewards and sanctions, codes of conduct, type of ethical conflict, organization effects, industry, and business competitiveness. The review identifies the variables that have been empirically tested in an effort to uncover what is known and what we need to know about the variables that are hypothesized as determinants of ethical decision behavior.