Don’t make excuses! Discouraging
neutralization to reduce IT policy violation
Jordan B. Barlow a, Merrill Warkentin b,*, Dustin Ormond b, Alan R. Dennis a
a Department of Operations and Decision Technologies, Kelley School of Business, Indiana University, USA
b Department of Management and Information Systems, College of Business, Mississippi State University, USA
article info
Article history:
Received 11 March 2013
Received in revised form
25 April 2013
Accepted 31 May 2013
Keywords:
IT security
Policies
Neutralization
Deterrence
Rationalization
Message framing
Training
Awareness
Compliance
abstract
Past research on information technology (IT) security training and awareness has focused
on informing employees about security policies and formal sanctions for violating those
policies. However, research suggests that deterrent sanctions may not be the most
powerful influencer of employee violations. Often, employees use rationalizations, termed
neutralization techniques, to overcome the effects of deterrence when deciding whether or
not to violate a policy. Therefore, neutralization techniques often are stronger than
sanctions in predicting employee behavior. For this study, we examine “denial of injury,”
“metaphor of the ledger,” and “defense of necessity” as relevant justifications for violating
password policies commonly used in organizations (Siponen and Vance, 2010). Initial research on neutralization in IS security has shown that results are consistent
regardless of which type of neutralization is considered (Siponen and Vance, 2010). In this
study, we investigate whether IT security communication focused on mitigating neutral-
ization, rather than deterrent sanctions, can reduce intentions to violate security policies.
Additionally, because the effects of message framing in persuading individuals against security policy violations are largely unexamined, we predict that negatively-framed communication will be more persuasive than positively-framed communication. We test
our hypotheses using the factorial survey method. Our results suggest that security
communication and training that focuses on neutralization techniques is just as effective
as communication that focuses on deterrent sanctions in persuading employees not to
violate policies, and that both types of framing are equally effective.
© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
Violation of organizational information technology (IT) secu-
rity policies is a common problem in organizations
(Warkentin and Willison, 2009). These violations range from
sharing passwords with coworkers and others who have ac-
cess privileges (Willison and Warkentin, 2013) to the theft of
large sums of money through the abuse of workplace tech-
nologies. However, the distinction between large and small violations is not always clear; for example, sharing a password may seem like a small violation, yet sharing it with a malicious coworker could have serious consequences. Therefore, all security policy violations should be a concern to both researchers and practitioners. In an effort to
* Corresponding author. Department of Management and Information Systems, College of Business, Mississippi State University, P.O. Box 9581, Mississippi State, MS 39762, USA. Tel.: +1 662 325 1955.
E-mail addresses: m.warkentin@msstate.edu, merrillwarkentin@hotmail.com (M. Warkentin).
Available online at www.sciencedirect.com
journal homepage: www.elsevier.com/locate/cose
computers & security 39 (2013) 145–159
0167-4048/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.cose.2013.05.006
improve compliance with information security policies, re-
searchers and practitioners strive to understand why people
violate security policies (Puhakainen and Siponen, 2010).
While much security research has focused on technical issues
for improving security, a growing research stream focuses on
behaviors of individuals (Crossler et al., 2013; Furnell and
Clarke, 2012).
One problem in particular is that individuals who have
authorized access to protected systems often use neutraliza-
tion techniques, or rationalizations, to justify deviant actions,
even when strong organizational sanctions exist for viola-
tions. For example, employees may choose to share a network
password because they rationalize that no one is being injured
as a result of their actions. These rationalizations cause even
non-malicious employees to knowingly violate security pol-
icies (Guo et al., 2011; Siponen and Vance, 2010).
By rationalizing their motivations, employees attempt to
reduce their guilt or shame for intending to violate IT policies.
In their minds, these rationalizations make their actions seem
more normal or more necessary than is actually the case
(Siponen and Vance, 2010). As a result, communication from
the organization around IT security, including organizational
security training and other awareness programs, that focuses
simply on policies or sanctions may be less effective in
reducing policy violations than communication that is framed
to convince employees not to use rationalizations to violate
policies.
In addition to reacting to security policy violations by
applying sanctions to employees who exhibit deviant
behavior, organizations must also use proactive measures to
deter and prevent such abuse (Straub and Welke, 1998),
including the implementation of security education, training,
and awareness (SETA) programs (Whitman, 2003). While such
programs are common, they often only inform employees
about policies and about consequent sanctions for violations
of such policies (Straub and Welke, 1998). Organizations should also adopt tools and practices that reduce employee tendencies to rationalize their behavior. Improved
training techniques and other communication that focus on
reducing rationalization behaviors may be the key in helping
employees understand that policy-breaking is neither com-
mon nor acceptable. Because neutralization techniques often
are stronger than sanctions in influencing intentions to
violate (Siponen and Vance, 2010), researchers and practi-
tioners should combat neutralization techniques directly
through persuasive communication to employees, including
security training programs. Properly structured communica-
tion may ultimately lead to fewer violations of IT security
policies. Thus, for this study, we seek to answer the following
question:
Does the content focus of IT security communication decrease employee intentions to violate security policy?
The remainder of the paper is organized as follows: first,
we outline the general theoretical background and previous
literature on security policy violations, SETA, and neutraliza-
tion techniques. Next, we present theoretically-grounded
hypotheses regarding specific effects of security communi-
cation on policy violation intentions. We discuss our method
for testing our hypotheses and summarize results. We
conclude with a brief discussion of the implications and lim-
itations of the current study.
2. Theoretical background
Information protection through IT security is becoming a
more serious issue as the amount of knowledge sharing and
online transactions among individuals and organizations in-
creases. Although security professionals and industry groups
are actively working to improve security, there is an
increasing need for non-technical users to perform security-
related behaviors, such as installing anti-virus software,
avoiding questionable emails, and using complex passwords
(Anderson and Agarwal, 2010; Bulgurcu et al., 2010; Johnston
and Warkentin, 2010; Liang and Xue, 2009, 2010; Warkentin
et al., 2011). Today, the greatest threat to security is insiders,
especially employees and former employees, not external
hackers (Warkentin and Willison, 2009), whether their actions
are accidental, volitional, or malicious.
Nonmalicious actions may result from poor training,
negligence, or human error. A recent survey indicated that 80
percent of chief information security officers (CISOs) believe
that employees and contractors present a greater threat to
their data than external hackers (Wilson, 2009). Employees
who believe that security procedures hinder their jobs may try to forgo them (Post and Kagan, 2007). Data breaches,
which cost an organization over $6 million per incident on
average (Ponemon Institute, 2010), are attributed to negligent
insiders, rather than malicious insiders, in over 40 percent of
the incidents in the U.S. and U.K. (Ponemon Institute, 2012a;
Wall, 2011).
Recent growth in personal mobile device use by employees
has introduced further opportunities for data leakage due to
carelessness and noncompliance (Ponemon Institute, 2012b).
One recent study indicated that password sharing, especially
privileged password sharing, is the root cause of many
expensive data security failures (Butler, 2012). A large U.S.
telecom company experienced a major data breach that was
apparently facilitated by common insider knowledge of
administrative passwords that were shared (Roberts, 2012). In
an empirical investigation of privacy breach causes, it was
found that human error has grown in significance in recent
years relative to malicious causes (Liginlal et al., 2009). A study
conducted by Symantec Corporation and the Ponemon Insti-
tute in 2009 found that over 75 percent of former employees
admit to taking company data without the employer’s
permission (Symantec Corporation, 2009).
Predictors of insider and employee security policy viola-
tions include individual propensity and moral beliefs (Hu
et al., 2011), perceived justice of punishment (Xue et al.,
2011), cognitive processing (Posey et al., 2011), and mandato-
riness of policies (Boss et al., 2009). While many articles focus
on the need for deterrence techniques to combat violations,
such as formal and informal organizational sanctions (e.g.,
D’Arcy et al., 2009), previous research has shown that employees often rely on moral reasoning (Myyry et al., 2009) or neutralization techniques to justify their deviant actions even
when strong organizational sanctions exist (Siponen and Vance, 2010). Implementation of deterrent sanctions may
even cause employees to feel that the organization does not
trust them, leading to the very behaviors that the sanctions
intended to combat (Posey et al., 2011).
The theory of neutralization techniques, which originated
from criminology research, defines several types of neutrali-
zation techniques and explains that deviant behavior occurs
partly due to the human tendency to rationalize certain be-
haviors (Sykes and Matza, 1957). Several fields and researchers
have since expanded the theory by defining additional
neutralization techniques (Copes, 2003; Willison and
Warkentin, 2013) and applying them to unique contexts
such as music piracy (Ingram and Hinduja, 2008) and software
piracy (Harrington, 1996). Siponen and Vance (2010) applied
neutralization theory to IS security policy research; they
investigated specific neutralization techniques and found
them to be more powerful than sanctions in predicting
employee violations of IT security policies. Subsequent
research confirms that neutralization techniques are an
important predictor of IT security violations (Warkentin et al., under review; Willison and Warkentin, 2010, 2013). Other
studies on IT security also allude to rationalization by regular
employees without specifically mentioning neutralization
techniques (Myyry et al., 2009; Willison, 2006).
IT security training and awareness has been shown to be
an important proactive measure to decrease security policy
violations (D’Arcy et al., 2009; Karjalainen and Siponen, 2011);
methods for implementing such training and awareness
programs include web-based tutorials, checklists, videos,
courses, handouts, reminders, and newsletters (Puhakainen
and Siponen, 2010). However, most studies on security
communication actions by organizational managers focus
only on policy awareness and deterrent sanctions. Perhaps
because negatively-framed messages are often more powerful
in persuading people to take certain actions (Levin et al., 1998),
organizations use deterrence-focused training in an attempt
to decrease employee violations (D’Arcy et al., 2009; Straub
and Welke, 1998).
However, because neutralization techniques are more
powerful than sanctions in determining whether employees
will violate policy, we argue that security communication in
various SETA programs should also focus on ways to mitigate
neutralization behaviors. Communication from the organi-
zation that addresses these rationalizations may be highly
effective because rationalizations are more often the cause of
policy violations than the lack of formal or informal punish-
ment by the organization (Siponen and Vance, 2010). There-
fore, properly focused communication that effectively
persuades against neutralization should be more powerful
than communication involving only awareness of policies and
sanctions. With proper persuasion, employees who face a
decision of whether or not to violate policy will be less likely to
rationalize their deviant intentions. Such communication can
reduce rationalization behaviors and ultimately intentions to
violate policy by helping employees realize that ration-
alizations are faulty and that the associated behavior is not
“normal.”
In addition to using neutralization theory to better design
security communication, we also use framing theory. Framing
theory states that the framing of a message (whether it is
positive or negative) can change the actions of the person
receiving the message (Kahneman and Tversky, 1979; Tversky
and Kahneman, 1981). The theory has been used to explain
behavior in several disciplines, including psychology (Fagley
et al., 2010; van Buiten and Keren, 2009), marketing
(Donovan and Jalleh, 1999; Grewal et al., 1994), and informa-
tion systems (Cheng and Wu, 2010).
Previous research developed a typology that differentiates
between risk framing, attribute framing, and goal framing
(Levin et al., 1998). In risk framing, the portrayal of the amount
of risk present in a situation is manipulated. In attribute
framing, an object’s attributes are portrayed with differing
valence (e.g., beef could be described as 75 percent lean or 25
percent fat). In goal framing, a hypothetical situation is por-
trayed as having either positive benefits for completing an
action or negative consequences for failing to complete that
action. Goal framing is most salient to the present study
because it addresses the impact of positively or negatively
framed communication on persuasion to take a certain action
(Anderson and Agarwal, 2010). Therefore, we use goal framing
in addition to neutralization theory to explore how security
communication can be properly framed to reduce intentions
to violate IT security policies.
There has been only limited application of framing theory
to IT security contexts (e.g., Anderson and Agarwal, 2010;
Angst and Agarwal, 2009; Shropshire et al., 2010). In some
cases, positively framed messages can increase a person’s
perceived subjective norms with regard to security behavior
(Anderson and Agarwal, 2010). When individuals have more
concerns for information privacy, positive framing is more
powerful than neutral framing in encouraging use of elec-
tronic health records (Angst and Agarwal, 2009). Negative
message framing is more powerful in encouraging use of de-
tective security software than preventive security software
(Shropshire et al., 2010). In each of these examples, re-
searchers examined the effects of framing on encouraging
security behavior. However, the effects of message framing in
persuading individuals against security policy violations are
largely unexamined, and thus form the foundation of the
present study.
3. Developing a model for communicating
about neutralization
3.1. Deterrence and neutralization
Most research on IT security policy violations is centered
around deterrence theory (D’Arcy and Herath, 2011; D’Arcy
et al., 2009; Herath and Rao, 2009; Hu et al., 2011; Lee and
Lee, 2002; Siponen and Vance, 2010; Straub and Nance, 1990).
Deterrence theory states that people rationally assess the risk
of consequences when deciding whether to commit crimes or
violate rules. According to this theory, the risk of conse-
quences is decomposed into severity, certainty, and celerity of
sanctions. If an individual feels that the severity, certainty,
and/or celerity of a possible negative consequence are high,
the associated behavior is judged to be risky. If the risks
associated with the behavior exceed the benefits or rewards,
the person is deterred from committing the action. In the IT security context, sanctions include formal sanctions (punishments created by the organization against those who break rules, e.g., negative performance reviews, low salary raises, loss of privileges, or termination) and informal sanctions (negative social consequences such as disapproval of peers, friends, or leaders as the result of an action; Siponen and Vance, 2010), often leading to feelings of guilt and shame. As
an employee feels more certain of formal consequences from
the organization or social consequences from others, or per-
ceives that those consequences will be more severe or swift,
he or she will perceive those actions as too risky and will be
less likely to violate the IT security policy. While various
research programs have provided somewhat differing results,
most research shows that deterrence is effective to some
extent (D’Arcy and Herath, 2011).
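The deterrence calculus described above can be sketched as a simple decision rule. The additive weighting, the 0–1 scales, and the threshold comparison below are illustrative assumptions for exposition; deterrence theory itself only posits that greater severity, certainty, and celerity of sanctions raise perceived risk, deterring the behavior when risk outweighs the expected benefit.

```python
# Illustrative sketch of the deterrence-theory decision rule.
# The equal weighting and additive combination are simplifying
# assumptions, not part of the theory itself.

def perceived_risk(severity: float, certainty: float, celerity: float) -> float:
    """Combine the three sanction dimensions (each on a 0-1 scale)."""
    return (severity + certainty + celerity) / 3.0

def is_deterred(severity: float, certainty: float, celerity: float,
                benefit: float) -> bool:
    """Deterred when perceived risk of sanctions exceeds perceived benefit."""
    return perceived_risk(severity, certainty, celerity) > benefit

# An employee who sees sanctions as severe and fairly certain but slow,
# weighed against a modest convenience benefit of violating the policy:
print(is_deterred(0.9, 0.8, 0.2, benefit=0.5))  # -> True
```

Neutralization, discussed next, effectively lowers the left-hand side of this comparison in the employee's mind, which is why sanctions alone often fail to deter.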
However, even in the presence of deterrence techniques,
employees frequently violate policies. The reasons for doing so may be malicious, but often are not (Guo et al., 2011; Warkentin et al., under review). When people make deliberate decisions in ethical situations rather than relying simply on intuition, those decisions tend to be more unethical (Zhong, 2011).
Such decisions likely occur because people tend to think of
reasons to justify their behavior. For example, employees may
choose to violate policies for reasons of convenience, and may
feel that no one is harmed in the process. Consider a security
policy that discourages employees from writing down passwords: because they cannot remember them, employees write them down anyway. Or consider a policy that requires a data encryption procedure, which employees feel is too time consuming to perform, for all customer data transferred to a USB drive. In other
words, those who violate ethical standards or security policies
often have rational, practical reasons for their behavior, and
may even believe that their actions are “normal” or similar to
the actions of other employees in their same situation (Ames,
2004; De La Haye, 2000; Flynn and Wiltermuth, 2010). Thus,
people rationalize violating the policy, believing their actions
are essentially not “wrong.”
Individuals who rationalize their behavior may feel shame
or guilt for violating the policy because they know and un-
derstand that the policies prohibit the behavior. An apparent
disconnect exists between feeling the action is justifiable and
perceiving the action is prohibited by policy. Neutralization
theory predicts that individuals resort to neutralization to
convince themselves that the policy violation is not a prob-
lem, so they can reduce or eliminate the shame or guilt they
experience when they knowingly violate security policies
(Siponen and Vance, 2010). Neutralization may even cause
violators to perceive their behavior to be less risky.
The original neutralization theory defined five commonly used types of neutralization; at least 12 more have since been identified (Willison and Warkentin, 2013). Siponen and
Vance (2010) evaluated six neutralization types in their orig-
inal research on effects of neutralization on IT security vio-
lations, omitting types they felt were less applicable to the
security context. Warkentin et al. (under review) use three
neutralization types in their work on the effects of neutrali-
zation and organizational justice on violations of security
policy. For this study, we examine “denial of injury” and
“metaphor of the ledger” (two types common to both these
studies) and “defense of necessity” (Siponen and Vance, 2010).
Initial research on neutralization in IS security has shown that
results are consistent regardless of which type of neutraliza-
tion is considered (Siponen and Vance, 2010). Thus, rather
than testing all possible types of neutralization, we chose to
focus on only three for simplicity of research design. These
three are relevant to the type of policy violation used in our
experimental study. That is, the three we focus on as exam-
ples are all relevant justifications for violating password pol-
icies that are commonly used in organizations. A panel of security experts also agreed with our selection of neutralization types, stating that they were realistic examples of reasons to violate a password policy.
3.2. Defense of necessity
The “defense of necessity” neutralization (Minor, 1981;
Siponen and Vance, 2010) is used when an individual who
intends to violate a policy rationalizes that they have no other
acceptable choice. The choice is out of their hands; therefore,
they should not feel guilty for violating the policy. Such a
person may even rationalize that if they are caught, they can
easily defend themselves against organizational sanctions
using this rationalization. In addition, the sanctions may
seem less risky than they otherwise would. For example, in-
dividuals who choose to violate a policy which states that
employees cannot download company data onto a flash drive
may deem it necessary to take the data home to meet a
deadline. Therefore, an individual may conclude that they
have no rational choice but to violate the policy, download the
data, and work on it at home. Hence, we hypothesize that:
H1a. Use of the “defense of necessity” neutralization is positively associated with intentions to violate IT security policies.
3.3. Denial of injury
The “denial of injury” neutralization (Siponen and Vance, 2010;
Sykes and Matza, 1957) is used when an individual who intends
to violate a policy rationalizes that no one will be hurt. If no one
is hurt by the action, the violator convinces himself that there is
no reason to feel shame or guilt. For example, suppose an
employee knows about a security policy which states that all
employees must use strong passwords on their work PC,
though no electronically-mandated enforcement is imple-
mented. An employee may rationalize that his or her work is
not confidential and someone with his or her password could
not cause real injury to the company. Therefore, an individual
may feel that the risks and consequences do not exceed the
benefits of using a simple, easily-remembered password, so he
or she may violate the policy. From this we propose:
H1b. Use of the “denial of injury” neutralization is positively associated with intentions to violate IT security policies.
3.4. Metaphor of the ledger
The “metaphor of the ledger” neutralization (Klockars, 1974;
Siponen and Vance, 2010) is used when an individual who in-
tends to violate a policy rationalizes that they have done enough good deeds to justify doing something against policy. An
employee may rationalize that he or she is entitled to one bad
act because the net benefit of his or her contributions to the
organization is still positive. Such an employee may feel that
any sanctions may be unwarranted and avoidable because
others would agree that this employee is overall a “good per-
son.” For example, suppose an employee works late every night for a month at an organization. One day, during regular business hours, the employee gets bored and feels the urge to browse the Internet and engage in online shopping for a relative’s birthday.
The employee knows that surfing the Web during business
hours is not allowed by policy. However, the employee ratio-
nalizes that because he or she has worked late over the last few
weeks, he or she deserves a break and is entitled to take some
company time to relax. Using this rationalization, he or she
decides to knowingly violate the policy. Therefore, we predict:
H1c. Use of the “metaphor of the ledger” neutralization is positively associated with intentions to violate IT security policies.
3.5. IT security communication focused on deterrence
and neutralization
Many researchers have focused on IT security training and
awareness as a means to reduce intentions to violate policy
(D’Arcy et al., 2009; Karjalainen and Siponen, 2011; Puhakainen
and Siponen, 2010; Siponen, 2000). Siponen (2000) reviews a
wide range of literature that discusses various frameworks and
tutorials for implementing training and awareness on security
policies. Such programs include formal trainings through
meetings and handbooks, as well as other communication
from the organization, such as reminders through posters or
screen savers. The purpose of our study is not to address the
best methods to carry out security communication through
training and awareness but to investigate the training effects
of general content focus and framing on intentions to violate
policies. Such design of security communication applies not
only to formal programs, but also to informal discussion
among users about adherence to security policies.
Many proposed techniques of training and awareness
focus on deterrence; in other words, making employees aware
about a policy and the negative consequences associated with
violating that policy (D’Arcy et al., 2009). A major reason for
training and awareness programs is to “convince potential
abusers that the company is serious about security and will
not take intentional breaches of this security lightly” (Straub
and Welke, 1998, p. 445).
Perhaps one reason that training and awareness programs
have focused so much on deterrence in the past is related to
the goal framing effect. The goal framing effect is the effect
that positively or negatively framed messages can have on
persuading a person to perform certain actions. Negative
(positive) framing of a message generally refers to relaying a
message that has a focus on negative (positive) consequences
for performing a given action. Negatively-framed messages
are powerful in situations where a person is faced with a
choice that has highly-visible possible negative consequences
and few visible positive consequences (Levin et al., 1998). For
example, this effect is often seen in health research where
negatively-framed messages are more powerful than
positively-framed messages in convincing people to take
preventative action to protect themselves from unwanted
health problems. Discussing the possible negative conse-
quences of smoking is more powerful in helping people quit
than discussing the positive benefits of quitting (Wilson et al.,
1990). The same effect has been studied in the context of heart
disease (Scott and Curbow, 2006), medical screening (Cox and
Cox, 2001), and other health research (Levin et al., 1998).
In the IT security policy domain, employees are faced with
similar dilemmas. In these cases, the consequences for
violating policies are clear when organizations make em-
ployees aware of formal sanctions. Based on the theory
behind the goal framing effect, we propose:
H2a. Individuals receiving persuasive communication focused on deterrent sanctions are less likely to form intentions to violate IT security policies than employees receiving training that has no such focus.
Employees frequently violate policies, even in the face of
deterrence techniques. Research on neutralization theory
suggests that such employees rationalize in order to reduce
the perceived negative consequences of deterrence in their
minds. In some situations where there is a perceived reason to
violate the policy, the powerful effects of negative framing
may be overcome by the neutralization. Therefore, organiza-
tions should not only supply awareness of deterrent sanc-
tions, but also make employees more aware of the tendency
and resultant problems of rationalizing behavior that is in
violation of security policies.
Neutralization-based communication should focus on
mitigating the neutralization common to a particular policy.
For example, a common neutralization for password policies
may be “denial of injury.” The common assumption that a
violation of password policy is not hurting anyone could be
false. Therefore, security communication could attempt to
reduce the likelihood of this neutralization by emphasizing
why the policy exists and how its violation could result in injury
to the company, customers, or employees. We then posit that:
H2b. Individuals receiving persuasive communication focused on mitigating neutralization are less likely to form intentions to violate IT security policies than employees receiving training that has no such focus.
Often, neutralization techniques can be more powerful
than sanctions in predicting employee violations of security
policies (Siponen and Vance, 2010). Employees often use these
techniques when they encounter strong sanctions, especially
when they feel their organization is treating them unjustly
(Warkentin et al., under review). Because neutralization is at
least as strong as sanctions in predicting employee behavior,
persuading against neutralization should be as useful as communicating about deterrent sanctions in combating intentions to violate security policies. We then posit that:
H2c. Individuals receiving persuasive communication focused on mitigating neutralization are equally likely to form intentions to violate IT security policies as individuals receiving persuasive communication focused on deterrent sanctions.
3.6. Utilizing the framing effect
Given that the negative framing effect is more powerful than
positive framing (Levin et al., 1998), researchers and practi-
tioners often focus on negative deterrent consequences when
developing security training and in other instances when
communicating about IT security. Because neutralization is
often used to rationalize in the presence of sanctions, these ef-
fects of negative framing are removed in the employee’s mind.
However, the negative framing effect could still be useful
for security communication whether it focuses on deterrence
or neutralization. Security communication related to
neutralization could be framed to convince an employee of
negative internal or moral consequences for violating security
policies, as well as a focus on the negative consequences for
others. In other words, security communication can be powerfully framed around negative consequences in more ways than simply focusing on punishments for the action at hand. For example, focusing on the possible consequences of
justifying the sharing of a password (e.g., other less trust-
worthy employees will gain access to confidential informa-
tion) may reduce the intention to violate the policy even when
the employee considers using neutralization.
Regardless of whether the consequences are deterrent sanctions, organizational loss, or internal shame, the negative
consequences for violating policies are easier to determine
than the benefits of compliance, which is not often tangibly
rewarded. As a result, the goal framing effect occurs. Discus-
sion of negative consequences becomes a more powerful
persuader than positively-framed training. Thus, whether
communication is focused on addressing neutralization or on
deterrent consequences, communication that accentuates
negative consequences will be more persuasive to employees.
We thus hypothesize that:
H3. Individuals receiving security communication that is negatively
framed (i.e., focused on avoiding negative consequences) are less
likely to form intentions to violate IT security policies than individuals receiving security communication that is positively framed (i.e., focused on the benefits of compliance with IT security policies).
In summary, we predict that while employees are more
likely to violate policies when using neutralization, organiza-
tions can combat this tendency using communication about
IT security that is focused around why employees should not
rationalize violations; such communication should be nega-
tively framed and consequence-based. Further, we propose
that communicating about neutralization is just as powerful
as communicating about punishment of actions. SETA pro-
grams and other communication could include discussions on
why the justification of behavior and subsequent policy
violation would have negative consequences. We next discuss
the method we propose to test our hypotheses.
4. Method
To test our hypotheses, we used the factorial survey method
(FSM) design (Jasso, 2006; Rossi and Anderson, 1982; Shlay
et al., 2005). FSM instruments are vignette-based
experiments in which each respondent reads several ver-
sions of a short story (vignette or scenario) with independent
variables randomly manipulated as elements embedded into
the sentences of the scenario (Taylor, 2006), thus "introducing more realistic complexity" (Lyons, 2008, p. 112).
In other words, participants read scenarios containing a sub-
set of the experimental treatments and then answer survey
questions based on their perceptions of that scenario.
Scenario-based methods are commonly used in security and
business ethics research (Herzog, 2003; Jasso, 2006; Seron
et al., 2006; Trevino, 1992; Weber, 1992) because it is difficult
to assess actual deviant behavior in the workplace by observation or by direct questioning. Further, the random assignment of the factors, which are approximately orthogonal (Lyons, 2008; Rossi and Anderson, 1982), ensures that the levels of the manipulated factors are not correlated with each other, as each level has an equal probability of assignment (Shlay et al., 2005). In addition to facilitating the experimental testing of multiple variables simultaneously in a realistic but complex instrument, FSM experiments allow for more straightforward analysis (Taylor, 2006).
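The random, approximately orthogonal assignment of factor levels described above can be sketched as follows. This is a minimal illustration with hypothetical level labels, not the authors' actual instrument:

```python
import itertools
import random

# Hypothetical labels for the three manipulated factors (3 x 3 x 4 = 36 vignettes).
FOCUS = ["none", "deterrence", "neutralization_mitigation"]
FRAMING = ["none", "negative", "positive"]
NEUTRALIZATION = ["none", "defense_of_necessity", "denial_of_injury",
                  "metaphor_of_the_ledger"]

# Every combination of factor levels defines one vignette.
ALL_VIGNETTES = list(itertools.product(FOCUS, FRAMING, NEUTRALIZATION))

def assign_vignettes(n_per_respondent=4, seed=None):
    """Randomly draw a subset of vignettes for one respondent.

    Because every vignette has an equal probability of being drawn, the
    manipulated factor levels are approximately orthogonal (uncorrelated)
    across a large pool of respondents.
    """
    rng = random.Random(seed)
    return rng.sample(ALL_VIGNETTES, n_per_respondent)
```

Each respondent reads only his or her drawn vignettes, which keeps the instrument short while the full design is covered across respondents.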
4.1. Participants
Participants in the study were full-time employees of U.S.
companies with experience using computers in the work-
place. The participants were recruited through Qualtrics, a
market research and survey method firm, which provided a
panel of qualified survey respondents. Qualtrics did not pro-
vide information on the total number of individuals who
initiated the survey, only the responses of those who
completed it. All subjects completed the survey anonymously.
To prevent survey fatigue and reduce learning effects and hypothesis guessing, each participant was presented with a random set of only four of the 36 possible scenarios. Because each completed survey thus yields four scenario responses, this method quadruples the effective sample size. Of the individuals recruited by Qualtrics, ninety employees completed the survey, resulting in a total sample size of 360 scenario responses.
4.2. Task
Each participant completed the task four times with four
different scenarios containing different treatments. An
example of a scenario is contained in Appendix A.
The first paragraph of each scenario gave an introduction
to a hypothetical company and presented a security policy
related to password sharing.
The next part of the scenario provided the deterrence and
neutralization mitigation treatments. In the deterrence
treatment a statement was provided about the deterrent
sanctions for violating the policy. In the neutralization miti-
gation treatment, a statement was provided encouraging
employees not to justify their behavior. In the baseline treat-
ment, neither of these statements was present.
The next part of the scenario presented the framing
treatment. In the negatively-framed treatment, negative
consequences of violating the policy were presented. In the
positively-framed treatment, a statement was provided giving
appreciation and positive support for compliance with the
policy. In the baseline treatment, neither of these framing
statements was present.
The next part of the scenario described a situation where a
particular employee of the hypothetical company violated a
security policy by sharing his computer password for various
reasons. Company names were randomly drawn from a list of
names that were vetted by an expert panel review to ensure
their neutrality with no regional or other bias.
After viewing the scenario, participants responded to
various questions related to the scenario, including an
assessment of whether the participant would be likely to
violate the policy under the same circumstances.
4.3. Experimental treatment
To test our hypotheses, we conducted a 3 (Focus: deterrence, neutralization mitigation, or neither) × 3 (Framing: negative consequences, positive benefits, or neither) × 4 (Neutralization Type: defense of necessity, denial of injury, metaphor of the ledger, or no neutralization) scenario-based factorial design. Each scenario demonstrated moderately high severity
and certainty of sanctions to control for deterrence effects. All
scenarios were critiqued at multiple expert panel reviews to
ensure clarity and realism of the scenarios and manipula-
tions, as recommended by Lanza (1988) and Lauder (2002).
The focus of the security awareness communication was
manipulated by either including a section on deterrence or
neutralization mitigation. In the deterrence treatment, the
training program reiterated the serious consequences
enforced by the company for violating policies. In the
neutralization mitigation treatment, the scenario stated that
security policies are put in place to prevent harm to the
company or customers, even when this is not evident, and
that employees should not justify violations of the policy.
The framing of a scenario was manipulated by framing the
security communication in a negative way, a positive way, or
not at all. In a negatively-framed treatment, the scenario
stated that sharing passwords may result in more malicious
deviant behavior by the employee with whom the password is
shared. In a positively-framed treatment, the scenario stated
that compliance with the policy is important to the organi-
zation and that employees play a key role in helping the or-
ganization keep information secure. The baseline treatment
contained neither statement.
Finally, the type of neutralization was manipulated by
changing (or omitting) a sentence at the end of the scenario
which included the rationalization for violating an organiza-
tional policy. For example, in a scenario where “denial of
injury” was experienced, the final sentence stated that the
employee “feels that no harm would result from sharing his
password.”
4.4. Dependent variable measurements
Each respondent was asked to rate the likelihood he or she
would violate the given security policy (Paternoster and
Simpson, 1996; Siponen and Vance, 2010). To avoid reliability
issues (Cook and Campbell, 1979), three items were used for
the dependent variable. The Cronbach’s alpha was 0.986,
indicating adequate reliability. Each item was measured on a
fully-anchored five-point Likert-type scale ranging from
strongly disagree to strongly agree (Warkentin et al., under
review). The measures are listed in Appendix B.
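For reference, Cronbach's alpha for a multi-item scale like this one can be computed from the item variances and the variance of the summed scale. A minimal sketch on hypothetical data, not the study's actual responses:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(scale total))
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Values near 1 indicate that the items move together; an alpha of 0.986 means the three intention items are nearly interchangeable.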
4.5. Procedure
Participants received a link to participate in the study from
Qualtrics. After completing the consent statement, each
participant answered two filter questions to ensure that he or
she had work experience in a company with formal policies
and experience using a computer. If a participant answered
negatively to either of the filter questions regarding his or her
experience in a workplace with formal policies and using a
computer terminal, the survey ended at that point and the
participant was not included in the study.
After the filter questions, participants viewed a scenario
followed immediately by three manipulation check questions.
These questions asked the respondents about the neutrali-
zation/deterrence focus, positive/negative framing, and
neutralization present in the scenario.
After the manipulation check questions, participants
responded to the three dependent variable items and two
other items: (1) a response set question to ensure responses
were based on a sincere reading of the question rather than
simply answering in patterns and not paying attention (e.g.,
“Select Disagree as the response to this question”) (Andrich,
1978; Kerlinger, 1973); and (2) an item measuring the
perceived realism of the scenario (Siponen and Vance, 2010).
Participants responded to these items for each of the four
scenarios. At the completion of four scenarios, participants
answered a set of demographic questions. Any participant
who did not fully complete the survey was excluded from the
study. See Appendix B for all measures.
Because common method bias is a serious concern for field
studies (Podsakoff et al., 2003), we followed the recommen-
dations of Podsakoff et al. (2003) to address a number of spe-
cific threats to common method bias. First, we addressed the
threat of social desirability, the tendency to respond to ques-
tions in a culturally acceptable way, in two ways: (1) By using
the scenario technique, rather than direct questions, partici-
pants are more likely to give true responses about intentions
to violate rules (Trevino, 1992). (2) We assured participants
that responses were completely anonymous. We did not
receive any personally identifiable information from Qualtrics.
Next, as recommended by Podsakoff et al. (2003), we ran-
domized the set of scenarios that each participant received in
order to reduce tendencies to answer questions about each
scenario based on previous scenarios (the "halo effect"). We also addressed common method bias by incorporating a response set question to ensure that participants would not simply provide automatic responses without reading and evaluating each question. Further, the manipulation check questions were used to ensure attention to responses; any incorrect response to a manipulation check question resulted in discarding the responses for that scenario.
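The filtering rule described above amounts to a per-scenario validity flag. A minimal sketch, where the column names are hypothetical rather than taken from the authors' instrument:

```python
import pandas as pd

def keep_valid_responses(df):
    """Keep only scenario responses whose manipulation-check and
    response-set items were all answered correctly.

    Expects one row per scenario response, with boolean columns
    (hypothetical names) marking each check as passed.
    """
    checks = ["check_focus", "check_framing", "check_neutralization",
              "response_set"]
    return df[df[checks].all(axis=1)].copy()
```

Discarding at the scenario level, rather than dropping the whole participant, preserves the valid responses each participant did give.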
Another important issue was to ensure realism of the
scenarios to encourage more valid responses. This was
addressed in two ways. First, the scenarios and questions
were reviewed by two expert panels to ensure they were
perceived to be realistic. Second, we included the realism
question in the survey in order to control for the effects of how
realistic the scenario was (Siponen and Vance, 2010). Finally,
the response set and realism questions were intermixed with
the dependent variable items to avoid grouping constructs
(Podsakoff et al., 2003).
5. Results
Ninety individuals passed the filter questions and fully
completed the survey, answering four scenarios each, result-
ing in a sample size of 360. The demographic distribution of
the sample varied, and the majority of respondents had over
10 years of work experience (see Table 1).
Of the 360 scenario responses, 103 contained at least one
incorrect response to an experimental manipulation check
question or response set question. These responses were
discarded from the dataset (Andrich, 1978; Kerlinger, 1973;
Rennie, 1982), resulting in 257 usable scenario responses. A power analysis indicated that this would be a sufficient sample size to detect a main effect of at least effect size f² = 0.25.
OLS regression is a preferred technique for the factorial
survey design (Rossi and Anderson, 1982; Shlay et al., 2005)
due to the ease of interpreting the coefficients. However, Rossi
and Anderson (1982) note that any multivariate technique that
fits the data can be used; they suggest that in cases where a
normal distribution is not present, logistic regression is a
suitable alternative. Because our data did not meet the as-
sumptions for OLS regression, we chose to use repeated-
measures logistic regression (see Appendix C for details on
assumption checking and procedure selection). Repeated-measures logistic regression is done using the Generalized Estimating Equations (GEE) approach (Allison, 1999), an extension of generalized linear models that accounts for correlations among observations from the same subject (Zeger and Liang, 1986). Logistic regression predicts a binary response; in this case, whether the participant has intentions to violate the policy or not. The dependent variable was measured as the average score of three items ranging from one to five. Thus, we categorized the responses as those with a value higher than three (some intention to violate) and those with a score of three or lower (no intention to violate).
The results of this model on our data are summarized in
Table 2. As in OLS regression, categorical variables are represented with dummy variables; that is, for a categorical variable with m levels, the results show statistics for m − 1 levels, excluding the 'base' level of the variable to avoid linear dependency (Rossi and Anderson, 1982). In our dataset, the 'base'
level of each variable consists of the scenarios where the var-
iable was not represented in the scenario text. For example, for
the “focus” variable, the neutralization focus and the deterrence focus are the two levels represented in the results. The scenarios that included neither a neutralization focus nor a deterrence focus make up the 'base' level of this construct. In this manner, statistics (e.g., parameter estimates, p values) indicate the difference between one level and the
‘base’ level of the construct. For example, in our model, the
statistics for the Denial of Injury neutralization compare the
scenarios with this type of neutralization to scenarios where
no neutralization was provided.
Hypothesis 1 was partially supported by the results of the
factorial survey. Those participants who viewed a scenario
with the “defense of necessity” neutralization had signifi-
cantly higher intentions to violate security policies than those
who viewed a scenario with no neutralization, giving support
for Hypothesis 1a. However, participants viewing scenarios
with the “denial of injury” and “metaphor of the ledger”
neutralizations did not have significantly higher intentions to
violate the password sharing policy; thus Hypotheses 1b and
1c were not supported.
For Hypothesis 2, both the neutralization-focused
communication and the deterrence-focused communication
resulted in significantly lower intentions to violate policies
than did scenarios where no focus statement was given, thus
supporting H2a and H2b. This indicates that additional
training beyond simple awareness of policies is effective to
reduce policy violations. To test Hypothesis 2c, a comparison
between the neutralization mitigation and deterrence focuses
is necessary. The GEE method provides follow-up analysis to
contrast other levels of the variables beyond the ‘base’ level.
The difference between the neutralization mitigation focus
and the deterrence focus was not statistically significant (χ² = 0.41, p = 0.521). Thus, Hypothesis 2c was supported.
Hypothesis 3 was not supported by the GEE model. Neither
the negative nor the positive framing of scenarios had a strong
enough effect on intentions to be significantly different from
the base scenario without a framing statement. The follow-up
contrast between the negative and positive framing levels was also not significant (χ² = 0.52, p = 0.470). The demographic
variables (gender, age, work experience, and level of educa-
tion) also did not have an effect on intentions to violate. A
summary of which hypotheses were supported is given in
Table 3.
Table 1 – Demographic information.

Gender
  Female                  51 (56.7%)
  Male                    39 (43.3%)
Age
  18–29                   21 (23.3%)
  30–39                   25 (27.8%)
  40–49                   20 (22.2%)
  50–59                   16 (17.8%)
  60+                      8 (8.9%)
Years of work experience
  0–4                      6 (6.7%)
  5–9                     22 (24.4%)
  10–19                   19 (21.1%)
  20+                     43 (47.8%)
Level of education completed
  Some high school         1 (1.3%)
  High school             20 (22.2%)
  Undergraduate degree    43 (47.8%)
  Graduate degree         26 (28.9%)
A variable coded as whether the scenario was the first to be
seen by the participant or not was used to test order effects. As
shown in Table 2, order had a statistically significant effect on
intentions, with first scenario being rated with higher in-
tentions to violate than subsequent scenarios. Additional
analysis was done to test order as a moderator of each of the main variables. These analyses indicate that the order in which a person viewed the scenarios did not affect the other relationships in the model. Furthermore, running a separate
model with only the scenario first seen by a participant
resulted in similar parameter estimates to the original model,
indicating that while participants tended to rate the first
scenario higher, the order of the scenarios did not have an
effect on the relationships in the model.
The perceived realism of the scenarios did not have a sta-
tistically significant effect on how highly the participants
rated their intentions to violate. Furthermore, testing a sepa-
rate model that discards responses with low rated realism did
not make notable changes to the results of the analysis.
6. Discussion
We hypothesized that neutralization would cause higher in-
tentions to violate security policies. This hypothesis was
partially supported by the results of the factorial survey. Par-
ticipants who viewed a scenario with the “defense of neces-
sity” neutralization had significantly higher intentions to
violate security policies than those who viewed a scenario
with no neutralization. These results give further evidence
that neutralization is an important predictor of the intentions
to violate security policies. However, participants viewing
scenarios with the “denial of injury” and “metaphor of the
ledger” neutralization did not have significantly higher in-
tentions to violate the password sharing policy.
Past research on neutralization has found all types of
neutralization to have similar effects (Siponen and Vance,
2010; Warkentin et al., under review), but our results suggest
that some neutralization types may be more powerful than
others, depending on the circumstances. It is possible that
“defense of necessity” is particularly salient for password
sharing scenarios. In over one third of the responses, participants indicated they would be likely to share a password (answering 'agree' or 'strongly agree' to the intention questions), suggesting that many view password sharing as a violation they would commit. In such situations where
violation of the policy seems somewhat acceptable, many
people would violate the policy if other priorities become
more important. It seems that the defense of necessity is a more acceptable rationalization for password sharing than concluding that one can violate policies because of one's good behavior or perceptions of not hurting others, at least for our sample of full-time employees of U.S. companies.
We also hypothesized that persuading employees not to
use neutralization would reduce policy violations and be as
powerful as providing information about deterrent sanctions.
Our results supported this hypothesis: neutralization mitiga-
tion resulted in lower intentions to violate the policy and was
as strong as focusing on deterrent sanctions. This leads us to
conclude that organizations should try to give adequate
attention to both deterrent sanctions as well as neutralization
mitigation when providing training on IT security policies.
Finally, we hypothesized that a negative framing of secu-
rity communication should be more powerful than a focus on
positive benefits of compliance. This hypothesis was groun-
ded in the framing effects theory. We did not find this effect in
our sample. It may be that the overall wording of the scenarios
presented a positive or negative framing that offset the spe-
cific framing statements.
7. Contribution
Our research offers several contributions to the literature on
behavioral IT security research. First, we offer unique, prac-
tical, and theoretically-justified ways to reduce IT security
policy violations by designing communication to mitigate
Table 2 – Repeated-measures logistic regression results.

                            Estimate   Std. error   Z       p
(Intercept)                  1.095      1.305       0.84    0.401
Defense of necessity (a)     1.026      0.360       2.85    0.004
Denial of injury             0.433      0.315       1.38    0.168
Metaphor of the ledger       0.295      0.351       0.84    0.400
Focus: neutralization (a)    0.908      0.248       3.66   <0.001
Focus: deterrence (a)        0.777      0.246       3.16    0.002
Framing: negative            0.140      0.226       0.62    0.536
Framing: positive            0.300      0.282       1.06    0.288
Order (a)                    0.655      0.222       2.95    0.003
Realism                      0.111      0.231       0.48    0.630
Gender                       0.144      0.435       0.33    0.741
Age                          0.237      0.295       0.80    0.422
Work experience              0.087      0.405       0.21    0.831
Education                    0.541      0.321       1.69    0.092

(a) Significant at the 0.05 level.
Table 3 – Summary of hypotheses.

H1a. Defense of necessity → intentions to violate                              Supported
H1b. Denial of injury → intentions to violate                                  Not supported
H1c. Metaphor of the ledger → intentions to violate                            Not supported
H2a. Communication of deterrent sanctions → lower intentions to violate        Supported
H2b. Communication to mitigate neutralization → lower intentions to violate    Supported
H2c. Intentions to violate after neutralization mitigation
     communication = intentions to violate after deterrence communication      Supported
H3.  Intentions to violate after negative training < intentions to violate
     after positive training                                                   Not supported
neutralization. Our study confirms that training focused on countering neutralization can be effective in reducing intentions to violate policies. Future research can further
refine our theory and design specific SETA programs and
techniques incorporating these findings.
Next, we show that some types of neutralization can be
more powerful than others depending on the context. Pass-
word sharing was shown to be a policy that a large portion of
people were willing to violate by justifying that it was neces-
sary in order for other priorities to be successful. Fewer
justified violating this policy based on “denial of injury” or
“metaphor of the ledger” techniques even though these were
judged as realistic rationalizations. While previous research
suggested that all neutralization types have similar effects
(Siponen and Vance, 2010), our results show that different
neutralization types have different effects, possibly depend-
ing on the type of security violation in question. This also has
implications for research. Results of studies examining a
subset of neutralization types cannot be generalized to all
other neutralization types.
Finally, we show that both neutralization mitigation and
deterrence have an effect on the intentions individuals form
toward security policy violation. The neutralization mitigation
and deterrence focus statements both resulted in lower in-
tentions to violate security policy. Thus, while one was not
more powerful than the other, we show that both are impor-
tant and should be incorporated into security training and
other aspects of communication from the organization about
IT security. These results apply not only to the design of
training and awareness programs, but also to informal dis-
cussion among employees about policies. Informal discussion
can, and often does, contradict the intent of formal training.
For example, peers or mentors often inadvertently empower
an employee to neutralize and violate a policy by saying in
passing, for example, that “few people actually follow this
password policy.” Future research should explore the effects
of informal discussion, especially as compared to the effects
of training and awareness programs.
Based on our results, we conclude that the ideal security
communication program should include training on both
deterrence and neutralization, and that such training does not
necessarily have to be negatively or positively framed. That is,
organizations need to include consequences for violating pol-
icies in their training, as well as focus on why behavior should
not be justified. While a general discussion on rationalization is
helpful, organizations may want to focus on those types of
neutralization that are most salient to the organization’s pol-
icies. For example, when discussing password violations, or-
ganizations should focus on fighting excuses related to the
“defense of necessity,” since this type of neutralization is likely
to have the strongest effects on intentions to violate password
policies.
8. Limitations and future research
One limitation of this research is that we examined only regular
employee violations of minor IT security policies. Extreme
policy breakers (e.g., those who steal large amounts of money)
usually realize that their behavior is not “normal” or common
and may not neutralize in the same manner as the majority.
Previous research points out three distinct levels of violation:
passive, volitional, and malicious (Guo et al., 2011; Warkentin
et al., under review). This paper focuses on those employees
who may be volitional violators, those who consciously violate
policies but who do so for minor reasons and are not malicious.
Further research could examine how training may or may not
influence extreme or malicious violators.
Often, months or years pass between the time an employee
or other insider is trained on or made aware of security pol-
icies and the consequences of compliance with or violation of
these policies. Thus, another limitation of the current study is
that the participants responded concerning their intentions
only moments after receiving the training. Future studies
should examine longitudinal effects.
Finally, a common limitation of survey research is that
participants often do not pay attention or fully cooperate
during a study. However, we believe that our extensive
filtering of participants through manipulation check and
response set questions allowed us to achieve relatively high
quality data for Web research.
9. Conclusion
This research discusses the need for proper design of security
training and awareness programs. While both deterrence and
neutralization affect employees’ intentions to violate IT security policies, these intentions can be reduced by proper training that addresses neutralization techniques rather than following the traditional approach of focusing only on deterrence and awareness.
Appendix A. Scenarios
Baseline scenario
At Agile Industries*, management has been focusing on
increasing compliance with IT security policies. The company
has sanctions (penalties) in place for employees who violate
policies. The company recently developed a training program
where employees read security training information and have
group discussion about the meaning of the policy. Here is an
excerpt from the training materials:
“As stated in our security policies, employees should not
share computer passwords with other coworkers. [Insert
statement focus here.] This applies equally to all employees.
“[Insert framed statement here.]”
Sam* is one of the employees at Agile Industries who has
completed the training program. While out of town, Sam gets
a call from a coworker, Bill. By the sound of his voice, Sam can
tell that Bill is under some stress. Bill tells Sam that he has to
get a project done right away to meet a deadline but that he
needs some information from Sam. Sam recalls that the in-
formation Bill needs is saved on the hard drive of Sam’s office
computer, which is not set up for remote access. Bill asks Sam
to share his password in order to access the needed infor-
mation for his report. [Insert neutralization statement here.]
Sam decides to go ahead and share his password with Bill.
1. No specific focus
2. Neutralization-Focused: Sometimes employees believe
that sharing passwords can be justified under certain cir-
cumstances without any real consequences. However,
sharing of passwords should not be justified for any reason.
A recent survey of our employees at Fast Lane showed that
over three-fourths would not share their password even if
they thought it might be justified by the circumstances.
3. Deterrence-Focused: Further, the policy states that those
who knowingly share their password with coworkers will
be reprimanded and will have a written warning put in
their employee file. Multiple warnings will result in
termination.
1. No specific framing
2. Negatively-Framed (consequence-oriented): While it may not
appear to be the case, there are often real consequences of
sharing passwords such as improper access to confidential in-
formation. Even seemingly honest employees gain access to
passwords for malicious intent. In other words, consequences
extend beyond the person disobeying the policy.
3. Positively-Framed (benefits-oriented): We appreciate your help
and support in this effort. Through employee compliance with this
policy, we can ensure the safety and security of our company.
Your efforts to support the company in this manner are not trivial.
1. (No Neutralization)
2. Defense of Necessity: Sam knows that Bill’s project is
critical to the success of their department. If the project
fails, there will be consequences not only for Bill, but also
for Sam. Sam is unable to get to the office today, so he
feels there is no other choice.
3. Denial of Injury: Sam knows that Bill is trustworthy and
feels that no harm would result from sharing his pass-
word with Bill this one time. Besides, he can change his
password a different day when he is back in the office.
4. Metaphor of the Ledger: Sam feels that he has been a very faithful and honest employee for several years. Considering his previous history with the company, he feels that it would not be a problem to share his password this one time.
*Company and individual names were different for each
scenario presented to participants.
Example Scenario
(Neutralization-Focused, Negatively-Framed, Denial of Injury)
At Fast Lane Construction, management has been focusing
on increasing compliance with IT security policies. The com-
pany has sanctions (penalties) in place for employees who
violate policies. The company recently developed a training
program where employees read security training information
and have group discussion about the meaning of the policy
and any questions employees may have. Below is an excerpt
from the training materials:
“As stated in our security policies, employees should not
share computer passwords with other coworkers. Sometimes
employees believe that sharing passwords can be justified
under certain circumstances without any real consequences.
However, sharing of passwords should not be justified for any
reason. A recent survey of our employees at Fast Lane showed
that over three-fourths would not share their password even if
they thought it might be justified by the circumstances. This
applies equally to all employees.
While it may not appear to be the case, there are often real con-
sequences of sharing passwords such as improper access to confi-
dential information. Even seemingly honest employees gain access to
passwords for malicious intent.”
Sam is one of the employees at Fast Lane Construction who
has completed the training program. While out of town, Sam
gets a call from a coworker, Bill. By the sound of his voice, Sam
can tell that Bill is under some stress. Bill tells Sam that he has
to get a project done right away to meet a deadline but that he
needs some information from Sam. Sam recalls that the in-
formation Bill needs is saved on the hard drive of Sam’s office
computer, which is not set up for remote access. Bill asks Sam
to share his password in order to access the needed infor-
mation for his report. Sam knows that Bill is trustworthy and
feels that no harm would result from sharing his password
with Bill this one time. Besides, he can change his password a
different day when he is back in the office. Sam decides to go
ahead and share his password with Bill.
Appendix B. Survey measures
Filter questions
Have you held a job in a workplace that had guidelines, work
rules, or policies for employees? YES/NO.
Have you held a job in which you used a computer for your
work? YES/NO.
[If participants answer ‘no’ to either, the survey ends]
Manipulation Check
Please select an answer for the following items as they relate
to the scenario above:
In this scenario, the training material clearly states that:
a. employees should never rationalize sharing passwords.
b. employees will be reprimanded for sharing passwords.
c. The training material does not specify either of the above
statements.
According to this scenario, the company motivates its em-
ployees to comply in the training material by:
a. stressing the consequences of sharing passwords.
b. encouraging employee support to ensure safety and se-
curity of the company.
c. The training material does not use either of the above
techniques.
How does Sam justify sharing his password in this scenario?
a. The scenario does not state that he justifies his behavior.
b. He believes that no harm will result from sharing his password.
c. He believes that sharing his password is necessary for the
success of his department.
d. He believes that because he has been a good employee for
many years he can share his password.
computers & security 39 (2013) 145–159
Appendix C. Notes on statistical procedure
selection
OLS regression is a preferred technique for the factorial
survey design (Rossi and Anderson, 1982; Shlay et al., 2005)
due to the ease of interpreting the coefficients. However,
OLS requires a normal distribution of the dependent vari-
able. Rossi and Anderson (1982) note that any multivariate
technique that fits the data can be used; they suggest that in
cases where a normal distribution is not present, logistic
regression is a suitable alternative. In security violation
research, the dependent variable often displays a skewed
distribution because of the sensitive nature of admitting
guilt to violating rules. The Kolmogorov–Smirnov (0.249; df = 257; p < 0.001) and Shapiro–Wilk (0.877; df = 257; p < 0.001) tests of normality both indicated that the distribution of our data is not normal. Fig. C.1 displays the distribution of the dependent variable for our study.
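The normality screening described above can be reproduced with standard library routines. Below is a minimal sketch using SciPy; the sample is a synthetic skewed stand-in for the 257 scenario responses, not the study's actual data:

```python
# Sketch: screening a dependent variable for normality before choosing
# between OLS and logistic regression. The data are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulate a skewed 1-5 intention score (most respondents cluster low),
# mimicking the reluctance to admit intent to violate policy.
dv = np.clip(1 + rng.exponential(scale=1.0, size=257), 1, 5)

# Kolmogorov-Smirnov compares the standardized sample to N(0, 1);
# Shapiro-Wilk tests the raw sample directly.
ks_stat, ks_p = stats.kstest((dv - dv.mean()) / dv.std(), "norm")
sw_stat, sw_p = stats.shapiro(dv)

# Small p-values on both tests indicate the DV is not normally
# distributed, so the OLS normality assumption is violated.
print(f"Kolmogorov-Smirnov: {ks_stat:.3f}, p = {ks_p:.4g}")
print(f"Shapiro-Wilk:       {sw_stat:.3f}, p = {sw_p:.4g}")
```

A rejection on either test is the signal, noted above, to abandon OLS in favor of a technique that tolerates the skewed distribution.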
Content Validity (Realism Check)
(1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree)
- I could imagine a similar scenario taking place at work.

Dependent variable measures (behavioral intention)
(1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree)
- In this situation, I would do the same as Sam.
- If I were Sam, I would have also shared my password.
- I think I would do what Sam did if this happened to me.
Demographic items
- I am: Male / Female
- My age is: 18–21 / 22–29 / 30–39 / 40–49 / 50–59 / 60+
- Years of work experience: 0–4 / 5–9 / 10–19 / 20+
- Level of education: Some high school / High school / Undergraduate degree / Graduate degree

Fig. C.1 – Distribution of the dependent variable (standardized).
Because the distribution of the data appears to be bimodal,
with a large group answering below the mean, and one group
answering above the mean, the data appear suitable for lo-
gistic regression. Logistic regression predicts a binary
response: in this case, whether the participant has intentions
to violate the policy or not. The dependent variable was
measured as the average score of three items ranging from
one to five. Thus, we categorized the responses as those with a
DV score higher than three (those who exhibit some intention
to violate) and those with a score of three or lower (those who
do not exhibit any intention to violate).
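The dichotomization rule just described is straightforward to express in code. The sketch below uses illustrative item scores, not the study's data:

```python
# Sketch: collapsing the 1-5 averaged intention score into a binary DV
# for logistic regression, using the cutoff described above.

def intends_to_violate(item_scores):
    """Return 1 if the mean of the intention items exceeds 3
    (the respondent exhibits some intention to violate); else 0."""
    mean_score = sum(item_scores) / len(item_scores)
    return 1 if mean_score > 3 else 0

# Illustrative responses to the three behavioral-intention items:
print(intends_to_violate([4, 5, 4]))  # mean 4.33 > 3 -> 1
print(intends_to_violate([3, 3, 3]))  # exactly 3 counts as no intention -> 0
print(intends_to_violate([2, 1, 2]))  # -> 0
```

Note that a neutral response on all three items (mean of exactly 3) falls on the "no intention" side of the cutoff, consistent with the categorization above.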
Another assumption of both OLS and logistic regression is
independence of errors. This assumption is violated with
repeated-measures designs, such as our survey requiring each
participant to respond to four separate scenarios, because
responses from the same subject are likely to be correlated.
Thus, the most appropriate technique to analyze our dataset
is repeated-measures logistic regression. Repeated-measures
logistic regression is done using the Generalized Estimating
Equations (GEE) approach (Allison, 1999), an extension of
generalized linear models that accounts for correlations
among observations from the same subject (Zeger and Liang,
1986). The parameter estimates of the model are interpreted
in a similar manner to those of traditional logistic regression.
references

Allison PD. Logistic regression using the SAS system: theory and application. Cary, NC, USA: SAS Institute; 1999.
Ames DR. Strategies for social inference: a similarity contingency model of projection and stereotyping in attribute prevalence estimates. Journal of Personality & Social Psychology 2004;87(5):573–85.
Anderson CL, Agarwal R. Practicing safe computing: a multimethod empirical examination of home computer user security behavioral intentions. MIS Quarterly 2010;34(3):613–43.
Andrich D. A rating formulation for ordered response categories. Psychometrika 1978;43(4):561–73.
Angst CM, Agarwal R. Adoption of electronic health records in the presence of privacy concerns: the elaboration likelihood model and individual persuasion. MIS Quarterly 2009;33(2):339–70.
Boss SR, Kirsch LJ, Angermeier I, Shingler RA, Boss RW. If someone is watching, I'll do what I'm asked: mandatoriness, control, and information security. European Journal of Information Systems 2009;18(2):151–64.
Bulgurcu B, Cavusoglu H, Benbasat I. Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness. MIS Quarterly 2010;34(3):523–48.
Butler M. Privileged password sharing: "root" of all evil. SANS Institute; 2012.
Cheng F-F, Wu C-S. Debiasing the framing effect: the effect of warning and involvement. Decision Support Systems 2010;49(3):328–34.
Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues for field settings. Chicago, IL, USA: Rand McNally; 1979.
Copes H. Societal attachments, offending frequency, and techniques of neutralization. Deviant Behavior 2003;24(2):101–27.
Cox D, Cox A. Communicating the consequences of early detection: the role of evidence and framing. Journal of Marketing 2001;65(3):91–103.
Crossler RE, Johnston AC, Lowry PB, Hu Q, Warkentin M, Baskerville R. Future directions for behavioral information security research. Computers & Security 2013;32(1):90–101.
D'Arcy J, Herath T. A review and analysis of deterrence theory in the IS security literature: making sense of the disparate findings. European Journal of Information Systems 2011;20(6):643–58.
D'Arcy J, Hovav A, Galletta D. User awareness of security countermeasures and its impact on information systems misuse: a deterrence approach. Information Systems Research 2009;20(1):79–98.
De La Haye A-M. A methodological note about the measurement of the false-consensus effect. European Journal of Social Psychology 2000;30(4):569–81.
Donovan RJ, Jalleh G. Positively versus negatively framed product attributes: the influence of involvement. Psychology & Marketing 1999;16(7):613–30.
Fagley NS, Coleman JG, Simon AF. Effects of framing, perspective taking, and perspective (affective focus) on choice. Personality and Individual Differences 2010;48(3):264–9.
Flynn FJ, Wiltermuth SS. Who's with me? False consensus, brokerage, and ethical decision making in organizations. Academy of Management Journal 2010;53(5):1074–89.
Furnell S, Clarke N. Power to the people? The evolving recognition of human aspects of security. Computers & Security 2012;31(8):983–8.
Grewal D, Gotlieb J, Marmorstein H. The moderating effects of message framing and source credibility on the price-perceived risk relationship. Journal of Consumer Research 1994;21(1):145–53.
Guo KH, Yuan Y, Archer NP, Connelly CE. Understanding nonmalicious security violations in the workplace: a composite behavior model. Journal of Management Information Systems 2011;28(2):203–36.
Harrington SJ. The effect of codes of ethics and personal denial of responsibility on computer abuse judgments and intentions. MIS Quarterly 1996;20(3):257–78.
Herath T, Rao HR. Encouraging information security behaviors in organizations: role of penalties, pressures and perceived effectiveness. Decision Support Systems 2009;47(2):154–65.
Herzog S. The relationship between public perceptions of crime seriousness and support for plea-bargaining practices in Israel: a factorial survey approach. The Journal of Criminal Law & Criminology 2003;94(1):103–31.
Hu Q, Xu Z, Dinev T, Ling H. Does deterrence work in reducing information security policy abuse by employees? Communications of the ACM 2011;54(6):54–60.
Ingram J, Hinduja S. Neutralizing music piracy: an empirical examination. Deviant Behavior 2008;29(4):334–66.
Jasso G. Factorial survey methods for studying beliefs and judgments. Sociological Methods & Research 2006;34(3):334–423.
Johnston AC, Warkentin M. Fear appeals and information security behaviors: an empirical study. MIS Quarterly 2010;34(3):549–66.
Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica 1979;47(2):263–91.
Karjalainen M, Siponen M. Toward a new meta-theory for designing information systems (IS) security training approaches. Journal of the Association for Information Systems 2011;12(8):518–55.
Kerlinger F. Foundations of behavioral research. 2nd ed. London, UK: Holt, Rinehart & Winston; 1973.
Klockars C. The professional fence. New York: Free Press; 1974.
Lanza ML. Technical notes: development of a vignette: a data collection instrument about patient assault. Western Journal of Nursing Research 1988;10(3):346–51.
Lauder W. Factorial survey methods: a valuable but under-utilized research method in nursing research? Nursing Times Research 2002;7(1):35–43.
Lee J, Lee Y. A holistic model of computer abuse within organizations. Information Management & Computer Security 2002;10(2):57–63.
Levin IP, Schneider SL, Gaeth GJ. All frames are not created equal: a typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes 1998;76(2):149–88.
Liang H, Xue Y. Avoidance of information technology threats: a theoretical perspective. MIS Quarterly 2009;33(1):71–90.
Liang H, Xue Y. Understanding security behaviors in personal computer usage: a threat avoidance perspective. Journal of the Association for Information Systems 2010;11(7):394–413.
Liginlal D, Sim I, Khansa L. How significant is human error as a cause of privacy breaches? An empirical study and a framework for error management. Computers & Security 2009;28(3–4):215–28.
Lyons CJ. Individual perceptions and the social construction of hate crimes: a factorial survey. Social Science Journal 2008;45(1):107–31.
Minor WW. Techniques of neutralization: a reconceptualization and empirical examination. Journal of Research in Crime and Delinquency 1981;18(2):295–318.
Myyry L, Siponen M, Pahnila S, Vartiainen T, Vance A. What levels of moral reasoning and values explain adherence to information security rules? An empirical study. European Journal of Information Systems 2009;18(2):126–39.
Paternoster R, Simpson S. Sanction threats and appeals to morality: testing a rational choice model of corporate crime. Law & Society Review 1996;30(3):549–84.
Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP. Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology 2003;88(5):879–903.
Ponemon Institute. 2009 annual study: cost of data breach; 2010.
Ponemon Institute. 2011 cost of data breach study; 2012.
Ponemon Institute. 2013 state of the endpoint; 2012.
Posey C, Bennett RJ, Roberts TL. Understanding the mindset of the abusive insider: an examination of insiders' causal reasoning following internal security changes. Computers & Security 2011;30(6–7):486–97.
Post GV, Kagan A. Evaluating information security tradeoffs: restricting access can interfere with user tasks. Computers & Security 2007;26(3):229–37.
Puhakainen P, Siponen M. Improving employees' compliance through information systems security training: an action research study. MIS Quarterly 2010;34(4):757–88.
Rennie L. Research note: detecting a response set to Likert-style attitude items with the rating model. Educational Research and Perspectives 1982;9(1):114–8.
Roberts P. Hacking group TeaMpoisoN claims breach of T-Mobile. ThreatPost; January 16, 2012 [accessed 18.02.13] from http://threatpost.com/en_us/blogs/hacking-group-teamp0ison-claims-breach-t-mobile-011612.
Rossi PH, Anderson AB. The factorial survey approach: an introduction. In: Rossi PH, Nock SL, editors. Measuring social judgments: the factorial survey approach. Beverly Hills, CA, USA: Sage; 1982. p. 15–67.
Scott LB, Curbow B. The effect of message frames and CVD risk factors on behavioral outcomes. American Journal of Health Behavior 2006;30(6):582–97.
Seron C, Pereira J, Kovath J. How citizens assess just punishment for police misconduct. Criminology 2006;44(4):925–60.
Shlay AB, Tran H, Weinraub M, Harmon M. Teasing apart the child care conundrum: a factorial survey analysis of perceptions of child care quality, fair market price and willingness to pay by low-income, African American parents. Early Childhood Research Quarterly 2005;20(4):393–416.
Shropshire JD, Warkentin M, Johnston AC. Impact of negative message framing on security adoption. The Journal of Computer Information Systems 2010;51(1):41–51.
Siponen M. A conceptual foundation for organizational information security awareness. Information Management & Computer Security 2000;8(1):31–41.
Siponen M, Vance A. Neutralization: new insights into the problem of employee information systems security policy violations. MIS Quarterly 2010;34(3):487–502.
Straub DW, Nance WD. Discovering and disciplining computer abuse in organizations: a field study. MIS Quarterly 1990;14(1):45–60.
Straub DW, Welke RJ. Coping with systems risk: security planning models for management decision making. MIS Quarterly 1998;22(4):441–69.
Sykes G, Matza D. Techniques of neutralization: a theory of delinquency. American Sociological Review 1957;22(6):664–70.
Symantec Corporation. More than half of ex-employees admit to stealing company data according to new study; 2009.
Taylor BJ. Factorial surveys: using vignettes to study professional judgment. British Journal of Social Work 2006;36(7):1187–207.
Trevino LK. Experimental approaches to studying ethical-unethical behavior in organizations. Business Ethics Quarterly 1992;2(2):121–36.
Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science 1981;211(4481):453–8.
van Buiten M, Keren G. Speaker–listener incompatibility: joint and separate processing in risky choice framing. Organizational Behavior and Human Decision Processes 2009;108(1):106–15.
Wall DS. Organizational security and the insider threat: malicious, negligent and well-meaning insiders. Symantec; 2011.
Warkentin M, Johnston AC, Shropshire J. The influence of the informal social learning environment on information privacy policy compliance efficacy and intention. European Journal of Information Systems 2011;20(3):267–84.
Warkentin M, Willison R. Behavioral and policy issues in information systems security: the insider threat. European Journal of Information Systems 2009;18(2):101–5.
Warkentin M, Willison R, Johnston AC. Examining the influence of disgruntlement on employee computer abuse intentions: insights from justice, deterrence, and rationalization perspectives. MIS Quarterly, under review.
Weber J. Scenarios in business ethics research: review, critical assessment, and recommendations. Business Ethics Quarterly 1992;2(2):137–60.
Whitman ME. Enemy at the gate: threats to information security. Communications of the ACM 2003;46(8):91–5.
Willison R. Understanding the perpetration of employee computer crime in the organisational context. Information and Organization 2006;16(4):304–24.
Willison R, Warkentin M. The expanded security action cycle: a temporal analysis "left of bang". Dewald Roode Information Security Workshop, IFIP WG8.11/11.13. Boston, MA; 2010.
Willison R, Warkentin M. Beyond deterrence: an expanded view of employee computer abuse. MIS Quarterly 2013;37(1):1–20.
Wilson DK, Wallston KA, King JE. Effects of contract framing, motivation to quit, and self-efficacy on smoking reduction. Journal of Applied Social Psychology 1990;20(7):531–47.
Wilson T. CISOs say insiders are the greatest threat to data. Dark Reading; 2009 [accessed 19.02.13] from http://www.darkreading.com/security/news/218100924.
Xue Y, Liang H, Wu L. Punishment, justice, and compliance in mandatory IT settings. Information Systems Research 2011;22(2):400–14.
Zeger SL, Liang KY. Longitudinal data analysis using generalized linear models. Biometrika 1986;73(1):13–22.
Zhong C-B. The ethical dangers of deliberative decision making. Administrative Science Quarterly 2011;56(1):1–25.
Jordan B. Barlow is a doctoral student at the Kelley School of
Business, Indiana University. He is a graduate of the Masters of
Information Systems program at Brigham Young University
where he was enrolled in the Information Systems Ph.D. Prepa-
ration Program. His research interests include collaboration, CMC,
virtual teams, and behavioral IT security. He has published
research in MIS Quarterly, Communications of the AIS, and Group
Decision and Negotiation.
Merrill Warkentin is Professor and the Richard Puckett Notable
Scholar in the College of Business at Mississippi State University.
His research has appeared in MIS Quarterly, Decision Sciences, European Journal of Information Systems, Decision Support Systems, Computers & Security, Information Systems Journal, and others. He is the
AIS Departmental Editor for IS Security & Privacy, the Chair of the
IFIP Working Group on IS Security Research, and Track Co-Chair for
the ICIS 2013 Security Track. His primary research focus is in
behavioral IS security issues. He is an AE for MIS Quarterly, European Journal of Information Systems, and Information & Management.
Dustin Ormond is a doctoral student at Mississippi State Univer-
sity. He received both his bachelor’s and master’s degrees in In-
formation Systems Management from Brigham Young University.
His current research interests include information security and
privacy, affective computing, fraud, deception, and mobile
technologies.
Alan R. Dennis is Professor and John T. Chambers Chair of
Internet Systems in the Kelley School of Business at Indiana
University. He was a Senior Editor at MIS Quarterly, and has served
as the Publisher of MIS Quarterly Executive since its founding. Prof.
Dennis has written more than 150 research papers, and has won
numerous awards for his theoretical and applied research. His
research focuses on team collaboration; neuro IS; and the use of
the Internet to improve business and education. He was made a
Fellow of the AIS in 2012.