Situational Moral Disengagement: Can the Effects
of Self-Interest be Mitigated?
Jennifer Kish-Gephart · James Detert · Linda Klebe Treviño · Vicki Baker · Sean Martin
Received: 2 January 2013 / Accepted: 20 September 2013 / Published online: 12 October 2013
© Springer Science+Business Media Dordrecht 2013
Abstract Self-interest has long been recognized as a
powerful human motive. Yet, much remains to be under-
stood about the thinking behind self-interested pursuits.
Drawing from multiple literatures, we propose that situa-
tions high in opportunity for self-interested gain trigger a
type of moral cognition called moral disengagement that
allows the individual to more easily disengage internalized
moral standards. We also theorize two countervailing for-
ces—situational harm to others and dispositional conscien-
tiousness—that may weaken the effects of personal gain on
morally disengaged reasoning. We test our hypotheses in
two studies using qualitative and quantitative data and
complementary research methods and design. We demon-
strate that when personal gain incentives are relatively
moderate, reminders of harm to others can reduce the like-
lihood that employees will morally disengage. Furthermore,
when strong personal gain incentives are present in a situ-
ation, highly conscientious individuals are less apt than their
counterparts to engage in morally disengaged reasoning.
Keywords Moral disengagement · Motivated cognition · Self-interest · Unethical decision making
Introduction
Self-interest has long been recognized as a powerful human
motive (Miller 1999; Moore and Loewenstein 2004; Sen
1977) that explains much of human survival and success.
Yet, when left unchecked, self-interest motives have been
blamed for many ethical scandals, including the 2008
financial crisis (McLean and Nocera 2010). While behav-
ioral ethics researchers recognize the importance and
potential negative consequences of self-interest (e.g.,
Moore and Loewenstein 2004; Schweitzer et al. 2004),
much remains to be understood about the thinking behind
individuals’ self-interested pursuits. Prior research, for
example, describes the relationship between self-interest
and lying as simply: ‘‘people lie when doing so benefits
them’’ (Grover and Hui 1994, p. 13). Such explanations
implicate self-interest in a self-evident manner, but they
fail to provide a nuanced understanding of the thought
processes that underlie such self-interested motivations
(Bersoff 1999; Wang and Murnighan 2011). In this set of
studies, we propose that situations high in opportunity for
self-interested gain are likely to trigger a type of moral
cognition called moral disengagement that allows the
individual to more easily disengage internalized moral
standards. We then propose that two countervailing forces
(one situational and one individual) may reduce self-
interest’s negative effects on moral cognition.
J. Kish-Gephart (corresponding author)
University of Arkansas – Fayetteville, 407 Business Building,
Fayetteville, AR 72701, USA
e-mail: jgephart@walton.uark.edu
J. Detert
Cornell University, 342 Sage Hall, Ithaca, NY 14853, USA
e-mail: jrd239@cornell.edu
L. K. Treviño
The Pennsylvania State University, 402 Business Building,
University Park, PA 16801, USA
e-mail: ltrevino@psu.edu
V. Baker
Albion College, 611 E. Porter Street, Albion, MI 49224, USA
e-mail: vbaker@albion.edu
S. Martin
Cornell University, 201 Sage Hall, Ithaca, NY 14853, USA
e-mail: srm238@cornell.edu
J Bus Ethics (2014) 125:267–285
DOI 10.1007/s10551-013-1909-6
Bandura’s moral disengagement theory may provide
valuable insight into understanding individuals’ responses
to personal gain opportunities that trigger self-interested
motives. According to Bandura’s (1986) moral disen-
gagement theory, unethical behavior results from failed
activation of self-regulatory processes. People internalize
behavioral standards (e.g., societal values) via socializa-
tion, and these standards are used to regulate and guide
behavior. The theory proposes that if the opportunity to
engage in unethical behavior arises, moral standards are
activated and self-regulatory mechanisms (e.g., guilt and
self-censure) constrain the individual from engaging in the
behavior. However, Bandura identified eight cognitive
mechanisms—or ‘‘moral disengagement mechanisms’’—
that can deactivate this moral self-regulatory process,
thereby preventing self-censure or guilt in the face of
unethical behavior. Bandura’s eight moral disengagement
mechanisms include moral justification, euphemistic
labeling, and advantageous comparison that help individ-
uals reduce the perceived offensiveness of their actions by
depicting them as justified ‘‘in the service of valued social
or moral purposes’’ (Bandura et al. 2000, p. 58), using
neutralizing terminology (e.g., referring to ‘‘stealing’’ as
‘‘borrowing’’), or contrasting them with more egregious
behavior, respectively. Other mechanisms—displacement
of responsibility, diffusion of responsibility, and distortion
of consequences—serve to minimize one’s own role in
harmful actions, or to minimize the consequences of those
actions. The final two disengagement mechanisms involve
blaming the victim for his/her plight (attribution of blame)
or removing the victim’s status as a valued human being
(dehumanization). In identifying these eight cognitive
mechanisms, Bandura effectively brought together under one theoretical
umbrella a number of rationalization and neutralization
mechanisms discussed and studied by others
(Ashforth and Anand 2003; Kelman and Hamilton 1989;
Sykes and Matza 1957; Tenbrunsel and Messick 1999).
Drawing from motivated cognition research, we know
that individuals not only desire outcomes that benefit
themselves, but also desire to appear consistent with their
values and to believe that they are good and moral people
(Bandura 1986; Batson et al. 2003; Cooper 2001; Haidt
2001; Haidt and Kesebir 2010; Jones and Ryan 1997; Kunda
1990; Tsang 2002). In the face of personal gain opportuni-
ties, then, individuals will likely look for ways to both
achieve self-interested outcomes and to maintain the
appearance of morality (to themselves and others). Moral
disengagement mechanisms provide an opportunity to do
just that: they allow individuals to act on self-interested
opportunities while also avoiding self-censure or guilt and
preserving a positive self-image. Following this logic, we
propose that personal gain situations are likely to motivate
the use of moral disengagement mechanisms. Indeed,
statements such as ‘‘everyone is doing it’’ (diffusion of
responsibility) can be found in recent media reports of
portfolio managers accused of insider trading (Glovin et al.
2011) and in accounts from some who lied about their
income to get mortgage applications approved (Nocera
2011; Smith 2011). In addition, Bernie Madoff employed
the moral disengagement mechanisms of diffusion of
responsibility and advantageous comparison in his descrip-
tion of the largest Ponzi scheme in history, claiming that
‘‘the whole government is a Ponzi scheme’’ (Smith 2011).
The potential for self-interested situations to motivate
the use of moral disengagement mechanisms is troubling
not only because prior research has demonstrated a rela-
tionship between morally disengaged thinking and unethi-
cal behavior (e.g., Detert et al. 2008; Moore et al. 2012),
but also because reward systems focused on personal gain
are ubiquitous in work organizations. Indeed, workers from
the factory floor to Wall Street traders to CEOs are often
offered bonuses to incentivize high performance, and it is
unlikely that such incentives will be eliminated anytime
soon. Thus, beyond knowing if personal gain opportunities
trigger morally disengaged reasoning, it is especially
important to consider what factors, if any, are strong
enough to attenuate the effect of personal gain on this type
of problematic cognition.
One potentially powerful countervailing force against
the effects of personal gain on moral disengagement is
situational harm to others. Rooted in evolution and rein-
forced through socialization, human beings have developed
a natural tendency to automatically recognize and respond
when others are being harmed (de Waal 2008; Hoffman
2000). Such empathetic responses are considered ‘‘the
biological substrate for prosocial behavior in humans’’
(Eisenberg 2000, p. 684). Behavioral ethics researchers
have likewise identified harm to others as important to
ethical decision making (Batson et al. 2003; Graham et al.
2011; Haidt 2001; Jones 1991). Collectively, this work
suggests that harm to others is a likely candidate to coun-
teract the effect of personal gain on morally disengaged
reasoning. In addition, we answer calls to take a more
interactionist approach to studying ethics and moral disen-
gagement (Moore et al. 2012; Trevin
˜o1986) by proposing
both situational harm to others and conscientiousness (a
disposition) as countervailing forces on morally disengaged
reasoning in response to self-interest furthering situations.
By tapping into responsibility and self-discipline (Cawley
et al. 2000; McAdams 2009; McCrae and John 1992)—two
characteristics that likely affect one’s ability to self-regulate
in the face of temptation—conscientiousness may lessen the
likelihood for an individual to morally disengage in the face
of a personal gain opportunity.
This research makes two major contributions. First, by
elucidating the thinking associated with personal gain, we
are able to increase understanding of the situational and
dispositional factors that may reduce the effects of self-
interest on morally disengaged reasoning. Second, our
focus on moral disengagement in response to specific types
of situations extends prior work that has primarily treated
moral disengagement as dispositional (e.g., Claybourn
2011; Detert et al. 2008; Duffy et al. 2005; Hinrichs et al.
2012; Moore et al. 2012).
We examine our research questions through two mixed-
methods studies. In Study 1, via analysis of participants’
open-ended responses to ethical decision-making vignettes,
we explore how situations characterized by personal gain
relate to the prevalence of morally disengaged reasoning.
To our knowledge, Study 1 represents the first moral dis-
engagement study to use qualitative data. In Study 2, we
use experimental methods to investigate the relationship
between personal gain and moral disengagement, and
countervailing forces that may attenuate that relationship.
Study 1
We designed Study 1 to examine whether situations char-
acterized by personal gain are indeed related to an increased
use of moral disengagement mechanisms. Specifically,
Study 1 considers the following hypothesis, which is con-
sistent with the broader literature on self-interest and ethics-
related cognition but has yet to be empirically examined:
Hypothesis 1 The level of potential for personal gain in a
situation will be positively related to the use of morally
disengaged reasoning in response to that situation.
To test this hypothesis, we coded and analyzed quali-
tative data that we collected as part of a larger study.
During the final wave of the project, respondents were
asked to provide short narratives to explain their thinking
about their (un)willingness to engage in the behaviors
described in a set of scenarios. Based on Bandura’s (1986)
typology of eight moral disengagement mechanisms, we
developed a coding scheme and systematically applied it to
these free response descriptions. We expected to find that
the level of personal gain in a scenario would be positively
related to respondents’ use of moral disengagement
mechanisms (Hypothesis 1). Unexpectedly, during our
initial coding to examine this hypothesis, we discovered
suggestive patterns in the responses that prompted addi-
tional analyses that are described below.
Sample and Procedure
Undergraduate students at a large Northeastern public
university were invited to take part in a multi-wave survey
study. Participants were assured confidentiality, and
received course extra credit for participation. Respondents
were presented with 12 ethical decision-making scenarios,
and were asked to indicate how likely they would be to
engage in the behavior described in each. The scenarios
included such actions as loading another’s software onto
one’s computer for use without payment, retaining extra
change received at a coffee shop, dumping toxic chemicals
down the sink instead of properly disposing of them, and
staying quiet despite knowledge that the dry cleaning
business where one works has contaminated the land
directly adjacent to a day care center. After indicating their
willingness to engage in each behavior, four scenarios,
randomly selected from the original 12, reappeared for
each respondent. Respondents were then asked to type a
short narrative description of why they would be (un)likely
to engage in the behavior depicted in each of the four
scenarios they saw for the second time.
The final sample consists of the 275 respondents who
provided complete data. This sample was predominately
male (61.5 %), raised in the United States (93 %), and
Caucasian (86 %). Respondents had, on average, 18.4
months of work experience (SD =13.2 months). There
were no significant differences in native country, ethnicity,
or work experience between this study’s 275 respondents
and others who provided some, but not all, of the relevant
data. The final sample did have significantly more male
representation (61.5 vs. 49 %) than the respondents who
were removed due to incomplete data.
Measures
Personal Gain in Ethically-Charged Scenarios
To assess the relationship between the use of moral disen-
gagement mechanisms and the level of personal gain in each
scenario, we first needed to compute a personal gain score
for each scenario. We therefore recruited a panel of 20
experts, all with relevant graduate training (including
research-active management faculty and senior Ph.D. stu-
dents), and received their ratings on the level of personal
gain in each of the 12 scenarios. These experts had an
average of 12.4 years of management or business ethics
teaching experience and an average of 23.7 academic pub-
lications. Each rater was asked to rate on a five-point Likert
scale, ‘‘the extent to which you agree/disagree that the sce-
nario represents a behavior that is in the immediate self-
interest of the actor involved.’’ Personal gain ratings ranged
from 2.75 to 4.85, indicating significant variation in how
much personal gain was involved in each scenario according
to the panel of experts. Using ICC(2) (Bartko 1976; LeBreton and Senter 2008), reliability among judges was .97.
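As a point of reference, this index can be reproduced from the 12 × 20 matrix of scenario-by-rater scores. The sketch below is a minimal illustration, not the study's computation: it assumes a NumPy array with one row per scenario and one column per rater, uses the Shrout–Fleiss ICC(2,k) formulation (the reliability of the mean of the k raters, which is how ICC(2) is typically read following LeBreton and Senter 2008), and fabricates illustrative ratings rather than the panel's actual data.

```python
import numpy as np

def icc2k(ratings):
    """Two-way random-effects ICC for the mean of k raters (Shrout-Fleiss ICC(2,k)).

    ratings: 2-D array, rows = targets (scenarios), columns = raters.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # scenario means
    col_means = ratings.mean(axis=0)   # rater means

    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between scenarios
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Illustrative data: 12 scenario "true" levels rated by 20 experts with noise
rng = np.random.default_rng(0)
true_levels = np.linspace(2.75, 4.85, 12)
ratings = true_levels[:, None] + rng.normal(0, 0.4, size=(12, 20))
print(round(icc2k(ratings), 2))
```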
Moral Disengagement Mechanisms Used in Response
to Ethically-Charged Scenarios
Because each of the 275 participants provided narrative
responses to four (of the 12) scenarios, a total of 1,100
statements (with an average of 92 per scenario) were
available for analysis. To code this narrative data, three of
the authors iteratively developed a coding scheme and
coding instruction document using Bandura’s (1986; Smith
2011) descriptions of the eight moral disengagement
mechanisms. A basic definition was provided for each
mechanism, and several illustrations of the specific types of
language denoting each mechanism were listed. For
example, advantageous comparison was defined as making
a behavior seem of little consequence by comparing it to a
much worse action; it is illustrated by statements such as,
‘‘Eating a few french fries that are left over is not as bad as
cooking a hamburger for myself at work’’ and ‘‘Just asking
friends about topics covered is not as bad as looking at the
actual exam in advance.’’
To further hone the definitions and their application to
the data, we randomly selected approximately two hundred
statements and had three of the authors independently code
this subset to identify any passages that represented one or
more of the eight moral disengagement mechanisms. These
coding results were compared and discussed in order to
resolve discrepancies, and the process was repeated itera-
tively until we reliably produced the same coding results
while working independently. Two of the authors (includ-
ing one who was not involved in the development or
refinement of the coding scheme) then used the finalized
coding scheme to independently code all 1,100 narrative
responses for instances of the eight moral disengagement
mechanisms. Moral disengagement mechanism(s) was
coded as 1 if one (or more) mechanism(s) was(were)
identified in the respondents’ narrative explanation for each
scenario, and 0 if absent. Inter-rater reliability between
these two coders for the entire sample of qualitative data,
calculated using Cohen’s Kappa, was .98. The two coders
subsequently discussed and reached agreement on codes
for the few discrepant cases.
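For reference, agreement of this kind can be computed directly from the two coders' 0/1 judgments. The snippet below is a minimal sketch using scikit-learn's implementation of Cohen's kappa; the short binary vectors are illustrative stand-ins, not the study's 1,100 actual codes.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative binary codes (1 = at least one moral disengagement mechanism present)
coder_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
coder_b = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")
```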
Findings
As illustrated in Table 1, seven of the eight specific
mechanisms of moral disengagement were identified in the
1,100 explanations of situation-specific reasoning provided
by the respondents (no examples of dehumanizing state-
ments were identified). More specifically, we found that the
frequency of the use of moral disengagement mechanisms
varied significantly across the 12 scenarios—from zero
instances of moral disengagement identified in explana-
tions of responses to two scenarios (i.e., situational moral
disengagement identified 0 % of the time) to a high of
nearly three quarters of the responses to a scenario (74 %)
containing evidence of one or more of the moral disen-
gagement mechanisms.
To determine whether respondents’ use of moral dis-
engagement mechanisms is related to the levels of personal
gain in the scenarios, we examined the bivariate correla-
tions between the averages of the experts’ ratings of per-
sonal gain in each scenario and the use of moral
disengagement mechanisms in each of the 12 scenarios
(measured as a percentage). Consistent with our expecta-
tions for Hypothesis 1, the level of personal gain in the 12
scenarios was strongly and positively related (r=.69; p<.01) to the prevalence of morally disengaged reasoning. For example, in situations high in potential personal
gain—such as receiving extra change from a cashier—
respondents commonly used morally disengaged state-
ments such as ‘‘[the company] is a huge corporation, and it
won’t hurt them in the long run to lose ten bucks’’ (dis-
tortion of consequences), or ‘‘It’s not my fault the cashier
made the mistake’’ (attribution of blame). Common trends
found in narratives for the situations high in personal gain
included: ‘‘It doesn’t hurt anyone’’ (distortion of conse-
quences), ‘‘Why should I have to correct their mistake’’
(attribution of blame), and ‘‘Everybody is doing it’’ (dif-
fusion of responsibility).
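Because this analysis reduces to a bivariate correlation across the 12 scenarios, its form can be illustrated in a few lines. The vectors below are made-up stand-ins for the expert-rated personal gain means and the proportion of morally disengaged narratives per scenario; only the shape of the computation mirrors the study.

```python
from scipy.stats import pearsonr

# Illustrative scenario-level data (12 scenarios): expert-rated personal gain (1-5)
# and the share of narratives per scenario containing moral disengagement (0-1).
gain_rating = [2.75, 3.0, 3.2, 3.4, 3.5, 3.7, 3.9, 4.1, 4.3, 4.5, 4.7, 4.85]
md_rate     = [0.00, 0.00, 0.05, 0.10, 0.18, 0.22, 0.30, 0.41, 0.47, 0.55, 0.63, 0.74]

r, p = pearsonr(gain_rating, md_rate)
print(f"r = {r:.2f}, p = {p:.3f}")
```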
Table 1 Sample moral disengagement statements from participants in Study 1

Moral justification: ‘‘If there is a surplus at the end of the day which is thrown away, then I would consider eating without paying as I am doing them a favor.’’
Euphemistic labeling: ‘‘I most likely would be inclined to borrow the software [that someone else purchased] and load it onto my computer.’’
Advantageous comparison: ‘‘I would likely [look at a copy of the exam from an earlier class section]…I would not consider it as bad as memorizing the answers like many of my peers were doing.’’
Displacement of responsibility: ‘‘I might be likely to do this because it is my BOSS that is asking me to do this.’’
Attribution of blame: ‘‘Since I was already a few blocks away and it was the cashier's own mistake, I don't feel it is my responsibility to correct it.’’
Diffusion of responsibility: ‘‘[I would be very likely to copy software because] everyone copies software, CDs, DVDs, etc.’’
Distortion of consequences: ‘‘I would [deceptively pose as a student researcher] because it's not harming anyone.’’
Additional Analyses
To further understand the nature of situationally induced
moral disengagement, we performed two additional anal-
yses based on patterns that emerged during our initial
coding. First, coders noticed that the scenarios that pro-
duced little or no moral disengagement involved harm to
others, suggesting a situational characteristic that appeared
to reduce the likelihood of moral disengagement. We asked
the same expert panel to rate the scenarios for the extent to
which they involved harm to others¹ and, indeed, found the
correlation between the panel’s rating of the level of harm
to others in each scenario and the detection of morally
disengaged respondent reasoning in the twelve scenarios to
be negative and significant at r=-.58 (p<.05).
Coders also noted that when harm to others was salient,
respondents seemed to invoke moral standards rather than
moral disengagement mechanisms. We thus re-coded the
1,100 narratives for statements reflecting moral standards,
such as ‘‘I would not do this because it’s not fair’’ (fairness
standard) and ‘‘That could really hurt the person’’ (harm
standard). Moral standards were coded as one if present
and zero if not present. Consistent with the observed pat-
tern, we found that the level of harm to others present in the
scenarios (as rated by the expert panel) was strongly and
positively related to the participants’ invocation of moral
standards (r=.68; p=.01). We also found that the
invocation of moral standards in the narrative data was
strongly negatively related to the level of personal gain in
the scenarios (r=-.67; p=.02). In fact, respondents’
explanations in the high personal gain scenarios contained
almost no invocation of moral standards. Furthermore,
consistent with moral disengagement theory (Bandura
1986), the invocation of moral standards and the use of
moral disengagement mechanisms were strongly and negatively
correlated (r=-.91; p<.01). This finding suggests that one does not use
morally disengaged reasoning when one’s internal moral
regulation system has been activated (e.g., in this case, by
recognition of salient harm to others in a situation). And,
the pattern of findings suggests that harm to others, by
evoking and bringing internal moral standards to con-
sciousness, strongly attenuates the ability of situations,
even those involving personal gain, to evoke morally dis-
engaged reasoning.
Finally, the coders noticed that participants tended to
use similar types of moral disengagement in response to the
particular scenarios. Based on this observation, we coded
respondent explanations for the specific moral disengage-
ment mechanisms used by participants (e.g., advantageous
comparison, displacement of responsibility, attribution of
blame, etc.). In other words, rather than focusing on whe-
ther or not any moral disengagement mechanism was
present in each narrative, we coded for which specific
moral disengagement mechanisms were used in each sce-
nario. As illustrated in Table 2, different scenarios tended
to trigger different types of morally disengaged reasoning.
For example, where respondents agreed that they (like the
focal actor in the scenario) would pose as a student to get
confidential information about a competitor’s product, they
tended to explain this decision using displacement of
responsibility (n=13) more often than all other moral
disengagement mechanisms combined (n=5), reasoning,
for example that, ‘‘I would do what I was told by my boss
because it is part of my job’’ and ‘‘My boss orders me to get
the information, this makes me have to get the information
in some way.’’
Discussion
Study 1 offers three main contributions. First, by demon-
strating a link between personal gain and moral disen-
gagement in response to particular scenarios, Study 1
provides preliminary support for our hypothesis that moral
disengagement is triggered by personal gain situations and
thus extends prior work that has treated moral disengage-
ment as a general propensity in individuals (e.g., Detert
et al. 2008; Duffy et al. 2005; Moore et al. 2012). Fur-
thermore, through induction and additional analyses, the
results also suggest that situational harm to others attenu-
ates the tendency to morally disengage. We found that
when situations are characterized by high harm to others
(across levels of situational personal gain), individuals
tended to invoke moral standards and use less morally
disengaged reasoning.
Second, as an alternative to the normal survey-based
approach to studying moral disengagement (e.g., Bandura
et al. 2001; Detert et al. 2008), Study 1 examined open-
ended written responses to a range of ethically-charged
scenarios. This unique design allowed us to systematically
code and find evidence in qualitative text data that indi-
viduals do use morally disengaged reasoning when facing
morally charged situations.
Third, our results suggest that specific moral disen-
gagement mechanisms are used more often in some types
of situations than in others. This suggests that, as the study
of the situational determinants of moral disengage-
ment develops, researchers may wish to tap the most
theoretically-relevant, specific mechanisms of moral
disengagement.
¹ Raters assessed (on a five-point Likert scale), ‘‘the extent to which you agree/disagree that the scenario represents a behavior that involves harm to others.’’ Harm ratings ranged from 2.60 to 4.47 across the scenarios; ICC(2) was .95 (Bartko 1976; LeBreton and Senter 2008).
The conclusions we can draw from Study 1 are tempered
by design limitations. First, the method used to generate
qualitative responses involved varying the level of personal
gain across situations rather than carefully manipulating
personal gain in a single situation where all else remained
constant. This prevents us from claiming a causal link
between personal gain and the use of moral disengagement
mechanisms. Second, the use of a vignette methodology
limits the level of realism experienced by participants
(McGrath 1982). Lastly, the observed patterns and addi-
tional analyses provide suggestive evidence of the inhib-
iting role of harm to others. But, our design did not allow
for a full assessment of its relationships with personal gain
and moral disengagement. We designed Study 2 with these
limitations in mind.
Study 2
In Study 2, we placed participants in an experimental
simulation (McGrath 1982) designed to mimic real-world
circumstances. This design allows us to not only examine
the effects of common incentives on morally disengaged
reasoning (i.e., the effect of levels of personal gain on
moral disengagement—Hypothesis 1), but also to manip-
ulate situational harm to others. Before describing the
methods in detail, we draw on the preliminary findings
from Study 1 and prior theory to formally hypothesize the
attenuating effect of situational harm to others on morally
disengaged reasoning. We further propose that conscien-
tiousness, a potential dispositional countervailing force,
will also attenuate the personal gain—moral disengage-
ment relationship.
Situational Harm to Others as a Countervailing Force
According to prior research, human beings—through evo-
lution and socialization—have learned to automatically
recognize and respond when others are being harmed. An
evolutionary perspective argues that human beings have
developed a natural altruistic impulse and empathetic
concern for others because such concern supports cooper-
ation and survival in social groups (Haidt and Kesebir
2010; Hoffman 2000; Maruna and Copes 2005). In addi-
tion, as individuals are socialized from childhood onward,
social rules including the directive to ‘‘do no harm’’
become internalized and affect moral motivations (Hoff-
man 2000). Behavioral ethicists have emphasized the
importance of ‘‘harm to others’’ considerations in ethical
decision making (Haidt 2001; Jones 1991). Haidt (2001)
and others (Batson et al. 2003; Graham et al. 2011) have
identified concern about harm to others as one of five key
evolution-based human moral motivators. Indeed, Haidt’s
(2001) highly-cited intuitionist approach to ethical decision
making and the argued instinctive response to avoid
causing harm to others is increasingly supported by a
growing body of research documenting brain region
activity and response time in studies using fMRI and other
sophisticated measurement approaches (e.g., see Glovin
et al. 2011). Furthermore, harm to others has been identi-
fied as a key component of a multi-dimensional construct
termed moral intensity by Jones (1991), who argued that
high moral intensity influences ethical thought by making
Table 2 Moral disengagement tactics by sample scenarios in Study 1

Sample scenario: You're preparing for the final exam in a class where the professor uses the same exam in both sections. Some of your friends somehow get a copy of the exam after the first section. They are now trying to memorize the right answers. You don't look at the exam, but just ask them what topics you should focus your studying on.
Top 3 mechanisms of moral disengagement identified: Advantageous comparison = 37; Displacement of responsibility = 3; Distortion of consequences = 3

Sample scenario: You are assigned a team project in one of your courses. Your team waits until the last minute to begin working. Several team members suggest using an old project out of their fraternity/sorority files. You go along with this plan.
Top 3 mechanisms of moral disengagement identified: Advantageous comparison = 23; Displacement of responsibility = 8; Distortion of consequences = 2

Sample scenario: You work in a fast-food restaurant in downtown [City X]. It's against policy to eat food without paying for it. You came straight from classes and are therefore hungry. Your supervisor isn't around, so you make something for yourself and eat it without paying.
Top 3 mechanisms of moral disengagement identified: Distortion of consequences = 13; Moral justification = 8; Advantageous comparison = 7

Sample scenario: You work as an office assistant for a university department. You're alone in the office making copies and realize you're out of copy paper at home. You therefore slip a ream of paper into your backpack.
Top 3 mechanisms of moral disengagement identified: Distortion of consequences = 10; Advantageous comparison = 6; Attribution of blame = 4

Sample scenario: Your boss at your summer job asks you to get confidential information about a competitor's product. You therefore pose as a student doing a research project on the competitor's company and ask for the information.
Top 3 mechanisms of moral disengagement identified: Displacement of responsibility = 13; Euphemistic labeling = 3; Distortion of consequences = 2
harm salient and thus reducing situational ambiguity and
increasing felt responsibility.
In the presence of a personal gain opportunity, we have
argued that people are motivated to disengage their inter-
nalized moral standards (Bandura 1986) to reach a self-
interested outcome and to still appear ‘‘good’’ or ‘‘moral’’.
We now nuance that argument by theorizing that when
situational harm to others is concurrently present, it intro-
duces a ‘‘restraining force’’ (Kruglanski et al. 2012,p.1)
that should lessen the effect of personal gain on morally
disengaged reasoning. First, given its roots in evolution and
socialization, the possibility of harm to others will likely
trigger and reinforce internalized ‘‘do no harm’’ standards
(Hoffman 2000), making it more difficult for individuals to
deactivate those moral standards in the face of personal
gain. Second, a key component of motivated reasoning is
‘‘reasonable justification’’ (Kunda 1990, p. 480). Because
individuals desire to appear moral, they are ‘‘constrained by
plausibility’’ and can ‘‘only bend data and the laws of logic’’
so far before the appearance of objectivity (and thus,
morality) is compromised (Ditto et al. 2009, p. 314). By
increasing, intuitively and automatically, felt responsibility
and reducing situational ambiguity (i.e., harmful acts are
more clearly agreed upon as ‘‘wrong’’) (Bandura 1986;
Ditto et al. 2009; Jones 1991), situational harm to others
erodes the individual’s ability to plausibly use morally
disengaged reasoning to justify (internally or to another)
self-interested decisions. Therefore, we hypothesize that
situations involving harm to others will weaken the effects
of personal gain on moral disengagement:
Hypothesis 2 Situational harm to others will moderate
the relationship between situational personal gain and
moral disengagement, such that situational harm to others
will weaken the influence of personal gain situations on the
use of morally disengaged reasoning.
Conscientiousness as a Countervailing Force
Following prior theoretical and empirical work demon-
strating the importance of considering both situational and
individual level factors in behavioral ethics models (e.g.,
Kish-Gephart et al. 2010; Treviño 1986), we now turn to a
potential countervailing force at the individual level—
conscientiousness.
Conscientiousness refers to the degree to which indi-
viduals are dependable, hard-working, and organized
(Berry et al. 2007; Funder and Fast 2010). In addition to its
strong and consistent relationship with general job perfor-
mance (Barrick and Mount 1991; Barrick et al. 2001),
conscientiousness is also thought to have a moral compo-
nent (Cawley et al. 2000; McAdams 2009) that affects
moral cognition. According to Walumbwa and Schau-
broeck (2009), ‘‘conscientious individuals experience a
high degree of moral obligation: they value truth and
honesty, are less easily corrupted by others, and maintain a
high regard for duties and responsibilities.’’ This perspec-
tive is consistent with prior research that links conscien-
tiousness to honesty and helping behaviors (Lodi-Smith
and Roberts 2007; Roberts and Hogan 2001), as well as
empathy in adolescents (Del Barrio et al. 2004). Moreover,
those with high conscientiousness are self-disciplined and
have a strong sense of self-control (McCrae and John 1992;
Tangney et al. 2004). Together, these characteristics sug-
gest that, when faced with a high personal gain opportunity
in a work situation, highly conscientious individuals will be
less likely than their counterparts to succumb to the deac-
tivation of internal moral standards. Instead, we expect that
highly conscientious individuals will exercise more self-
discipline and remain more focused on their responsibili-
ties. As such, conscientiousness represents a likely dispo-
sitional candidate for weakening the influence of
situational personal gain on moral disengagement.
Hypothesis 3 Conscientiousness moderates the relation-
ship between personal gain and moral disengagement such
that high conscientiousness will weaken the influence of
personal gain situations on the use of morally disengaged
reasoning.
Methods: Study 2
To test our hypotheses and expand the results from Study 1,
we developed an experimental simulation that led partici-
pants to believe that they were involved in an actual work
task (rather than a laboratory experiment) with an on-
campus, university consulting group. Our goal was to create
an environment where we could manipulate personal gain
and harm to others in as close to a real work context as
possible so as to maximize the combination of internal and
external validity. We recruited junior and senior under-
graduate students from a business school class, informing
them that New Technology Research Consortium (NTRC),
a research consulting team of business school faculty,
needed student help to complete a large consulting project.
Participants were told that they could earn an average of $10
for less than an hour of work by helping the NTRC. The job
would require proofreading an NTRC client’s industrial
machine manual for errors using a specialized computer
software program (designed by the authors for this experi-
ment). While the NTRC is a fictitious entity created solely
for the purpose of the experiment, the faculty member
teaching the class from which participants were recruited
further supported the experiment’s realism by encouraging
students to help with this important type of field-based
consulting occurring on campus. He also offered students
course extra credit for helping. In addition, several months
before the ‘‘work opportunity’’ with the NTRC, the faculty
member offered course extra credit for participation in an
ostensibly unconnected survey. Embedded in this survey
were measures of our focal individual difference (consci-
entiousness) and control variables. After the ‘‘work oppor-
tunity’’ with the NTRC, students were debriefed (see details
below) and informed that the early semester survey was
connected to the experimental simulation. All study pro-
cedures received approval after a full review by the authors’
Institutional Review Board.
Participants
Study 2 was conducted at a large Northeastern public
university. Participants were recruited from an undergrad-
uate introductory Management course that included non-
business majors and business minor students. Because the
task involved proofreading manuals written in English, it
was explained that only native English speakers were
invited to participate. In total, 151 students participated in
the experiment by signing up for 1 h sessions. After
removing four subjects,² the final sample was 147. Just
over half of the participants were male (51 %), and the
majority of participants were white (86 %). The average
age was 20.77 years (SD =1.23). Participants had an
average of 2.78 years of work experience (SD =3.13).
Procedure
Participants arrived at their designated session time at one
of two computer labs on campus. To minimize the potential
for participants to talk about the characteristics of their
specific condition, session times were staggered and the
classrooms were located on two separate floors of the
building. Upon arrival, participants checked in with an
assistant waiting in the classroom and were given a packet
of materials that included a welcome card, payment form,
and a copy of a ‘‘one-page dissertation survey’’. The wel-
come card included a unique username and password for
each participant; the numbers on the password served to
randomly assign participants to conditions. Participants
were led to believe that all of the materials were anony-
mous and that the computer program could not identify the
specific user. In actuality, invisible ink was used to mark all
packet materials with a unique number so that the authors
could connect participants’ work (i.e., all written and on-
line materials) after the experiment.
Before the participants began their proofreading job, a
representative from the NTRC introduced herself and
provided background information on the NTRC, the pro-
ject, and the participants’ role. Participants were informed
that NTRC had been recently hired by a large Asian
manufacturer, specializing in industrial machinery, to
insure that the manufacturer’s transition into the U.S.
market was smooth. As part of this work for the client, the
NTRC noticed that the client’s product manuals, having
been written by non-native English speakers, contained
numerous mistakes and were therefore in need of signifi-
cant, immediate revision. Therefore, the NTRC was asking
participants for proofreading help:
This is where you come in. We are hiring you to help
with our contract by reading through a small part of
these manuals to identify wording that is not written
in easily understandable, correct English. We would
have hired professional editors to do this job, but that
process would take several months and we need this
done immediately.
We’ve created a basic proofreading program that will
display pages from various product manuals from our
client (of course, to protect our client’s identity, we
have used a fictitious company name in the manuals).
As you read through each page on your screen,
identify any sentences that need to be corrected by
simply clicking the ‘‘check box’’ at the end of the
sentence. You won’t be making any changes your-
self; we just want to get a sense of which areas
repeatedly strike the average reader as problematic.
This will shorten the amount of time it takes for our
staff to make the necessary corrections over the next
couple of weeks.
Participants were also informed that they would be paid
$1.25 for each page that they completed during the 30 min
session. If they worked at a steady and careful pace, the
NTRC representative believed the participants could earn at
least $10 for their work.³

² Two participants' data were removed from the analysis because they indicated they were suspicious about the true purpose of the ‘‘work opportunity’’. Two additional participants' data were removed from the analysis for unruly behavior, suggesting that they were not treating the task or the paperwork responsibly.

³ We selected a rate of $1.25/page after conducting pilot sessions at two universities not involved in the Study 2 experiment. We determined (by paying a generous flat rate and exhorting students to do their best for 15 min) that at a rate of $1.25/page, most students doing the job carefully (i.e., accurately finding the errors) would be able to earn $10 or more for just 30 min of work. Thus, even the baseline personal gain condition was quite generous considering the worker population in this task.

To help participants keep track of
time, the computer was programmed to notify them with a
pop-up box at the halfway point and then again when the
task was complete. To insure participants saw the pop-up
box, the pop-up box covered the entire screen and could
only be removed by clicking directly on the screen. Addi-
tionally, to maintain the perception of anonymity, partici-
pants were told that the program was not designed to accept
their personal information and they would need to report
how many pages they completed, along with their mailing
address, on the separate payment form in their packet at the
end of the session. Participants were told that a check for the
appropriate amount would be mailed to them.
The NTRC representative then instructed participants to
remove the welcome card from their packet, sign into the
computer program, and begin the task. Sample screen shots
of the proofreading activity are provided in the Appendix.
As described in detail below, pop-up boxes were used to
manipulate personal gain and make salient potential harm
to others.
Although the sessions were scheduled for 1 h, partici-
pants were asked to work on the proofreading task for only
30 min to reduce potential fatigue or loss of focus. Once
students began the activity, the NTRC representative left
the room, ostensibly to help other participants. The grad-
uate student assistant remained in the room throughout the
session to note any students who were disruptive or
appeared suspicious, and to insure that students did not talk
with each other during the task. However, this assistant was
instructed to act disinterested (by reading or working on
homework) so that participants did not feel scrutinized
during the activity.
At the end of the session, a ‘‘Ph.D. student’’ returned to
the room to give final instructions and to dismiss the session.
The participants were asked to complete two forms at this
time. One was the payment form described earlier which
included a series of process improvement items ostensibly
designed to help the NTRC personnel understand and
improve the way they conducted the sessions. Embedded in
this short survey were the manipulation check items.
The second form, represented as a dissertation survey,
was designed to discreetly measure morally disengaged
thinking. Specifically, the Ph.D. student informed partici-
pants that she had been given permission by the NTRC, in
exchange for her help conducting these sessions, to collect
some data for her dissertation. Her dissertation focused on
‘‘employees’ reactions to new positive and negative
workplace experiences,’’ similar to the new experience the
participants had just undergone. To encourage honest
responses about the experience, the Ph.D. student
explained that she was not affiliated with the NTRC:
I’m hoping that you might be willing to help me out
and answer a few quick questions on the purple sheet
about your general impressions of today’s work and
your ‘‘employer,’’ NTRC. Please note that I don’t
work for NTRC and that they won’t know what you
said on this form – your name is not on this survey
and you’re giving this survey to me, not them. So,
please be honest because otherwise your responses
aren’t useful for my research.
To further reiterate the point, the instructions on the
‘‘dissertation survey’’ were as follows: ‘‘Remember, I’m
studying employees’ negative and positive reactions to new
workplace experiences, so you don’t need to think much
about this. Just give me your honest first impressions of this
new experience.’’ Embedded among filler items, the survey
included questions tapping moral disengagement (see
below). This survey was collected separately from the first
form and included no visible personal identifiers. Thus,
participants were led to believe that this survey was for the
graduate student and unrelated to the work they did for
NTRC, and that it could not later be connected to the other
forms they completed.
To prevent diffusion of information about the experi-
ment, participants were not debriefed at the end of the
experiment. Instead, a debrief letter providing a full
explanation of the experiment’s purpose and results was
included with each participants’ mailed payment check.
Personal Gain Manipulation
To mimic real-world work conditions, the pay system was
used to manipulate personal gain. In the baseline personal
gain condition, participants were paid on a piece-rate
system, receiving $1.25 for each full page that was proofed.
In the enhanced personal gain condition, participants were
paid on the piece rate system and had the opportunity to
receive a bonus if they reached a certain level of perfor-
mance. Unbeknownst to participants, the level of perfor-
mance needed to earn the bonus was calculated by the
computer and set in real time to be unattainable without
seriously compromising quality. Thus, we surmised that the
enhanced personal gain condition would trigger self-inter-
est and motivate participants to morally disengage.
Specifically, the enhanced personal gain manipulation
involved some (randomly selected) participants receiving
the opportunity to earn a $10 bonus.⁴

⁴ To determine an appropriate level for the enhanced personal gain bonus condition, a pilot study was conducted at a private liberal arts college in the Midwest. The 68 pilot participants were recruited from three courses offered in management and accounting, and ranged from sophomores to seniors. Participants were randomly assigned to one of three personal gain conditions: no bonus beyond piece rate, $5 bonus, and $10 bonus. Based on analysis (e.g., changes in behavior in response to the different personal gain conditions) and information gleaned from debriefs, it was decided that a $10 bonus condition (essentially doubling the amount of money that could be earned) significantly enhanced recognition of opportunity for personal gain without evoking high suspicion.

At the halfway point
of the work task, all participants received a pop-up notifi-
cation. In the baseline personal gain condition, the pop-up
box stated, ‘‘You are now halfway through the 30 min
session.’’ In the enhanced personal gain condition, the pop-
up box stated, ‘‘You are now halfway through the 30 min
session. If you reach page [X] by the end of the session, we
will pay you a $10 bonus for your hard work.’’ The target
page number [X] was uniquely set by the computer pro-
gram for each participant in the enhanced personal gain
condition by doubling the individual’s progress during the
first 15 min of the task and adding it to the page number
s/he was currently working on. For example, if the par-
ticipant was on page 4 at minute 15, s/he was told in the
pop-up box that a $10 bonus would be paid for completing
12 pages (calculated as 4 + [4 × 2]) by the end of the
30 min session.
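A minimal sketch of this target-setting rule is shown below. It assumes, as in the authors' example, that the page a participant is currently on also equals the number of pages completed in the first 15 min; the function and variable names are ours, not those of the study's software.

```python
def bonus_target_page(current_page: int) -> int:
    """Bonus target set at the midpoint: double the first-half progress
    and add it to the page the participant is currently working on."""
    return current_page + 2 * current_page

# Example from the text: a participant on page 4 at minute 15 is told
# to reach page 12 by the end of the session to earn the $10 bonus.
print(bonus_target_page(4))  # 12
```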
Harm Manipulation
Harm to others was manipulated using two methods—a
statement in the pop-up box, and a verbal statement by the
NTRC representative. In the high harm condition, partici-
pants received the following statement in the mid-point pop-
up box: ‘‘Please stay focused—our company’s success in the
U.S. and our customers’ safety depend on the quality of your
work.’’ As shown in Table 3 (which shows the manipula-
tions for all four conditions), this statement was coupled
with the other condition-related statements (e.g., those for
baseline personal gain versus enhanced personal gain)
included in the mid-point pop-up message. In addition,
12 min into the activity (3 min prior to the pop-up box
appearing), the NTRC representative briefly returned to the
rooms containing high harm condition participants and sta-
ted, ‘‘I hope everything is okay. Keep it up. This client is
really important to our success, so I appreciate your careful
work. I’ll let you get back to work now…’’ Combined, the
harm manipulation stressed the importance of the partici-
pants’ work both to the safety of the end user (i.e., employees
who would use the client’s machines and rely on the manuals
to avoid serious injury) and to the NTRC’s success. We
expected that the emphasis on harm in these manipulations
would act as a countervailing force, weakening the rela-
tionship between enhanced personal gain and the use of
moral disengagement mechanisms (Hypothesis 2).
In contrast, in the no harm condition, participants
received only the basic pop-up box (‘‘You are now halfway
through the 30 min session’’) and no verbal statement (at
the 12 min mark) was made to the group by the NTRC
representative.
Measures
Situationally-Induced Moral Disengagement
To measure our dependent variable, we focused on two (of
eight) moral disengagement mechanisms because our
results from Study 1 suggested that specific situations
evoke use of particular moral disengagement mechanisms
rather than any or all of them equally. Specifically, we
assessed respondents’ attribution of blame and distortion of
consequences because the nature of the task participants
engaged in (correcting manuals for a company that had
made lots of mistakes, working under a payment scheme
Table 3 Pop-up box and verbal statements by condition

Baseline personal gain / No harm
Pop-up box: ‘‘You are now halfway through the 30 min session.’’
Verbal instructions: None

Baseline personal gain / High harm
Pop-up box: ‘‘You are now halfway through the 30 min session. Please stay focused – our company's success in the U.S. and our customers' safety depend on the quality of your work’’
Verbal instructions: ‘‘I hope everything is okay. Keep it up. This client is really important to our success, so I appreciate your careful work. I'll let you get back to work now…’’

Enhanced personal gain / No harm
Pop-up box: ‘‘You are now halfway through the 30 min session. If you reach page X by the end of this session we will pay you a $10 bonus for your hard work.’’
Verbal instructions: None

Enhanced personal gain / High harm
Pop-up box: ‘‘You are now halfway through the 30 min session. Please stay focused – our company's success in the U.S. and our customers' safety depend on the quality of your work. If you reach page X by the end of this session we will pay you a $10 bonus for your hard work’’
Verbal instructions: ‘‘I hope everything is okay. Keep it up. This client is really important to our success, so I appreciate your careful work. I'll let you get back to work now…’’
that allowed unmonitored inappropriate behavior, etc.)
seemed most likely to evoke these two types of moral
disengagement. Thus, a total of six items (α=.76)
appeared on the supposed ‘‘dissertation survey’’ to measure
the attribution of blame and distortion of consequences
mechanisms. All items were measured on a seven-point
Likert scale from strongly disagree (1) to strongly agree
(7). Sample items include: ‘‘NTRC should supervise people
more closely if they want them to do a good job’’ and ‘‘This
seemed like a job where my performance didn’t matter
very much to the overall success of the organization.’’
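For reference, the reliability reported here is presumably Cronbach's alpha computed over the six item scores. The helper below is a minimal sketch; the function name and the small illustrative score matrix are ours, not the study's data.

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative 5 respondents x 3 items (correlated responses)
scores = np.array([
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
    [2, 1, 2],
    [5, 5, 4],
])
print(round(cronbach_alpha(scores), 2))
```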
Conscientiousness
Conscientiousness was measured on the course survey
completed several weeks prior to the experiment with ten
items from Goldberg’s (1999) international personality
inventory pool (IPIP). The items were anchored on a five-
point Likert scale from strongly disagree (1) to strongly
agree (5), and had an estimated reliability of .85. Sample
items included, ‘‘I am always prepared,’’ ‘‘I pay attention to
details,’’ and ‘‘I am exacting in my work.’’
Controls
Three control variables that might account for additional
variance in the dependent variable (situationally-induced
moral disengagement mechanisms) were also entered into
the regression equation. First, prior research has suggested
that gender may be related to differences in moral rea-
soning (e.g., Gilligan 1977). Although empirical results
have been mixed (Tenbrunsel and Smith-Crowe 2008), a
recent meta-analysis found that gender was modestly
related to unethical choices (Kish-Gephart et al. 2010).
Thus, we included gender as a control variable; gender was
coded as zero for female and one for male. Second, we
measured general mental ability by obtaining participants’
overall SAT score. General mental ability was included in
our analysis because it has been linked to other ethical
cognitions such as cognitive moral development (Kohlberg
1969; Rest 1986). Lastly, based on prior work that has
conceptualized moral disengagement as a general tendency
to morally disengage across situations (i.e., a personality
trait) (Duffy et al. 2005; Moore 2008; Moore et al. 2012),
dispositional moral disengagement was included as a
control variable in our analyses. Dispositional moral dis-
engagement was measured with Detert et al’s. (2008)
24-item scale. Sample items included, ‘‘If someone is
pressured into doing something, they shouldn’t be blamed
for it’’ and ‘‘Damaging some property is no big deal when
you consider that others are beating up people’’ (α=.83).
The items were anchored from strongly disagree (1) to
strongly agree (5) on a five-point Likert scale.
Results
Table 4 includes the correlations, means, standard devia-
tions, and scale reliabilities (where appropriate) for the
Study 2 variables.
Manipulation Checks
Two items were used to check the personal gain manipu-
lation: ‘‘There were significant pay incentives for working
really quickly’’ and, ‘‘You were aware that you could earn
much more money if you got to a specific page number.’’
The means of the personal gain manipulation check
items followed the expected pattern (M_baseline=4.26 vs. M_enhanced=4.47), and results from a one-way ANOVA
Table 4 Means, standard deviations, correlations, and reliabilities for Study 2 variables

Variables                        M        SD      1        2       3        4       5        6       7
1. Gender (a)                    0.51     0.50    –
2. General mental ability        1103.00  158.98  0.18*    –
3. Dispositional MD              2.03     0.43    0.47**   0.06    (.83)
4. Personal gain condition (b)   0.46     0.50   -0.02     0.01   -0.07     –
5. Harm condition (c)            0.48     0.50    0.05    -0.03    0.01    -0.16    –
6. Conscientiousness             3.74     0.64   -0.17*    0.00   -0.33**  -0.07   -0.03    (.85)
7. Situationally-induced MD      4.01     1.05   -0.10     0.07    0.01     0.07   -0.22**  -0.12   (.76)

Reliability estimates appear on the diagonal; N = 136–147
(a) Female = 0, male = 1
(b) Baseline personal gain = 0, enhanced personal gain = 1
(c) Low harm = 0, high harm = 1
* p<.05, ** p<.01
comparing the baseline and enhanced personal gain con-
ditions were marginally significant (F=3.46; p=.06).
Also, consistent with expectations of the study design,
participants in the enhanced personal gain condition were
far more likely than those in the control condition to
increase their work pace after receiving the opportunity to
earn the bonus—32 % in the bonus (enhanced personal
gain) condition sped up their work rate in the second
15 min by greater than 50 % versus only 11 % in the
control condition (F=9.5; p=.002). Conversely, those
in the high harm conditions were less likely than those in
the control condition to increase their work rate by 50 % or
greater in the second half (13 vs. 26 %; F=3.66;
p=.058). These changes in pace were not innocuous
changes in motivation to work effectively: the correlation
between the first-half to second-half change in work-rate
ratio and the ratio of the number of errors made by a participant
was .69 (p<.001). In fact, no participant who significantly
sped up his/her work rate (1.5 times or faster on the second
half) was able to maintain the average overall accuracy rate
of 69 % of errors correctly detected.
To assess the success of the harm manipulation, the
following two items were used: ‘‘It was clear that harm
could be done to others if you didn’t do quality work’’ and,
‘‘We made it clear that the quality of your work was very
important to our success and the success of our clients.’’
Supporting this manipulation, the results of a one-way ANOVA revealed a significant difference between harm conditions (F = 8.59; p < .01), with those in the high harm condition rating harm as significantly higher (M = 4.26 for high harm vs. M = 3.81 for low harm). These analyses suggest that the manipulations were generally successful; that is, participants differentiated between the conditions with varying levels of harm and personal gain.
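For completeness, a manipulation check of this kind amounts to a one-way ANOVA on the averaged check items across the two conditions. A minimal Python sketch follows; the data frame and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import f_oneway

def manipulation_check(df: pd.DataFrame, condition_col: str, check_col: str):
    """One-way ANOVA comparing a manipulation-check rating across two conditions (coded 0/1)."""
    low = df.loc[df[condition_col] == 0, check_col]
    high = df.loc[df[condition_col] == 1, check_col]
    f_stat, p_value = f_oneway(low, high)
    return low.mean(), high.mean(), f_stat, p_value

# Illustrative usage:
# df = pd.read_csv("study2.csv")  # hypothetical file with columns "harm" (0/1) and "harm_check"
# manipulation_check(df, "harm", "harm_check")
```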
Hypotheses Tests
Hypothesis 1 proposes a direct relationship between personal gain and the use of moral disengagement mechanisms. To test this hypothesis, we used multiple regression, wherein the use of situationally induced moral disengagement mechanisms was regressed onto personal gain while controlling for participants' gender, general mental ability (total SAT score), and dispositional moral disengagement. As shown in Table 5 (Model 1), Hypothesis 1 is not supported (β = .12; ns).
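A sketch of this baseline model in Python (statsmodels' formula interface) is shown below. The variable names are hypothetical, and continuous predictors would need to be standardized to reproduce the standardized betas reported in Table 5; this is an illustration, not the authors' analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_model_1(df: pd.DataFrame):
    """Model 1: situational moral disengagement regressed on the personal gain manipulation plus controls."""
    # sit_md = situationally induced MD; gain = 0/1 condition; sat = SAT score; disp_md = dispositional MD
    return smf.ols("sit_md ~ gender + sat + disp_md + gain", data=df).fit()

# Illustrative usage:
# df = pd.read_csv("study2.csv")  # hypothetical file
# print(fit_model_1(df).summary())
```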
Hypotheses 2 and 3 predict that situational harm to
others and high conscientiousness (respectively) will
attenuate the relationship between situational personal
gain and moral disengagement. These hypotheses were
first assessed individually by adding the respective inter-
action term (i.e., the product of personal gain and harm to
others and the product of personal gain and conscien-
tiousness) to a baseline model containing the independent
and control variables (Baron and Kenny 1986; Kutner
et al. 2005).
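In regression terms, each moderation test adds a product term to the main-effects model in a second step. The sketch below illustrates this with the same hypothetical variable names; in the formula interface, gain * harm expands to both main effects plus their product.

```python
import statsmodels.formula.api as smf

def fit_moderation_models(df):
    """Step 1: main effects; Step 2: add the hypothesized interaction (illustrative sketch)."""
    harm_step1 = smf.ols("sit_md ~ gender + sat + disp_md + gain + harm", data=df).fit()
    harm_step2 = smf.ols("sit_md ~ gender + sat + disp_md + gain * harm", data=df).fit()
    consc_step1 = smf.ols("sit_md ~ gender + sat + disp_md + gain + consc", data=df).fit()
    consc_step2 = smf.ols("sit_md ~ gender + sat + disp_md + gain * consc", data=df).fit()
    return harm_step1, harm_step2, consc_step1, consc_step2
```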
As shown in Model 2, the interaction between personal gain and harm is significant (β = .29; p < .05). A graph of the interaction is presented in Fig. 1. A simple effects test using Tukey–Kramer multiple comparisons reveals a
Table 5 Results of regression analyses testing direct and moderating effects for Study 2

                                     Model 1   Model 2            Model 3            Model 4
Variables                                      Step 1    Step 2   Step 1    Step 2   Step 1    Step 2
Gender^a                             -0.16     -0.14     -0.13    -0.17     -0.17    -0.14     -0.14
General mental ability                0.09      0.08      0.08     0.10      0.11     0.08      0.10
Dispositional moral disengagement     0.08      0.07      0.11     0.03      0.05     0.03      0.09
Personal gain condition^b             0.12      0.08     -0.09     0.11      1.51**   0.07      1.28*
Harm condition^c                               -0.20*    -0.37**                     -0.21*    -0.36**
Conscientiousness                                                 -0.12      0.10    -0.12      0.11
Personal gain × harm                                      0.29*                                 0.27*
Personal gain × conscientiousness                                           -1.43**            -1.40**
R²                                    0.04      0.08      0.11     0.05      0.11     0.09      0.18
ΔR²                                                       0.03*              0.06**             0.09**

Standardized beta coefficients are reported
^a Female = 0, male = 1
^b Baseline personal gain = 0, enhanced personal gain = 1
^c Low harm = 0, high harm = 1
* p < .05, ** p < .01
significant difference in the use of moral disengagement
mechanisms between harm conditions when personal gain
is at its baseline condition (p < .05). However, when per-
sonal gain is significantly enhanced (via a bonus opportu-
nity to essentially double one’s earnings), the difference in
the use of moral disengagement mechanisms between harm
conditions is not statistically significant. In other words,
harm to others reduces situational moral disengagement in
a baseline personal gain condition (i.e., a piece-rate pay
system encouraging speed), but does not affect the level of
situational moral disengagement in the enhanced personal
gain condition. Thus, the results provide partial support for
Hypothesis 2.
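A simple-effects test of this kind can be approximated by treating the four gain-by-harm cells as groups and running Tukey pairwise comparisons (the Tukey–Kramer adjustment accommodates unequal cell sizes). The sketch below uses statsmodels' Tukey HSD routine with the same hypothetical variable names; it is illustrative rather than the authors' procedure.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def simple_effects(df: pd.DataFrame):
    """Pairwise comparisons of situational MD across the four gain-by-harm cells (illustrative sketch)."""
    cells = df["gain"].astype(str) + "_" + df["harm"].astype(str)  # e.g., "0_1" = baseline gain, high harm
    return pairwise_tukeyhsd(endog=df["sit_md"], groups=cells, alpha=0.05).summary()
```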
In support of Hypothesis 3, the interaction between personal gain and conscientiousness is significant (β = -1.43; p < .01), as shown in Model 3 (Table 5). The Hypothesis 3 interaction is presented in Fig. 2. A simple slopes analysis (Aiken and West 1991) reveals that the relationship between personal gain and situational moral disengagement is significant for individuals low in conscientiousness, t(139) = 2.84, p < .05, but not for individuals high in conscientiousness, t(139) = 1.02, ns. Therefore, while the effect of enhanced personal gain is stronger for less conscientious individuals, highly conscientious individuals appear to be less susceptible to moral disengagement when faced with high personal gain opportunities.
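A simple slopes analysis of this kind can be computed directly from the fitted interaction model's coefficients and covariance matrix, evaluated at one standard deviation below and above the mean of the moderator. The sketch below continues the hypothetical naming used earlier and is illustrative rather than the authors' procedure.

```python
import numpy as np

def simple_slope(model, moderator_value, iv="gain", inter="gain:consc"):
    """Slope of the IV at a given moderator value, with its standard error and t statistic."""
    b, cov = model.params, model.cov_params()
    slope = b[iv] + b[inter] * moderator_value
    se = np.sqrt(cov.loc[iv, iv]
                 + (moderator_value ** 2) * cov.loc[inter, inter]
                 + 2 * moderator_value * cov.loc[iv, inter])
    return slope, se, slope / se

# Illustrative usage with the Step 2 conscientiousness model from the earlier sketch:
# low = df["consc"].mean() - df["consc"].std(); high = df["consc"].mean() + df["consc"].std()
# simple_slope(consc_step2, low), simple_slope(consc_step2, high)
```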
For a more conservative test of Hypotheses 2 and 3, we examined both hypothesized interactions (i.e., personal gain × harm to others and personal gain × conscientiousness) in the same regression analysis. As shown in
Model 4 (Table 5), the interaction terms were added to a
baseline model of the independent and control variables in
a two-step process (Baron and Kenny 1986; Kutner et al.
2005). Consistent with the results of the earlier tests, the
interaction terms remain significant (Model 4, Step 2), and
explain an additional nine percent of the variance (beyond
the baseline of Model 4, Step 1) in situationally induced
morally disengaged thinking.
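The incremental variance explained by the product terms corresponds to a nested-model comparison (ΔR² with an F test for the change). A minimal sketch with the same hypothetical names follows.

```python
import statsmodels.formula.api as smf

def delta_r_squared_test(df):
    """Compare Model 4, Step 2 (both interactions) against its Step 1 baseline (illustrative sketch)."""
    step1 = smf.ols("sit_md ~ gender + sat + disp_md + gain + harm + consc", data=df).fit()
    step2 = smf.ols("sit_md ~ gender + sat + disp_md + gain * harm + gain * consc", data=df).fit()
    f_change, p_value, df_diff = step2.compare_f_test(step1)
    return step2.rsquared - step1.rsquared, f_change, p_value
```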
We also note that when entered into the regression equation along with the two manipulated situational variables (see Step 1, Model 2 in Table 5), none of our control variables, including dispositional moral disengagement, has a significant direct effect on situational moral disengagement. This finding is somewhat surprising given the primary focus of prior research on the trait (individual propensity) approach to studying moral disengagement. As a robustness check, we also tested for the possible attenuating effect of dispositional moral disengagement on the relationship between personal gain and situational moral disengagement. This interaction is also non-significant (β = .08; ns).
Discussion
Study 2 was designed to complement and extend our
findings from Study 1. In particular, we sought to test all
three hypotheses in an experimental simulation that closely
mirrored a real-world work task, thereby increasing inter-
nal and external validity. Contrary to our expectations and
the results of Study 1, we did not find support for the
personal gain hypothesis (Hypothesis 1) in Study 2. This is
a surprising finding given the strength of the enhanced
personal gain condition—participants could essentially
double the money they earned for their work, walking away
with more than $20 for 30 min of their time. A likely
explanation involves the baseline personal gain condition.
The personal gain manipulation was designed to mimic the
type of ‘‘incentive environment’’ commonly faced by
Fig. 1 Changes in situational moral disengagement based on manipulated levels of personal gain and harm to others in Study 2

Fig. 2 Changes in situational moral disengagement based on manipulated levels of personal gain and measured levels of conscientiousness in Study 2
employees in work organizations: employees are typically
offered a base incentive for regular work performance in
addition to the potential to earn a bonus for superior per-
formance. We chose a piece-rate system as the base
incentive because we reasoned that offering a simple
hourly pay rate just for showing up would reduce partici-
pants’ seriousness in completing the task. It is not sur-
prising (as noted when discussing the manipulation check)
that those in the ‘‘baseline’’ personal gain condition
reported a high level of personal gain opportunity (a mean
of 4.24 on a five-point scale). Thus, our inability to detect a
significant relationship between level of personal gain and
the use of moral disengagement mechanisms in Study 2
may represent a restriction of range on the personal gain
variable rather than evidence that the theory underlying
Hypothesis 1 is flawed. Clearly, additional research is
required.
In partial support of Hypothesis 2, we found that making
harm to others salient lessens the effect of personal gain on
the use of moral disengagement mechanisms, but only in
the baseline personal gain condition. This result may be
seen as cause for either optimism or pessimism. On the
positive side, because the harm manipulation consisted of
relatively subtle verbal and written statements, this result
appears promising: organizations may be able to reduce
morally disengaged reasoning in the face of common
baseline personal gain incentives—including piece-rate
systems—via the modest effort needed to make salient the
potential for harm to relevant stakeholders. On the negative
side, the results of Study 2 suggest that when self-interest is sufficiently stoked by the opportunity for more significant personal gain, morally disengaged reasoning is likely to increase irrespective of warnings about potential harm, at least unless those warnings are sounded far more loudly than they were here.
In support of Hypothesis 3, we found that individuals
characterized by high conscientiousness were more likely
to resist the temptation of an enhanced personal gain
opportunity. This finding is consistent with suggestions that
conscientiousness inherently includes a moral component
(Cawley et al. 2000; McAdams 2009) and ‘‘involves
dependably doing what one has promised to do’’ (Becker
1998, p. 158). Thus, organizations may find that highly
conscientious individuals will be more task-focused
and less reward-focused, helping to counteract the effects
of enhanced personal gain on morally disengaged
reasoning.
General Discussion
In this research, we proposed two factors that can act as
countervailing forces to minimize self-interest’s effects on
morally disengaged reasoning. Drawing from multiple
literatures, we theorized that situational harm to others
and dispositional conscientiousness would act as restrain-
ing forces that weaken the effects of personal gain situa-
tions on morally disengaged reasoning. Across two
studies, we found some support for our theorizing. We
demonstrated that when personal gain incentives are rel-
atively moderate, making harm to others salient can
reduce the likelihood that employees will morally disen-
gage. Furthermore, when strong personal gain incentives
are present in a situation, highly conscientious individuals
are less apt than their counterparts to engage in morally
disengaged reasoning.
Implications for Theory and Research
Our findings offer several important implications for theory
and research. First, by understanding the effects of situa-
tionally-motivated self-interest through the lens of moral
disengagement and motivated cognition theories (Bandura
2001; Bandura et al. 1996; Kunda 1990), we demonstrate
that the effects of self-interested motivations can be min-
imized in certain circumstances. This extends prior
research that has acknowledged the importance and
potential consequences of self-interest (e.g., Moore and
Lowenstein 2004; Schweitzer et al. 2004), but has not
empirically studied the relationship between self-interest
and morally disengaged reasoning (Bersoff 1999). Further,
by exploring the countervailing force of situational harm to
others, we answer calls for additional work that goes
beyond a singular focus on self-interest as a driver of
ethical reasoning (e.g., Cropanzano et al. 2007).
Although the countervailing effect of situational harm to
others occurred only at the baseline level of personal gain
in Study 2, it is important to note that the harm manipu-
lation in Study 2 was relatively subtle, involving only
verbal and written statements of potential harm. The fact
that even this manipulation reduced the incidence of
morally disengaged reasoning in the face of performance
incentives is promising. It is possible that stronger and
more emotionally charged harm manipulations—such as
pictures of victims or time spent with potential victims (for
example, see Kunda 1990)—will be more effective in
countering the effects of situations involving higher levels
of personal gain. Following from Jones’ (1991) work, such
manipulations may further emphasize the cultural and
psychological nearness to potential victims. Such under-
standing is especially important given today’s global
business environment where potential victims are often
both psychologically and physically distant from
employees.
Future research should consider other situational and
dispositional factors that may act as countervailing forces
that minimize the relationship between personal gain and
morally disengaged reasoning. For example, Graham
et al. (2011) noted that, along with concerns about harm,
fairness is an extremely powerful moral standard and
motive that operates across cultures. Thus, as suggested
by our additional analyses in Study 1, where statements
coded as ‘‘moral standards’’ often involved respondent
concerns about fairness, it is possible that situational
features that focus attention explicitly on unfair treatment
of stakeholders will also attenuate the effects of self-
interest.
Second, our research expands the currently dominant
trait approach to studying morally disengaged reasoning,
demonstrating that being predisposed to morally disengage
is not the sole explanation for the use of moral disen-
gagement mechanisms and that certain types of situations
(in this case, self-interested ones) are likely to trigger this
type of thinking. Indeed, despite previous research support,
a measure of the propensity to morally disengage did not
predict morally disengaged reasoning in Study 2 while
situational harm to others did. Future research should
combine dispositional and situational approaches to better
understand when dispositional moral disengagement is
more likely to predict moral disengagement and when it is
more subject to or overwhelmed by situational triggers. In
short, a more complete picture of why employees morally
disengage will likely include dispositional and situational
components and their interactions.
Third, our study may also help to shed light on one of
the ambiguities in the current moral disengagement lit-
erature. Bandura’s theory (1986) does not explicitly dis-
cuss when moral disengagement occurs—before, during,
or after behavior. Others have argued that this type of
reasoning can occur pre- or post-unethical action (e.g.,
Ashforth and Anand 2003; Cressey 1953; Shu et al. 2009;
Sykes and Matza 1957). Our studies were not designed to
directly address this question. However, our results sug-
gest that people can and do consciously state or endorse
reasons for their decisions that are consistent with moral
disengagement theory. In our studies, participants were
never confronted about their behavioral intentions or
behavior being ethically questionable and thus had no
clear external stimulus to use morally disengaged rea-
soning. Yet, across both studies, participants either pro-
vided explanations for their decisions that were rife with
moral disengagement mechanisms (Study 1) or they
agreed that their own thoughts matched morally disen-
gaged reasoning (Study 2).
Last, in studying moral disengagement as a disposition,
prior research has tended to use general measures of
moral disengagement, combining Bandura’s (1986) eight
moral disengagement mechanisms into one overall com-
posite measure (e.g., Detert et al. 2008; Moore et al.
2012). While this composite approach appears to be
appropriate when moral disengagement is treated as a
general disposition, our findings suggest that situational
features are more likely to trigger the use of specific and
select moral disengagement mechanisms. For example, in
Study 1, individuals were more likely to use a subset of
moral disengagement mechanisms based on the scenario
presented (e.g., diffusion of responsibility in group deci-
sion situations). This suggests that researchers should
carefully consider whether a general or more specific
moral disengagement measure is appropriate, especially
when studying situational antecedents such as personal
gain opportunities. Additional research will be necessary
to elucidate relationships between situational features and
specific moral disengagement mechanisms, and to develop
valid measures of each mechanism for use in this type of
research.
Implications for Practice
Our study suggests that organizations should consider how
they might be creating personal gain situations that,
whatever their merits, may also lead to morally disengaged
reasoning. For example, pay systems such as sales com-
mission systems or year-end bonus systems that offer the
majority of one’s pay based on performance levels may
create ideal conditions for moral disengagement if
achieving performance goals is difficult or impossible
without engaging in unethical behavior (e.g., lying about
one’s performance or lying to customers). Thus, when
creating reward systems, managers should ask themselves
(and their employees) whether the system is likely to create
such conditions and, if the answer is yes, they should
attempt to alter them. The challenge remains how to do so
without eliminating entirely the positive motivational
benefits of self-interest.
Focusing solely on minimizing the effects of reward
systems, however, may not be enough. Most work contexts
have an underlying personal gain frame—regularly incen-
tivizing employees with challenging goals and accompa-
nying rewards (Moore and Lowenstein 2004). Our results
suggest two specific ways to reduce the likelihood that
personal gain incentives will trigger morally disengaged
reasoning. First, organizations can attempt to hire highly
conscientious employees who are not only strong per-
formers (Barrick and Mount 1991; Barrick et al. 2001), but
who also appear to have a natural resistance to personal
gain incentives’ effect on morally disengaged reasoning.
Second, organizations should take steps to emphasize
potential harm to stakeholders, particularly when pay
incentives are triggering a moderate or higher level of
motivated pursuit of self-interest. For example, requiring
employees to analyze potential harm to stakeholders in any
new project, product, or decision may keep the ‘‘do no
harm’’ standard in the forefront, thus reducing the likeli-
hood of personal gain opportunities leading to moral dis-
engagement. Focusing attention on specific harmed
individuals, such as those hurt or killed by preventable
errors, may be particularly effective (LeBreton and Senter
2008). Ideally, of course, organizations would simply avoid
offering exorbitant pay incentives, especially those that
cannot routinely be achieved without compromising ethical
standards, because under such conditions there may be
almost nothing that will inhibit the tendency to morally
disengage.
Strengths and Limitations
In addition to the aforementioned theoretical contributions,
our work features several methodological strengths. First,
we utilized multiple methods, triangulating the results from
analysis of qualitative data in Study 1 with an experimental
simulation in Study 2. To our knowledge, Study 1 is the
first moral disengagement study that uses rigorous analysis
of qualitative data to glean valuable insight into partici-
pants’ cognition. And, in Study 2, we exposed participants
to a work scenario, in which they were unaware that they
were involved in an experiment. This technique allowed us
to directly manipulate conditions in a way that provided
external realism (McGrath 1982).
Despite these methodological strengths, our work also
has limitations. First, across both studies, we found a
positive relationship between personal gain and morally
disengaged reasoning (as hypothesized). However, this
relationship was not statistically significant in Study 2. In
our attempt to simulate a realistic work environment, we
created a baseline personal gain condition in which all
participants were paid by piece-rate, as opposed to com-
paring enhanced personal gain to a ‘‘no personal gain’’
situation. But, by doing so, we likely limited our ability to
find a statistically significant effect of situational personal
gain on moral disengagement. Future research should
extend our results by examining the relationship between
moral disengagement mechanisms and more nuanced lev-
els of personal gain in situations. Further inquiry might also
compare the effects of different types of incentive systems.
Although we found support for our two countervailing
forces in Study 2, our study design does not allow inference
about temporal effects and, in particular, the extent to
which the salience of harm to others would continue to
attenuate the effects of moderate personal gain over time.
Future research should consider various initiatives that
emphasize harm and examine the effectiveness of those
initiatives over extended time periods. For example, might
employees develop a schema over time that automatically
includes the consideration of harm in decision making
(Gioia 1992) or, rather, become immune to the message
(especially when seduced with high personal gain
incentives)?
Conclusion
Self-interest is a powerful motivator. Indeed, shortly before
the financial meltdown in 2008, a Wall Street trader
received a $3 million bonus (and later, multiple job
offers) for unloading toxic mortgage-backed securities off
the company’s books and onto unwitting investors (Ressler
and Mitchell 2011). Later, such traders and other Wall
Street insiders were quoted as diffusing responsibility and
blaming their victims (McLean and Nocera 2010; Ressler
and Mitchell 2011). These examples are vivid reminders
that personal gain opportunities are likely to trigger mor-
ally disengaged reasoning. In this research, we aimed to
begin clarifying what forces, if any, can counteract this
tendency. Our research demonstrates that the effect of
personal gain situations on moral disengagement can, in
some situations, be weakened by certain dispositional (e.g.,
conscientiousness) and situational (e.g., situational harm to
others) factors. We hope that this and further efforts to
understand these countervailing forces will help managers
learn how to minimize moral disengagement in their
organizations.
Acknowledgments We extend our thanks to Vikas Anand, Mike
Brown, Dan Chiaburu, David Harrison, Nate Petitt, and the members
of the ORG seminar at Penn State for their feedback on earlier drafts.
Appendix
Study 2 Proofreading Activity: Screens One and Two
References
Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and
interpreting interactions. Thousand Oaks: SAGE Publications
Inc.
Ashforth, B. E., & Anand, V. (2003). The normalization of corruption
in organizations. Research in Organizational Behavior, 25, 1–52.
Bandura, A. (1986). Social foundations of thought and action. Upper
Saddle River: Prentice Hall.
Bandura, A. (2001). Social cognitive theory: An agentic perspective.
Annual Review of Psychology, 52, 1–26.
Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996).
Mechanisms of moral disengagement in the exercise of moral
agency. Journal of Personality and Social Psychology, 71(2),
364–374.
Bandura, A., Caprara, G. V., & Zsolnai, L. (2000). Corporate
transgressions through moral disengagement. Journal of Human
Values, 6(1), 57–64.
Bandura, A., Caprara, G. V., Barbaranelli, C., Pastorelli, C., &
Regalia, C. (2001). Sociocognitive self-regulatory mechanisms
governing transgressive behavior. Journal of Personality and
Social Psychology, 80(1), 125–135.
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator
variable distinction in social psychological research: Conceptual,
strategic, and statistical considerations. Journal of Personality
and Social Psychology, 51(6), 1173–1182.
Barrick, M., & Mount, M. K. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
Barrick, M., Mount, M. K., & Judge, T. (2001). Personality and
performance at the beginning of the new millennium: What do
we know and where do we go next? International Journal of Selection and Assessment, 9, 9–30.
Bartko, J. J. (1976). On various intraclass correlation reliability coefficients. Psychological Bulletin, 83, 762–765.
Batson, C. D., Lishner, D. A., Carpenter, A., Dulin, L., Harjusola-Webb, S., Stocks, E. L., et al. (2003). "…As you would have them do unto you": Does imagining yourself in the other's place stimulate moral action? Personality and Social Psychology Bulletin, 29(9), 1190–1201.
Becker, T. E. (1998). Integrity in organizations: Beyond honesty and
conscientiousness. Academy of Management Review, 23(1), 154–161.
Berry, C. M., Ones, D. S., & Sackett, P. R. (2007). Interpersonal
deviance, organizational deviance, and their common correlates:
A review and meta-analysis. Journal of Applied Psychology,
92(2), 410–424.
Bersoff, D. M. (1999). Explaining unethical behavior among people
motivated to act prosocially. Journal of Moral Education, 28(4),
413–428.
Cawley, M. J., III, Martin, J. E., & Johnson, J. A. (2000). A virtues approach to personality. Personality and Individual Differences, 28, 997–1013.
Claybourn, M. (2011). Relationships between moral disengagement,
work characteristics, and workplace harassment. Journal of
Business Ethics, 100, 283–301.
Cooper, J. (2001). Motivating cognitive change: The self-standards
model of dissonance. In J. P. Forgas, K. D. Williams, & S.
C. Wheeler (Eds.), The social mind: Cognitive and motivational
aspects of interpersonal behavior (pp. 72–91). New York:
Cambridge University Press.
Cressey, D. R. (1953). Other people's money: A study in the social psychology of embezzlement. Glencoe, IL: Free Press.
Cropanzano, R., Stein, J., & Goldman, B. M. (2007). Self-interest. In
J. Bailey (Ed.), Handbook of organizational and managerial
wisdom. Thousand Oaks: Sage Publications.
de Waal, F. B. M. (2008). Putting the altruism back into altruism: The
evolution of empathy. Annual Review of Psychology, 59,
279–300.
Del Barrio, V., Aluja, A., & Garcia, L. F. (2004). Relationship
between empathy and the big five personality traits in a sample
of Spanish adolescents. Social Behavior and Personality, 32(7),
677–682.
Detert, J. R., Treviño, L. K., & Sweitzer, V. L. (2008). Moral disengagement in ethical decision making: A study of antecedents and outcomes. Journal of Applied Psychology, 93(2), 374–391.
Ditto, P. H., Pizarro, D. A., & Tannenbaum, D. (2009). Motivated
moral reasoning. Psychology of Learning and Motivation, 50,
307–338.
Duffy, M. K., Aquino, K., Tepper, B. J., Reed, A., & O’Leary-Kelly,
A. M. (2005). Moral disengagement and social identification:
When does being similar result in harm doing? Paper presented
at the annual meeting of the Academy of Management.
Eisenberg, N. (2000). Emotion, regulation, and moral development.
Annual Review of Psychology, 51, 665–697.
Funder, D. C., & Fast, L. A. (2010). Personality in social psychology.
In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of
social psychology (Vol. 1, pp. 668–697). Hoboken, NJ: John
Wiley & Sons Inc.
Gilligan, C. (1977). In a different voice: Women’s conceptions of the
self and morality. Harvard Educational Review, 49, 431–446.
Gioia, D. A. (1992). Pinto fires and personal ethics: A script analysis
of missed opportunities. Journal of Business Ethics, 11(5–6),
379–389.
Glovin, D., Kishan, S., Hurtado, P., & Burton, K. (2011, February 9).
SAC Ex-portfolio managers accused in trading probe. Bloom-
berg BusinessWeek.
Goldberg, L. R. (1999). A broad-bandwidth, public domain,
personality inventory measuring the lower-level facets of
several five-factor models. In I. Mervielde, I. Deary, F. De
Fruyt, & F. Ostendorf (Eds.), Personality psychology in Europe
(Vol. 7, pp. 7–28). Tilburg, The Netherlands: Tilburg Univer-
sity Press.
Graham, J. W., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto,
P. H. (2011). Mapping the moral domain. Journal of Personality
and Social Psychology, 101(2), 366–385.
Grover, S. L., & Hui, C. (1994). The influence of role conflict and
self-interest on lying in organizations. Journal of Business
Ethics, 13(4), 295–303.
Haidt, J. (2001). The emotional dog and its rational tail: A social
intuitionist approach to moral judgment. Psychological Review,
108(4), 814–834.
Haidt, J., & Kesebir, S. (2010). Morality. In S. T. Fiske, D. T. Gilbert,
& G. Lindzey (Eds.), Handbook of social psychology (5th ed.,
Vol. 2). Hoboken, NJ: John Wiley & Sons.
Hinrichs, K. T., Wang, L., Hinrichs, A. T., & Romero, E. J. (2012).
Moral disengagement through displacement of responsibility:
The role of leadership beliefs. Journal of Applied Social
Psychology, 42(1), 62–80.
Hoffman, M. L. (2000). Empathy and moral development: Implica-
tions for caring and justice. New York: Cambridge University
Press.
Jones, T. M. (1991). Ethical decision making by individuals in
organizations: An issue-contingent model. Academy of Manage-
ment Review, 16(2), 366–395.
Jones, T. M., & Ryan, L. V. (1997). The link between ethical
judgment and action in organizations: A moral approbation
approach. Organization Science, 8(6), 663–680.
Kelman, H. C., & Hamilton, V. L. (1989). Crimes of obedience. New
Haven: Yale University Press.
Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95(1), 1–31.
Kohlberg, L. (1969). Stage and sequence: The cognitive-develop-
mental approach to socialization. In D. A. Goslin (Ed.),
Handbook of socialization theory and research. Chicago: Rand
McNally.
Kruglanski, A. W., Belanger, J. J., Chen, X., Kopetz, C., Pierro, A., &
Mannetti, L. (2012). The energetics of motivated cognition: A
force-field analysis. Psychological Review, 119(1), 1–20.
Kunda, Z. (1990). The case for motivated reasoning. Psychological
Bulletin, 108(3), 480–498.
Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied
linear statistical models (5th ed.). New York: McGraw-Hill.
LeBreton, J. M., & Senter, J. L. (2008). Answers to 20 questions
about interrater reliability and interrater agreement. Organiza-
tional Research Methods, 11(4), 815–852.
Lodi-Smith, J., & Roberts, B. W. (2007). Social investment and
personality: A meta-analysis of the relationship of personality
traits to investment in work, family, religion, and volunteerism.
Personality and Social Psychology Review, 11, 68–86.
Maruna, S., & Copes, H. (2005). Excuses, excuses: What have we
learned from five decades of neutralization research? Crime and
Justice, 32, 221–320.
McAdams, D. P. (2009). The moral personality. In D. Narvaez & D.
K. Lapsley (Eds.), Personality, identity, and character: Explo-
rations in moral psychology (pp. 11–29). New York: Cambridge
University Press.
McCrae, R. R., & John, O. P. (1992). An introduction to the five-
factor model and its applications. Journal of Personality, 60(2),
175–215.
McGrath, J. E. (1982). Dilemmatics: The study of research choices. In
J. E. McGrath, J. Martin, & R. A. Kulka (Eds.), Judgment calls
in research (pp. 69–102). Beverly Hills: Sage Publications.
McLean, B., & Nocera, J. (2010). All the devils are here: The hidden
history of the financial crisis. New York: Penguin Group.
Miller, D. (1999). The norm of self-interest. American Psychologist,
54(12), 1053–1060.
Moore, C. (2008). Moral disengagement in processes of organiza-
tional corruption. Journal of Business Ethics, 80, 129–139.
Moore, D. A., & Lowenstein, G. (2004). Self-interest, automaticity,
and the psychology of conflict of interest. Social Justice
Research, 17(2), 189–202.
Moore, C., Detert, J. R., Treviño, L. K., Baker, V. L., & Mayer, D. M. (2012). Why employees do bad things: Moral disengagement and unethical organizational behavior. Personnel Psychology, 65(1), 1–48.
Nocera, J. (2011, March 25). In prison for taking a liar loan. New York
Times.
Ressler, P., & Mitchell, M. (2011). Conversations with Wall Street:
The inside story of the financial Armageddon and how to prevent
the next one. New York: FastPencil Inc.
Rest, J. R. (1986). Moral development: Advances in research and
theory. New York: Praeger Publishers.
Roberts, B. W., & Hogan, R. (2001). Personality psychology in the
workplace. Washington, DC: American Psychological Associa-
tion Press.
Schweitzer, M. E., Ordonez, L., & Douma, B. (2004). Goal setting as
a motivator of unethical behavior. Academy of Management
Journal, 47(3), 422–432.
Sen, A. K. (1977). Rational fools: A critique of the behavioral
foundations of economic theory. Philosophy & Public Affairs,
6(4), 317–344.
Shu, L. L., Gino, F., & Bazerman, M. H. (2009). Dishonest deed, clear
conscience: Self-preservation through moral disengagement and
motivated forgetting. Harvard Business School Working Paper
Series, no. 09-078.
Smith, A. (2011, February 28). Madoff says his victims were
‘greedy’. CNNMoney.
Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A
theory of delinquency. American Sociological Review, 22,
664–670.
Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-
control predicts good adjustment, less pathology, better grades,
and interpersonal success. Journal of Personality, 72(2),
271–322.
Tenbrunsel, A., & Messick, D. M. (1999). Sanctioning systems,
decision frames, and cooperation. Administrative Science Quar-
terly, 44(4), 684–707.
Tenbrunsel, A., & Smith-Crowe, K. (2008). Ethical decision making:
Where we’ve been and where we’re going. Academy of
Management Annals, 2(1), 545–607.
Treviño, L. K. (1986). Ethical decision making in organizations: A person–situation interactionist model. Academy of Management Review, 11(3), 601–617.
Tsang, J. (2002). Moral rationalization and the integration of
situational factors and psychological processes in immoral
behavior. Review of General Psychology, 6(1), 25–50.
Walumbwa, F. O., & Schaubroeck, J. (2009). Leader personality traits
and employee voice behavior: Mediating roles of ethical
leadership and work group psychological safety. Journal of
Applied Psychology, 94(5), 1275–1286.
Wang, L., & Murnighan, J. K. (2011). On greed. Academy of
Management Annals, 5(1), 279–316.