
Organizations Matter

University of Utah
Boston University
Chapter prepared for The Social Psychology of Good and Evil (2nd ed., New York: Guilford Press)
Editor: Arthur G. Miller
Acknowledgements: We thank Art Miller and Alex Romney for their helpful comments on earlier versions of
this chapter, and Teng Zhang for his bibliography assistance.
Organizations are collectivities with more or less identifiable boundaries, hierarchies, rules and
procedures, and communication systems. They engage in activities related to a set of goals, and the
results of those activities have consequences for the organizations themselves, their members, and
the societies in which they are embedded (e.g., Etzioni, 1964; Hall, 1977; Scott & Davis, 2007;
Simon, 1964). Organizations come in a variety of shapes, sizes, and types, including, for instance,
governments, armies, religious denominations, charities, and businesses. Largely because of our
familiarity with them, this chapter focuses on the latter.
The principal message we want to convey is that understanding organizations matters for the social
psychology of good and evil. We make two cases for our claim. One is that organizations have the
capacity to create widespread good, as well as to perpetrate horrific evil. Here the case we make is an
intuitive one: we give examples of companies doing wrong and doing good. The second case we
make is based on our knowledge as social scientists: we argue that organizations are distinct from
individuals and small groups. That is, understanding individuals and groups does not equate to
understanding organizations; indeed, the “atomistic fallacy” is to generalize from individual (or
group) behavior to that of higher-level entities (Alker, 1969). To this end, we review features of
organizations likely to promote the production of good and evil: hierarchy and power, “tone at the
top” and socialization, and moral blindness and automaticity. Further, we discuss the concept of
emergence, which serves to connect these features of organizations to organizations’ capacity to do
good and evil. The chapter closes with a discussion of where to go from here.
We begin with what the news media makes difficult to ignore: the capacity for organizations to
perpetrate evil. For instance, although it happened more than 30 years ago, the Union Carbide
pesticide plant gas leak in Bhopal, India remains headline news today (Editorial Board, 2014). More
than 5,000 people were killed in the aftermath, and the leak has continued to plague Bhopal for
generations: environmental contamination has caused cancer, birth defects, and developmental
problems, and the toll now stands at 600,000 people affected. The site has still not been cleaned
up. Nor has everyone
faced justice. While eight Indian executives were convicted of negligence (Editorial Board, 2014),
India considered Warren Anderson, the then CEO of Union Carbide, to be a fugitive (Martin, 2014).
Having been released on bail in the days after the tragedy, he left India never to return. He died in
2014 (Martin, 2014).
Walmart, the largest private employer in the U.S. (Hess, 2013), with 11,000 stores and 2.2
million employees worldwide (Walmart, n.d.), is a plentiful source of examples of corporate
wrongdoing. It is routinely embroiled in controversies, including bribery (Barstow, 2012),
mishandling hazardous waste (Clifford, 2013), sex discrimination (Hines, 2012), and violating
workers’ rights (Greenhouse, 2014; Trottman & Banjo, 2015). Walmart’s treatment of its workers in
particular made it an obvious context for Ehrenreich’s (2001) investigation into the plight of the
working poor. Notably, she used words such as dictatorship and authoritarian to describe the
working conditions she experienced.
Her choice of terms recalls the all-too-recent past when companies including BMW and Siemens, with
little apparent respect for human rights and dignity, utilized Jewish slave labor (Cohen, 1999). It also
recalls a lesser known chapter of history: the continuance of black slavery in the American South
after the Emancipation Proclamation and into the 1940s (Blackmon, 2008). Economically devastated
by the Civil War, the South rebuilt itself through alternative means of coerced labor. Among other
schemes, local governments created convict leasing programs. Black men especially were targeted
for arrests for trivial “offenses” (e.g., vagrancy, meaning not having immediate proof of
employment) and severe punishments. Once in the system they were “leased” to private enterprises
including mines, lumber camps, quarries, farms, factories, railroads, and foundries, creating a source
of revenue for government and a source of very cheap labor for business. To illustrate the cruelty of
the system, Blackmon (2008) tells the story of Green Cottenham. Arrested in 1908 in Shelby
County, Alabama, for vagrancy, the twenty-two-year-old black man was found guilty and sentenced
to 30 days of hard labor. Because he was unable to pay the numerous fees assessed for the
processing of his case, his sentence was extended to almost one year of hard labor, which he served
in a coal mine operated by a subsidiary of U.S. Steel. The system was as dangerous as it was unjust:
many prisoners leased to businesses died of injury, disease, and murder during their servitude; many
more were brutalized by practices such as whippings.
At the other end of this spectrum are companies that are engaged in doing widespread good.
Patagonia is one such company. Despite being in the retail industry, Patagonia counts responsible
consumerism among its causes. The average American buys 64 pieces of new clothing a year (Cline,
2012); correspondingly, the Environmental Protection Agency (n.d.) estimates that in 2012 textiles
alone generated over 14 million tons of municipal solid waste. In response, Patagonia began their
“Don’t Buy This Jacket” (unless you really need it) campaign (Nudd, 2011) and promoted their
Common Threads Initiative (The Cleanest Line, n.d.), aimed at reducing unnecessary purchases of
their products and at the repair and recycling of existing products. They have reportedly repaired
more than 26,000 items, reused more than 41,000, and recycled more than 56 tons of worn-out
Patagonia clothes (Landwehr, 2013).
Microsoft is another company engaged in doing good on a large scale by tackling a significant
societal evil: they are combating child sex trafficking. The Department of Homeland Security (2014)
estimates that human trafficking is a $32 billion per year industry, making it the second most
profitable criminal industry. Microsoft, finding existing knowledge about the use of technology in
child sex trafficking lacking, created a research grant program in 2011 to fund research into
understanding how child victims are recruited, abducted, and sold via technology, ultimately
addressing questions including “how do ‘pimps’ advertise victims for sale online,” “how do ‘johns’
search for their victims,” and “how do ‘pimps’ confirm that a ‘john’ is not a law enforcement
officer” (Microsoft Research, 2011). Johnson-Stempson, the Education and Scholarly
Communication Principal Research Director for Microsoft Research, described the first steps of the
initiative as follows: “We thought the best way we could help is to first define the problem. Then it’s
much easier to build a technology solution that may disrupt it” (Microsoft, 2012). To date this
sponsored research has led to progress toward defining the problem, including understanding that
the use of technology is more varied than previously thought (“spreading across multiple online sites
and digital platforms”) and that despite the increased access of mobile technology throughout the
world, variation in access implies the need for approaches tailored to targeted groups (Latonero et
al., 2012).
The examples we provide are illustrative of the capacity for organizations to do good and evil, but
they do not address the questions of why or how organizations might do good and evil. Turning
now to making our second case for why understanding organizations matters for the social
psychology of good and evil, we identify various features of organizations that can facilitate both
outcomes. These features are not necessarily unique to organizations, but they are key features of
organizations, and they support our claim that organizations are not simply reducible to individuals
or even small groups. Rather, organizations, particularly in terms of the horrific evils they may
perpetrate and the widespread good they can produce, are emergent phenomena, albeit phenomena
grounded in the dynamics of individuals and groups.
Before proceeding, we must express our considerable gratitude to John Darley (1992, 2001, 2005).
There is no doubt that most of what we have to say is borrowed, directly or indirectly, from Darley’s
(1992) review of several important works (e.g., Arendt, 1963; Kelman & Hamilton, 1989; Lifton,
1986; Milgram, 1974) and, more importantly, the creative insights he drew from this collection. For
instance, regarding the atomistic fallacy, Darley (1992), in focusing on the social production of evil,
asserted that people tend to see that “behind evil actions lay evil doers who can be identified as
possessing an inherent inward evilness” (p. 203). He goes on to argue that doing so “preserves our
belief in a just and ordered world” (p. 203), for the evil individual is viewed as an exception; an
outlier that can be dismissed or contained as a threat to the principles of order and justice. Darley
(1992) concluded that
individual-level psychology is largely irrelevant to the occurrence of a much more common
source of evil actions produced by what I call ‘organizational pathology’. We now need to
create…a psychology and sociology of how human institutions can purposely move or
accidently lurch toward causing these actions, somehow neutralizing, suspending, overriding, or
replacing the moral scruples of their members. That psychology will inevitably be a social and
organizational one, rather than one centered on the individual…. (p. 217)
As our argument unfolds below, the rationale for his conclusions will become apparent.
While some organizations may be flatter with fewer hierarchical levels and others taller with more,
all organizations are hierarchical by definition (e.g., Blau, 1968). Hierarchies are central to
understanding how organizations produce evil, and we contend, also good. This is so, in part,
because hierarchical position tends to co-vary naturally, for example, with power, status1 and what
economists and sociologists call “decision rights” (Zuckerman, 2010). Decision rights reside at the
top of organizational hierarchies but may be loaned to (but not owned by) those lower in the
hierarchy (Baker, Gibbons, & Murphy, 1999). These rights are very broad, including, for example,
hiring, training, job design, sourcing, capital, operating procedures, pricing, advertising, and product
design (Baker et al., 1999). Of course, some of these decisions entail those higher in the hierarchy
issuing orders to those below.
By examining the bases of power (French & Raven, 1959) embedded in superior hierarchical
positions, the import of hierarchy becomes even more apparent. Superior hierarchical positions
entail legitimate power by definition (Emerson, 1962). For some individuals, especially those with
politically conservative views, complying with this authority represents a moral obligation (e.g., Haidt,
2007). And, as noted by Darley (1992), Milgram (1974) argued that because of the necessity for
organized action, humans have evolved to be obedient to authority. While legitimate power does not
always produce obedience, it obviously can be a potent force in the good and evil actions of
organizations. Also potent are coercive power (influencing subordinates through punishments such
as verbal abuse, reduction in paid hours worked, threats of firing, and even dismissal) and reward
power (influencing subordinates through rewards such as praise, pay increases, and promotions).
Those in superior positions can and do materially and otherwise reward and punish their subordinates.
In organizations, what power produces is obedience among lower-level organizational members (e.g.,
Brief, Buttram, Elliott, Reizenstein, & McCline, 1995; Brief, Dietz, Cohen, Pugh, & Vaslow, 2000;
Kelman & Hamilton, 1989), perhaps in proportion to the economic need of those below: those
whose families depend on their employment and those living on the margins (e.g., Brief & Aldag,
1989, 1994; Brief, Konovsky, George, Goodwin, & Link, 1995). According to Milgram (1974), this
occurs because people have defined themselves in a manner that renders them open to regulation:
“… the individual no longer views himself as responsible for his own actions but defines himself as
an instrument for carrying out the wishes of others...” (p. 134). Simon (1976) was even more blunt:
“personal considerations determine whether a person will participate [in organizations], but if he
decides to participate, they will not determine the content of his organizational behavior” (p. 203).
Consistently, the sociologists Hamilton and Sanders (1992, p. 49) observed that persons in
subordinate organizational positions do not see their situation as one of choice “but of role
requirements and obligations.”
Referent and expert bases of power (French & Raven, 1959) also can be seen to bolster the potential
influence of those in superior hierarchal positions. Lower organizational participants can identify
with those higher up. Such identification, which leads to referent power, could be rooted in the
higher material rewards and status possessed by those higher up the hierarchy. Finally, due to the
knowledge that organizational superiors have about the coordination of complex tasks, they also may
have power based on their expertise (Darley, 2001).
1 For more on hierarchy, power, and status co-varying “naturally” in organizations see Magee and Galinsky (2008).
Relevant to our essay, a seemingly common way of manipulating power in the social psychology
laboratory is to have participants recall an incident when they had power over another individual or
individuals and to describe that situation and how they felt (e.g., Galinsky, Gruenfeld, & Magee,
2003). While this manipulation and others like it clearly address how a person with power feels, it is
unknown to what extent the power episodes recalled by individuals relate to power as it exists in
organizations. Moreover and obviously, the exercise of real and lasting influence over others on a
daily basis as entailed in a superior hierarchical position may not equate to the conditions commonly
created in the laboratory. Of course, this ultimately is an empirical question (also see Darley, 2001,
on such concerns).
Often the exercise of reward and punishment power within organizations involves making rewards
and punishments contingent on some measure of performance, commonly defined in terms of goal
attainment (Locke & Latham, 1990). While this can be an effective motivational tool, it can also lead
to wrongdoing (Schweitzer, Ordóñez, & Douma, 2004). Ordóñez, Schweitzer, Galinsky, and
Bazerman (2009) give these examples:
At Sears’ automotive unit, employees charged customers for unnecessary repairs in order to
meet specific, challenging goals. In the late 1980s, Miniscribe employees shipped actual bricks to
customers instead of disk drives to meet shipping targets. And in 1993, Bausch and Lomb
employees falsified financial statements to meet earnings goals. In each of these cases, specific,
challenging goals motivated employees to engage in unethical behavior. (p. 10)
Further, the exercise of power can create accountability, “…the implied or explicit expectation
that one may be called upon to justify one’s beliefs, feelings, and actions to others” (Lerner &
Tetlock, 1999, p. 255). A considerable amount of data shows that when the views of those to whom
one is accountable are known, accountable individuals tailor their message to be in line with these
views (e.g., an auditor’s opinion reflecting the wishes of the client, see Buchman, Tetlock, & Reed,
1996; Hackenbrack & Nelson, 1996). Anticipating such findings, Tetlock (1985) argued that the
study of judgment and choice needs to be broadened to take into consideration the impact of social
and organizational context. Later, Tetlock and Mitchell (2010) similarly argued that understanding
moral decision making requires a “…macrosocial psychological perspective that stresses the efforts
of social beings to preserve their claims to desired identities in the eyes of the constituencies to
whom they feel accountable” (p. 207). As examples of research consistent with this perspective, they
cite Aquino, Freeman, Reed, Lim, and Felps (2009) and Warren and Smith-Crowe (2008). The
former project focuses on the interactive effects of individuals’ moral identity and situational primes
and incentives on moral judgment and behavior; the latter focuses on the effects of internal
emotional processes and external sanctions on shifts in moral judgments. In contrast to these
examples, they characterize the typical research paradigm as entailing merely the observation of
“only one step of one partner” (p. 207), and, thus, as not accounting for the reality of social,
situational, and reciprocal influences.2
2 Likewise, the study of power in the social psychology lab may provide a limited, or even inaccurate, view of power as it
exists in organizations (see also Darley, 2001).
Others have made more general points along the same lines. Anteby (2013), in his ethnographic
study of Harvard Business School, puts it this way:
By constructing moral dilemmas as solely individual decision-making problems, we often forget
that individuals operate in collective settings. Yet moral conduct rarely occurs ex cathedra or in
context-free behavioral laboratories, as some scholarship might lead us to believe…. If we wish
to discuss individuals’ moral conduct we need in particular to look into the organizational
conditions that give rise to or hinder such conduct. (p. 125)
Palmer and Yenkey (in press) drew on Granovetter’s (1985) concepts of under-socialized and over-
socialized theories of behavior to make the point that disregarding context is not the only possible
form of myopia. They argue that much research on wrongdoing in organizations falls into one of
two categories: either under-socialized, roughly where individuals are assumed essentially to follow
their self-interest in a social vacuum, or over-socialized, roughly where individuals are assumed to
enact the assumptions, values, beliefs, and norms of the groups in which they are embedded. In an
effort to study organizational wrongdoing from a more integrated perspective, they conducted a
multi-level study of the use of illegal performance-enhancing drugs among 198 cyclists embedded in
22 teams in the 2010 Tour de France. They theorized and found support for the hypothesis that
individuals occupying roles within their teams that are crucial to their individual and team success
are more likely to engage in wrongdoing due both to the very high performance expectations placed
on them by subordinates, peers, and superiors and to their own drive to perform at a high level.
We turn next to discuss further common sources of expectations in organizations and how
individuals come to learn of these expectations.
We borrow the term “tone at the top” from the accounting literature where it commonly refers to
“the ethical atmosphere created in the workplace by the organization’s leadership” (Association of
Certified Fraud Examiners, n.d., p. 1). The term was introduced by the National Commission on
Fraudulent Financial Reporting (known as the Treadway Commission), which claimed it is a prime
causal factor in fraudulent behavior and financial statement fraud (but see Brief, Dukerich,
Brown, and Brett, 1996, who found null results for the effects of codes of conduct on willingness to
engage in financial fraud). Tone at the top encompasses the normative idea of “ethical leadership”
referring to how leaders ought to behave both in terms of demonstrating appropriate conduct and
promoting such conduct among followers (Brown & Treviño, 2006; Ciulla, 2004). The empirical
evidence, however, is mixed. Kish-Gephart, Harrison, and Treviño (2010) found no significant
independent effect of code of conduct existence on ethical choice in their meta-analysis. Yet more
recently there is evidence that formal systems, including codes of conduct, are more likely to be
effective when they are reacting against informal systems that promote unethical behavior (Smith-
Crowe et al., in press). Further, Treviño, Den Nieuwenboer, and Kish-Gephart (2014) speculated
that codes of conduct could be rendered effective if employees were to sign pledges before acting.
Their speculation is based on an extrapolation of Shu, Mazar, Gino, Ariely, and Bazerman’s (2012)
finding that pledging honesty before acting made ethics more salient and reduced dishonesty. Also
several studies have demonstrated significant, positive correlations between measures of ethical
leadership and such follower outcomes as job satisfaction, affective organizational commitment, and
work engagement (Treviño et al., 2014). Importantly, however, it seems very few such survey
research studies have shown significant relationships between ethical leadership and the ethicality of
follower behavior (Detert, Treviño, Burris & Andiappan, 2007; for exceptions see Mayer, Aquino,
Greenbaum, & Kuenzi, 2012; Mayer, Kuenzi, Greenbaum, Bardes & Salvador, 2009).
Another way to consider tone at the top is through the lens of organizational climate – shared
perceptions of the goals management wants accomplished and the appropriate means for
accomplishing them, and the organizational rewards employees can expect in return for pleasing
management (e.g., Kopelman, Brief & Guzzo, 1990; Reichers & Schneider, 1990).3 Organizational
climate necessarily has a referent, for instance a climate for ethics (Victor & Cullen, 1987, 1988).
Mayer, Kuenzi and Greenbaum (2010), for example, using a sample of employees and their
supervisors drawn from 300 organization units, reported a statistically significant, negative
relationship (r = -.29, p < .001) between organizational ethical climate (e.g., “department employees
have a lot of skills in recognizing ethical climate”) and employee misconduct (e.g., “said or did
something to purposely hurt someone at work”; Robinson & O’Leary-Kelly, 1998, p. 663). More
recently, Arnaud and Schminke (2012) studied 648 individuals in 117 organization units, gauging two
dimensions of organizational ethical climate (self-focused and other-focused reasoning) and ethical
behavior (e.g., “padding an expense account”, Treviño & Weaver, 2001, p. 659). They found the
self-focused climate dimension correlated positively with unethical behaviors and the other-focused
dimension correlated negatively. Thus, research on a climate for ethics indicates that tone at the top
influences ethical outcomes in organizations.
While research on organizational climate deals with perceptions of what is valued and expected by
management, research on socialization deals with how organizational members come to have these
perceptions (Chao, 2012). That is, socialization entails learning the value systems and norms of the
organization (Darley, 1996; Palmer, 2012; Schein, 2004), which obviously could support the
production of good or evil. Building, for instance, on the works of Brief, Buttram, and Dukerich
(2001) and Darley (2001), Ashforth and Anand (2003) perhaps provide the most comprehensive
conceptual treatment of organizational socialization resulting in corrupt behaviors.4 They argue that
groups within organizations often create a psychological, and even a physical, “social cocoon,” a
micro culture where the norms may be very different from those of the wider organization or even
the society in which the organization is embedded (Greil & Rudy, 1984; also see Cressey, 1986;
Sutherland, 1949). Consistent with this idea, Aven (in press) found that corruption at Enron relied
on far smaller social networks than did legitimate projects, which involved six times as many
organizational members. Those members of corrupt networks initially communicated sparingly as
each communication regarding corrupt activity entailed the risk of detection. Over time, however,
they learned to trust each other and became more “cavalier in their communications” (Aven, in
press, p. 28) – less secretive and more communicative.
In addition to developing trust, organizational members can be corrupted via role modeling,
ideology, framing, reinforcement, and punishment. For instance, Palmer and Yenkey (in press)
found that proximity to Tour de France cyclists unpunished for illegal performance-enhancing drug
use was positively related to drug use, while proximity to cyclists punished for use of illegal drugs
was negatively related to drug use.
3 Related to the concept of organizational climate is organizational culture. Yet the notion of culture is confused: “[f]or
every definition of what culture is there is an important contrary view” (Schneider, Ehrhart, & Macey, 2013, p. 370).
Moreover, much has been written about quantitatively measuring climate at the organizational level of analysis
(Schneider et al., 2013). For these reasons, we focus on organizational climate.
4 Numerous case studies have appeared in the popular press describing how people are socialized into wrongdoing at
work, including accounts of Salomon Brothers (Lewis, 1989), Enron (McLean & Elkind, 2003), and Arthur Andersen
(Toffler, 2003).
Members can “surrender” to or be “seduced” by such
socialization (Moore & Gino, 2013): they can resist objectionable practices until yielding to them as
inevitable (i.e., surrendering) or they can submit to the powerful attraction of the benefits (material
and psychological) associated with engaging in them (i.e., being seduced). Members may also
participate in wrongdoing despite their strong misgivings because they relinquish their agency to an
authority figure, such as their boss (Werhane, Hartman, Archer, Englehartdt, & Pritchard, 2013).
Smith-Crowe and Warren (2014) theorize another possibility: well-meaning organizational members
may be guilted, shamed, or embarrassed into coming to think of corrupt practices as ethical. They
argue that in organizations in which corrupt practices are normative, members who do not engage in
these practices may be sanctioned for allegedly harming the organization because they are not doing
things as they should be done in the organization. These nonparticipants may then come to feel that
they have morally transgressed against the organization and that they should engage in these practices.
We are unaware of research on the socialization of doing good, but an example suggests that the
process may be more formal and sequenced (Van Maanen & Schein, 1979) than the socialization of
doing evil. One such structured socialization program is seen in the efforts of,
a publicly traded corporation that provides cloud computing applications. It is headquartered in
San Francisco and has 13,300 employees and $4
billion in sales (Forbes, 2014). Forbes named the world’s most innovative company
four years in a row, beginning in 2011. The company also is known for its philanthropic efforts. In
1999, when was incorporated, Foundation was created as a public
charity. The charity’s resources reflected the company’s 1/1/1 integrated corporate philanthropy
model committing 1 percent equity (1 percent of founding stock to offer grants and monetary
assistance to the needy), 1 percent time (for each employee, six paid days off a year to volunteer at a
nonprofit), and 1 percent product (donated to nonprofits) (Beato, 2014). Their five-day orientation
includes one day at the Foundation, where new employees are exposed to the 1/1/1 model not only
in a classroom setting but also by actually volunteering at a nonprofit. The volunteering portion of
the model is reinforced in a number of ways, including $1,000 Champion Grants for nonprofits
given in an employee’s name when that employee completes six full days of volunteering at the
nonprofit (Beato, 2014).
It is important for us to note at this point that while tone at the top and socialization tell
organizational members what they should be doing and hierarchy and power dynamics compel them
to heed these messages, none of these factors is deterministic. Organizational members are to some
degree autonomous beings who make choices. Their autonomy is arguably limited, however, not
only by these previously discussed features of organizations but also to the extent that individuals’
cognitive capacity is compromised. Next, we turn to a discussion of processes of a more insidious
kind that contribute particularly to organizations doing evil by limiting individuals’ cognitive capacity.
Here we consider the influence of organizations on members’ cognitive capacity via the notions of
System 1 and System 2 thinking (Stanovich, 1999; also see Kahneman & Frederick, 2002; Evans,
2008). System 1 thinking is characterized as unconscious, rapid, and automatic; System 2 thinking is
characterized as conscious, slow, and deliberate (e.g., Evans, 2008). The two systems can be active
concurrently, in competition for control of overt behavior (Kahneman & Frederick, 2002; also see
Baumeister, Masicampo, & Vohs, 2011).
Assuming a directive from above is judged to have moral content (for instance, the directive pertains
to donating time or money to a charity or it entails harming customers or clients) and System 2 is
engaged, recipients of the directive can be thought of as consciously choosing between right and
wrong. In such cases, a recipient’s moral character, “an individual’s characteristic patterns of
thought, emotion, and behavior associated with moral/ethical and immoral/unethical behavior”
(Cohen, Panter, Turan, Morse, & Kim, in press, p. 6), would likely come into play. This is to say that
individual differences matter. But, in organizations, the assumption that issues are readily identified
as having moral content, even among those with a strong moral character, often is questionable
(Brief et al., 2001; Brief et al., 2002; Darley, 2001; Gioia, 1992). Several reasons for this are evident.
First, for example, organizational superiors may intentionally obscure the moral content of their
directives (e.g., Bandura, 1999; Kreps & Monin, 2011; Tenbrunsel & Messick, 2004). One way this
might be done is by the use of “euphemistic labeling” (Bandura, 1990a, 1990b), or hygienic language.
Examples in business abound, at least pertaining to harmful acts: cotton dust in a textile factory
becomes “airborne particulates” (Jackall, 1988), financial fraud becomes “creative accounting,” and
mass firings become “right-sizing” (Kreps & Monin, 2011). Another way is to provide implicit
rather than explicit directives. As Yeager (1986) pointed out, in results-oriented environments, “The
implicit message received from top management may be that much more weight is attached to job
completion than to legal or ethical means of accomplishment” (p. 10). When a superior manager sets
a profit, sales, or market-share target for a subordinate and adds “and, I won’t take any excuses for
failure,” an implicit message is being sent.
A final mechanism for obscuring the moral content of a directive may be less intentional, relying
simply on division of labor. The division of a task into discrete subtasks understandably can limit the
perspectives taken by those assigned subtasks (e.g., Ashforth & Anand, 2003; Brief et al., 2001;
Darley, 1992; Kelman, 1973). Thus, lower organizational participants simply may not be able to see
the moral content of the subtasks they are performing. For instance, it is difficult to imagine that the
mill workers focused on producing the punch cards sold by IBM to Nazi Germany for use on IBM
census machines to identify, incarcerate, and murder millions of Jews and others (Black, 2001) really
understood their personal role in these atrocities. In contrast, IBM President Thomas Watson knew
well what the Nazis were doing with IBM’s technology; not only did IBM customize its
technology for the Nazis’ purposes, but Watson himself visited the Bergen-Belsen concentration camp
where one of IBM’s machines was in use.
One would anticipate that System 1 thinking is especially common in organizations whose members
are busy, rushed, and otherwise cognitively preoccupied (e.g., juggling multiple tasks at the same
time; Bazerman & Moore, 2013; also see Chugh, 2004). While both Systems 1 and 2 have their
advantages and disadvantages (including when System 2 monitors System 1), System 1, in particular,
is seen by some to be open to biases (Kahneman, 2003; Milkman, Chugh, & Bazerman, 2009; Soll,
Milkman, & Payne, in press). One of the biases that likely leads organizational members to
erroneous intuitive judgments regarding the morality of received directives is self-interest (Darley,
2005; but see Miller, 1999). According to Moore and Loewenstein (2004), “self-interest is automatic,
viscerally compelling, and often unconscious” (p. 189). Arendt (1978), the moral philosopher,
observed that an organizational member “for the sake of his pension, his life insurance, the security
of his wife and children [is] prepared to do literally anything” (p. 232).
More generally speaking, a number of related phenomena associated with System 1 thinking have
been labeled “bounded ethicality,” referring to people acting unethically without their own
awareness and failing to notice the unethical behavior of those around them (e.g., Bazerman &
Gino, 2012; Bazerman & Tenbrunsel, 2011; Chugh, Banaji, & Bazerman, 2005; Gino, Moore, &
Bazerman, 2009). One example is people acting in racist and sexist ways based upon their
implicit attitudes and associations without being aware of doing so (Banaji & Greenwald, 2013;
Chugh et al., 2005). Relatedly, Brief (2012) discusses the relevance of Haidt’s (2001) influential
theorizing to understanding organizational behavior. Essentially, Haidt argues that moral judgments
often take the form of emotionally laden intuitions (and, if need be, rationales for the judgments are
constructed post-hoc, rather than judgments being the product of rational thought processes).
Taking the case of judging the morality of a directive from above, we would expect, following
Haidt’s thinking, that directives evoking intuitions laden with positive emotions (e.g., pride) would
be judged the right thing to do, and those evoking negative emotions (e.g., shame) would be judged
wrong (see also Smith-Crowe & Warren, 2014; Warren & Smith-Crowe, 2008). Brief (2012) also notes that emotions may
be experienced unconsciously (Barsade, Ramarajan, & Westen, 2009).
Thus, organizational members’ cognitive capacity can be compromised, leaving them vulnerable to
engaging in unintended unethicality, or unethical outcomes resulting from an amoral decision-making
process, meaning that moral aspects of the situation are not considered (Tenbrunsel & Smith-
Crowe, 2008). For instance, Gioia (1992) recounts his and others’ lack of moral awareness around
the Ford Pinto case involving the 1970s car with a deadly design flaw. He notes the euphemistic
language (“condition” never “problem”) dictated by Ford’s legal department as well as the
“overwhelming complexity and pace” of his job as Ford’s Recall Coordinator. He argues that these
factors plus a corporate context that promoted business decisions based on cost-benefit analyses, in
which human injury and death could be simply monetized, acted to obscure the moral implications
of the dangers of the Pinto, which outside of the organizational context, he argues, would have been
visible to him.
Before I went to Ford I would have argued strongly that Ford had an ethical obligation to recall.
After I left Ford I now argue and teach that Ford had an ethical obligation to recall. But, while I
was there, I perceived no strong obligation to recall and I remember no strong ethical overtones to
the case whatsoever. (p. 388)
That unethicality can be unintended and even unnoticed in the face of severe consequences makes
these processes insidious in their influence: they can lead good people astray such that they end
up unknowingly doing things against their own conscience. Also insidious is the way that small
decisions and actions at the individual level can render significant evil at the organizational level.
Researchers of corruption have discussed this phenomenon in terms of a number of individuals
each making small contributions to what adds up to organizational corruption (e.g., Ashforth &
Anand, 2003; Brief et al., 2001). We suspect that organizations can achieve significant good through
a similar process. Next, we turn to the concept of emergence, our final point in support of our
contention that organizations are not simply reducible to the individuals and groups that compose them.
Emergent phenomena are those that arise from lower level phenomena. According to Kozlowski
and Klein (2000, p. 15), “many phenomena in organizations have their theoretical foundation in the
cognition, affect, behavior, and characteristics of individuals, which through social interaction,
exchange, and amplification have emergent properties that manifest at higher levels” (cf.
Kozlowski, Chao, Grand, Braun, & Kuljanin, 2013). They further contend that too often emergent
phenomena in organizations are treated as isomorphic to the lower-level phenomena in which they
are grounded, whereas isomorphism is the exception rather than the rule, suggesting that the
atomistic fallacy is a pervasive problem. Instead, emergent phenomena are conceptually different
from the lower-level phenomena from which they emerge (see Eidelson, 1997, for an
interdisciplinary review of emergent phenomena). This difference is in part due to the contextual
influences and constraints on emergence (Kozlowski & Klein, 2000), including the organizational
features discussed above: hierarchy and power, tone at the top and socialization, and moral
blindness and automaticity. Emergence is process oriented: it unfolds over time and is dynamic
(Kozlowski et al., 2013; Kozlowski & Chao, 2012).
Our thinking about emergence in the context of the social psychology of good and evil has been
influenced strongly by Martell, Emrich, and Robison-Cox’s (2012) emergent theory of vertical
gender segregation in organizations, which they define as “…the overrepresentation of one group
(e.g., men) versus another (e.g., women) in higher status managerial positions...” (p. 52). They argue
that vertical gender segregation is an emergent property of only small individual preferences for men
over women. Their argument is in response to the seemingly paradoxical findings that while gender
segregation at higher organizational levels is striking in its magnitude (e.g., there are more chief
executives named John than there are female chief executives; Wolfers, 2015), the magnitude of
individuals’ gender bias is only slight. That is, the gender disparity at the upper echelons of
organizations is not underlain by rabid misogynists bent on the oppression of women; rather, it
seems that individuals generally have a mild preference for men over women. These findings have
led some to conclude that gender bias does not explain vertical gender segregation in organizations.
Martell and colleagues, considering the problem from the perspective of emergence, argue that even
modest individual-level gender bias can produce significant aggregate discriminatory effects.
In part they rely on Schelling’s (1971) work on racial diversity in neighborhoods. Schelling ran
simulations to test the effects of individuals’ preferences for neighbors of the same or a different
race on overall patterns of neighborhood segregation. He constructed a “checkerboard” of 13 rows
and 16 columns. Except for those on the edges, individuals each have eight adjacent neighbors, and
each individual belongs to one of two equally represented racial categories. Schelling began by placing
individuals in a random pattern on the board with about 25-30% of the squares left empty to allow
individuals to move in order to satisfy their preferences. Assuming an individual level preference
(e.g., that 50% of the adjacent neighbors be of the same race), he then identified the dissatisfied
individuals and began moving them on the board to spaces that would accommodate their
preference. Of course, the more “demanding” the individual preference (i.e., the higher the
proportion of same-race neighbors preferred), the greater the resulting levels of overall
neighborhood segregation, but even relatively undemanding preferences for same-race neighbors
resulted in segregation. Applying Schelling’s simulation, for instance, one can see that even an
individual-level preference for 25% (i.e., 2 out of 8) of one’s neighbors to be same-race creates racial
segregation out of the initial random pattern. As Martell et al. (2012) point out, such segregation is
produced in the absence of any individual desire to live in a racially segregated neighborhood.
Schelling demonstrates further that to the extent that one racial group outnumbers another (as
Whites do relative to Blacks), individuals’ preferences to be adjacent to same-race neighbors
exacerbate the resulting segregation.
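Schelling’s checkerboard procedure described above can be sketched in a few dozen lines. The sketch below is ours, not Schelling’s exact program: the 13 × 16 grid, the two equally represented groups, and the eight-neighbor satisfaction rule follow the text, while the empty-cell fraction (27%), the one-agent-at-a-time move rule, and the stopping rule are illustrative assumptions.

```python
# Minimal sketch of Schelling's (1971) segregation simulation; parameter
# choices beyond those stated in the text are illustrative assumptions.
import random

ROWS, COLS = 13, 16
EMPTY = 0

def neighbors(grid, r, c):
    """Groups of the occupied cells among the (up to) eight adjacent squares."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < ROWS and 0 <= cc < COLS and grid[rr][cc] != EMPTY:
                out.append(grid[rr][cc])
    return out

def satisfied(grid, r, c, threshold):
    """True if at least `threshold` of the agent's occupied neighbors share its group."""
    nbrs = neighbors(grid, r, c)
    if not nbrs:  # an agent with no neighbors counts as satisfied
        return True
    same = sum(1 for n in nbrs if n == grid[r][c])
    return same / len(nbrs) >= threshold

def run(threshold=0.25, empty_frac=0.27, max_moves=5000, seed=1):
    rng = random.Random(seed)
    cells = [(r, c) for r in range(ROWS) for c in range(COLS)]
    n_empty = int(len(cells) * empty_frac)
    n_agents = len(cells) - n_empty
    # Two equally represented groups (1 and 2), placed at random.
    tokens = [1] * (n_agents // 2) + [2] * (n_agents - n_agents // 2) + [EMPTY] * n_empty
    rng.shuffle(tokens)
    grid = [[EMPTY] * COLS for _ in range(ROWS)]
    for (r, c), t in zip(cells, tokens):
        grid[r][c] = t
    # Repeatedly relocate one dissatisfied agent to an empty square where it
    # would be satisfied (falling back to any empty square).
    for _ in range(max_moves):
        unhappy = [(r, c) for (r, c) in cells
                   if grid[r][c] != EMPTY and not satisfied(grid, r, c, threshold)]
        if not unhappy:
            break
        r, c = rng.choice(unhappy)
        group = grid[r][c]
        grid[r][c] = EMPTY
        empties = [(rr, cc) for (rr, cc) in cells if grid[rr][cc] == EMPTY]

        def ok(cell):
            nbrs = neighbors(grid, cell[0], cell[1])
            return not nbrs or sum(1 for n in nbrs if n == group) / len(nbrs) >= threshold

        good = [e for e in empties if ok(e)]
        rr, cc = rng.choice(good if good else empties)
        grid[rr][cc] = group
    return grid

def mean_same_group_fraction(grid):
    """Aggregate segregation: average share of same-group neighbors across agents."""
    fracs = []
    for r in range(ROWS):
        for c in range(COLS):
            if grid[r][c] == EMPTY:
                continue
            nbrs = neighbors(grid, r, c)
            if nbrs:
                fracs.append(sum(1 for n in nbrs if n == grid[r][c]) / len(nbrs))
    return sum(fracs) / len(fracs)
```

Starting from a random placement, `mean_same_group_fraction` hovers near one half; running the model with even the undemanding 25% threshold pushes the measure upward, which is the emergent pattern Schelling described.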
Martell et al. (2012) further rely on signaling theory to explain how substantial vertical gender
segregation could be based on only mild gender bias at the individual level. Signaling theory suggests
that in situations of information asymmetry, where not everyone has access to the same information,
decision makers look to signals to fill in the gaps (Connelly, Certo, Ireland, & Reutzel, 2011). For
instance, employers cannot know in a very direct or definitive sense how motivated or intelligent a
job candidate is. Instead, they can rely on signals such as the prestige of the institution granting a
candidate’s bachelor’s degree. Martell et al. argue that the context of managerial promotions is
necessarily ambiguous because the content of and criteria for managerial work are ambiguous. As
such, signals become important. For instance, less positive performance appraisals for women, even
if the differences are slight, signal that women are less worthy of promotion. Failure to be promoted
can be compounded over time: being in a particular position for too long, appearing to be a late
bloomer, or generally moving too slowly up the corporate ladder all signal that one is unworthy of
promotion. Eventually, the lack of promotion may suggest that one should not even be
considered for future promotions. Thus, they argue that in the aggregate such processes produce
gender segregation at the top of the organization, despite the absence of any strong bias against
women. Indeed, Martell et al. suggest that these processes can be unintended, even invisible (see also
Ditomaso, 2012).
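The compounding of slight individual-level differences across promotion rounds can be illustrated with a simple Monte Carlo sketch in the spirit of Martell and colleagues’ argument. Everything here is an illustrative assumption of ours (the hierarchy depth, the score distribution, the size of the bonus, the promote-the-top-half rule), not an estimate from the literature; the point is only that a mild, constant bias at each decision can aggregate into a skewed top level.

```python
# Illustrative sketch: a small constant appraisal bonus for men, applied at
# every promotion round of a multi-level hierarchy, compounds into a
# disproportionately male top level. All parameters are assumptions.
import random

def male_share_at_top(levels=8, bottom_size=4096, bias=1.0, seed=7):
    rng = random.Random(seed)
    # Even gender split at the bottom of the hierarchy.
    cohort = ["M" if i % 2 == 0 else "F" for i in range(bottom_size)]
    for _ in range(levels - 1):
        # Appraisal = merit (identical distribution for everyone, mean 50,
        # SD 10) plus a mild bonus for men -- the individual-level bias.
        scored = [(rng.gauss(50, 10) + (bias if g == "M" else 0.0), g) for g in cohort]
        scored.sort(key=lambda t: t[0], reverse=True)
        cohort = [g for _, g in scored[: len(scored) // 2]]  # promote the top half
    return cohort.count("M") / len(cohort)
```

With `bias=0` the expected male share at the top is one half; with a bonus of even one tenth of a standard deviation, the advantage is applied anew at every round and the top of the hierarchy drifts male, despite no single decision looking strongly biased.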
Likewise, we assume that in many cases, organizations do not deliberately pursue the production of
evil. In contrast, our examples of organizations doing good represent a conscious intention to
produce good. Here too we assume emergence is a useful way to think about collective action and
outcomes, but whether the emergent processes that produce good would be the same as those that
produce evil is not something current research addresses.
We sincerely hope you have been persuaded that organizations matter, that a social psychology of
individuals or groups is inadequate for understanding when, why, or how organizations produce good
or evil. Rather, what is required is an organizational psychology, resting on and influenced by social
psychology. Historically, such social-organizational approaches to the study of organizations were
plentiful and influential (e.g., Katz & Kahn, 1966; Porter, Lawler, & Hackman, 1975; Weick, 1979).
Today this is considerably less the case, with social psychologists interested in organizational
phenomena seeking to generalize an individual-based social psychology developed via experiments
conducted in the laboratory to the complex world of organizations. We think this shift is notable not
because we do not value laboratory experiments or decontextualized research. We value both
greatly. Indeed, some of our own work falls into these categories, and these kinds of research
certainly inform our thinking and research more generally. Rather we think this shift is notable
because laboratory experiments and decontextualized research are simply not sufficient for
understanding organizations as Darley (e.g., 1992) has so clearly argued.
Given the tremendous good and evil organizations are capable of producing, we urge social
psychology to once again turn its astute eye towards organizations. While there are obvious and
great needs for the development of fundamental theory and methodological innovations, it is
important to strive for more. That “more” is the formulation and evaluation of organizational
interventions designed to promote doing good and to inhibit behaving badly. For instance, it is one
thing to demonstrate in a series of laboratory experiments that perspective-taking reduces prejudice
(e.g., Galinsky & Moskowitz, 2000) and another to devise a perspective-taking intervention to be
implemented in a large, complex organization and to empirically ascertain if that intervention yielded
a meaningful decrease in prejudice and related behaviors. In other words, one needs to know if the
intervention caused a reduction in prejudice. Therefore, we are calling for social psychologists to take
a stronger turn towards field experimentation like their counterparts in behavioral economics (e.g.,
Harrison & List, 2004; Levitt & List, 2009; List, 2011). In the end, good and evil, in their many
forms, are too crucial for social psychologists to limit themselves by choice of theoretical perspective
(individuals, groups, and/or organizations), or research setting (laboratory or field).
Alker, H. R. (1969). A typology of ecological fallacies. In M. Dogan & S. Rokkan (Eds.),
Quantitative ecological analysis in the social sciences (pp. 69-86). Cambridge, MA: MIT Press.
Anteby, M. (2013). Manufacturing morals: The values of silence in business school education. Chicago: University of
Chicago Press.
Aquino, K., Freeman, D., Reed, A., II, Lim, V. K. G., & Felps, W. (2009). Testing a social-cognitive
model of moral behavior: The interactive influence of situations and moral identity centrality. Journal of
Personality and Social Psychology, 97, 123-141.
Arendt, H. (1963). Eichmann in Jerusalem: A report on the banality of evil. New York: Penguin Books.
Arendt, H. E. (1978). The life of the mind. New York: Harcourt Brace Jovanovich.
Arnaud, A. & Schminke, M. (2012). The ethical climate and context of organizations: A comprehensive
model. Organization Science, 23, 1767-1780.
Ashforth, B., & Anand, V. (2003). The normalization of corruption in organizations. Research in
Organizational Behavior, 25, 1-52.
Association of Certified Fraud Examiners. (n.d.). Tone at the top. Retrieved from
Aven, B. L. (in press). The paradox of corrupt networks: An analysis of organizational crime at Enron.
Organization Science.
Baker, G., Gibbons, R., & Murphy, K. (1999). Informal authority in organizations. Journal of Law,
Economics, and Organization, 15, 56-73.
Banaji, M., & Greenwald, A. (2013). Blindspot: Hidden biases of good people. New York, NY: Random House.
Bandura, A. (1990a). Mechanisms of moral disengagement. In W. Reich (Ed.), Origins of terrorism:
Psychologies, ideologies, theologies, states of mind (pp. 161-191). Cambridge, UK: Cambridge University Press.
Bandura, A. (1990b). Selective activation and disengagement of moral control. Journal of Social Issues,
46, 27-46.
Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social
Psychology Review, 3, 193-209.
Barsade, S. G., Ramarajan, L., & Westen, D. (2009). Implicit affect in organizations. Research in
Organizational Behavior, 29, 135-162.
Barstow, D. (2012, April 21). Vast Mexico bribery case hushed up by Wal-Mart after top-level struggle.
The New York Times.
Baumeister, R., Masicampo, E., & Vohs, K. (2011). Do conscious thoughts cause behavior? Annual
Review of Psychology, 62, 331-361.
Bazerman, M. H., & Gino, F. (2012). Behavioral ethics: Toward a deeper understanding of moral
judgment and dishonesty. Annual Review of Law and Social Science, 8, 85-104.
Bazerman, M. H., & Moore, D. A. (2013). Judgment in managerial decision making. (8th ed.). Hoboken, NJ:
John Wiley & Sons.
Bazerman, M. H., & Tenbrunsel, A. E. (2011). Blind Spots: Why we fail to do what’s right and what to do about it.
Princeton, NJ: Princeton University Press.
Beato, G. (2014). Growth force. Stanford Social Innovation Review. Retrieved from
Black, E. (2001). IBM and the Holocaust: The strategic alliance between Nazi Germany and America’s most powerful
corporation. New York: Crown Publishers.
Blackmon, D. A. (2008). Slavery by another name: The re-enslavement of black Americans from the Civil War to
World War II. New York: Anchor Books.
Blau, P. M. (1968). The hierarchy of authority in organizations. American Journal of Sociology, 73, 453-467.
Brief, A. (2012). The good the bad and the ugly: What behavioral business ethics researchers ought
to be studying. In D. DeCremer & Tenbrunsel, A. (Eds.), Behavioral business ethics & shaping an emerging
field. New York: Routledge.
Brief, A. P., & Aldag, R. J. (1989). The economic functions of work. In K. Rowland and G. R. Ferris
(Eds.), Research in personnel and human resources management. Greenwich, CT: JAI Press.
Brief, A. P. & Aldag, R. J. (1994). The study of work values: A call for a more balanced perspective. In I.
Borg and P. P. Mohler (Eds.), Trends and perspectives in empirical social research. Berlin: Walter de Gruyter.
Brief, A. P., Buttram, R. T., Elliot, J. D., Reizenstein, R. M., & McCline, R. L. (1995). Releasing the beast:
A study of compliance with orders to use race as a selection criterion. Journal of Social Issues, 51, 177-
Brief, A. P., Dukerich, J., Brown, P., & Brett, J. (1996). What’s wrong with the Treadway commission
report? Experimental analyses of the effects of personal values and codes of conduct on fraudulent
financial reporting. Journal of Business Ethics, 15, 183-198.
Brief, A. P., Dietz, J., Cohen, R., Pugh, D., & Vaslow, J. (2000). Just doing business: Modern racism
and obedience to authority as explanations for employment discrimination. Organizational Behavior and
Human Decision Processes, 81, 72-97.
Brief, A. P., Buttram, R., & Dukerich, J. (2001). Collective corruption in the corporate world: Toward
a process model. In M. E. Turner (Ed.), Groups at work: Advances in theory and research. Hillsdale, NJ:
Lawrence Erlbaum & Associates.
Brief, A. P., Konovsky, M. A., George, J., Goodwin, R., & Link, K. (1995). Inferring the meaning of
work from the effects of unemployment. Journal of Applied Social Psychology, 25, 693-711.
Brown, M., & Treviño, L. (2006). Ethical leadership: A review and future directions. Leadership
Quarterly, 17, 595-616.
Buchman, T., Tetlock, P. E., & Reed, R. O. (1996). Accountability and auditors' judgments about
contingent events. Journal of Business Finance and Accounting, 23, 379-398.
Chao, G. (2012). Organizational socialization: Background, basics, and a blueprint for adjustment at
work. In Steve Kozlowski (Ed.), The Oxford handbook of organizational psychology (pp. 579-614). Oxford,
UK: Oxford University Press.
Chugh, D. (2004). Societal and managerial implications of implicit social cognition: Why milliseconds
matter. Social Justice Research, 17, 203-222.
Chugh, D., Banaji, M., & Bazerman, M. (2005). Bounded ethicality as a psychological barrier to
recognizing conflicts of Interest. In Moore, D., Cain, D., Loewenstein, G., & Bazerman, M. (Eds.),
Conflicts of interest: Challenges and solutions in business, law, medicine, and public policy. New York, NY:
Cambridge University Press.
Ciulla, J. (2004). Ethics, the heart of leadership. Westport, CT: Praeger.
Clifford, S. (2013, May 28). Wal-Mart is fined $82 million over mishandling of hazardous wastes. The New
York Times.
Cline, E. L. (2012). Overdressed: The shockingly high cost of cheap fashion. New York:
Cohen, R. (1999, Nov. 16). Germany adds $555 million to offer in Nazi slave cases. The New York Times.
Cohen, T., Panter, A., Turan, N., Morse, L., & Kim, Y. (in press). Moral character in the workplace.
Journal of Personality and Social Psychology.
Connelly, B. L., Certo, S. T., Ireland, R. D., & Reutzel, C. R. (2011). Signaling theory: A review and
assessment. Journal of Management, 37, 39-67.
Cressey, D. (1986). Why managers commit fraud. Australian and New Zealand Journal of Criminology, 19, 195-
Darley, J. (1992). Social organization for the production of evil. Psychological Inquiry, 3, 199-218.
Darley, J. (2001). The dynamics of authority influence in organizations and the unintended action
consequences. In J. Darley, D. M. Messick, & T. R. Tyler (Eds.), Social influences of ethical behavior
in organizations (pp. 37-52). New Jersey: Lawrence Erlbaum Associates.
Darley, J. (2005). The cognitive and social psychology of contagious organizational corruption. Brooklyn
Law Review, 70, 1177-1194.
Department of Homeland Security. (2014). Definition of human trafficking. Retrieved from
Detert, J.R., Treviño, L.K., Burris, E.R., & Andiappan, M. (2007). Managerial modes of influence and
counterproductivity in organizations: A longitudinal business-unit-level investigation. Journal of
Applied Psychology, 92(4), 993-1005.
Ditomaso, N. (2012). The American non-dilemma: Racial inequality without racism. Ithaca, NY: Cornell
University Press.
Editorial Board. (2014, Dec. 4). Bhopal’s deadly legacy. The New York Times.
Eidelson, R. J. (1997). Complex adaptive systems in the behavioral and social sciences. Review of General
Psychology, 1, 42-71.
Ehrenreich, B. (2001). Nickel and dimed: On (not) getting by in America. New York: Picador.
Emerson, R. (1962). Power-dependence relations. American Sociological Review, 27(1) 31-41.
Environmental Protection Agency. (n.d.). Municipal solid waste generation, recycling and disposal in the
United States: Facts and figures for 2012. Retrieved from
Etzioni, A. (1964). Modern organizations. Englewood Cliffs, NJ: Prentice Hall.
Evans, J. (2008). Dual processing accounts of reasoning, judgment, and social cognition. Annual Review of
Psychology, 59, 255-278.
Forbes. (2014). The world’s most innovative companies. Retrieved from
French, J. R. P., Jr., & Raven, B. H. (1959). The bases of social power. In D. Cartwright (Ed.), Studies
in social power (pp. 150-167). Ann Arbor, MI: Institute for Social Research.
Galinsky, A. D., Gruenfeld, D. H., & Magee, J. C. (2003). From power to action. Journal of Personality and
Social Psychology, 85, 453-466.
Galinsky, A. D., & Moskowitz, G. B. (2000). Perspective-taking: Decreasing stereotype expression,
stereotype accessibility, and in-group favoritism. Journal of Personality and Social Psychology, 78, 708-724.
Gino, F., Moore, D. A. & Bazerman, M. H. (2009). See no evil: Why we fail to notice unethical
behavior. In R. M. Kramer, A. E. Tenbrunsel and M. H. Bazerman (Eds.), Social decision making:
Social dilemmas, social values, and ethical judgments (pp. 241-263). New York, NY: Psychology Press.
Gioia, D. (1992). Pinto fires and personal ethics: A script analysis of missed opportunities. Journal
of Business Ethics, 11, 379-389.
Granovetter, M. S. (1985). Economic action and social structure: The problem of embeddedness.
American Journal of Sociology, 91, 481-493.
Greenhouse, S. (2014, Dec. 10). Walmart illegally punished workers, judge rules. The New York Times.
Greil, A.L., & Rudy, D.R. (1984). Social cocoons: Encapsulation and identity transformation
organizations. Sociological Inquiry, 54, 260-278.
Hackenbrack, K., & Nelson, M. W. (1996). Auditors’ incentives and their application of financial
accounting standards. The Accounting Review, 71(1), 43-59.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral
judgment. Psychological Review, 108, 814-834.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998-1002.
Hall, R. (1977). Organizations: Structure and process. Englewood Cliffs, NJ: Prentice Hall.
Hamilton, V., & Sanders, J. (1992). Responsibility and risk in organizational crimes of obedience. Research
in Organizational Behavior, 14, 49-90.
Harrison, G. W., & List, J. A. (2004). Field experiments. Journal of Economic Literature, 42, 1009-1055.
Hess, A. E. M. (2013, Aug. 22). The 10 largest employers in America. USA Today.
Hines, A. (2012, June 6). Walmart sex discrimination claims filed by 2,000 women. Huffington Post.
Jackall, R. (1988). Moral mazes: The world of corporate managers. New York: Oxford University Press.
Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics.
American Economic Review, 93, 1449-1475.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive
judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of
intuitive judgment (pp. 49-81). New York, NY: Cambridge University Press.
Katz, D., & Kahn, R. L. (1966). The social psychology of organizations. New York: Wiley.
Kelman, H. (1973). Violence without moral restraint: Reflections on the dehumanization of victims
and victimizers. Journal of Social Issues, 29, 25-61.
Kelman, H., & Hamilton, V. (1989). Crimes of obedience. New Haven, CT: Yale University Press.
Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels:
Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95(1),
Kopelman, R. E., Brief, A. P., & Guzzo, R. A. (1990). The role of climate and culture in productivity.
Organizational Climate and Culture, 282-318.
Kozlowski, S. W. J., Chao, G., Grand, J., Braun, M., & Kuljanin, G. (2013). Advancing multilevel research
design: Capturing the dynamics of emergence. Organizational Research Methods, 16, 581-615.
Kozlowski, S. W. J., & Chao, G. (2012). The dynamics of emergence: Cognition and cohesion in work
teams. Managerial and Decision Economics, 33, 335-354.
Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations:
Contextual, temporal, and emergent processes. In K. J. Klein and S. W. J. Kozlowski (Eds.), Multilevel
theory, research, and methods in organizations (pp. 3-90). San Francisco: Jossey-Bass.
Kreps, T., & Monin, B. (2011). Doing well by doing good? Ambivalent moral framing in organizations.
Research in Organizational Behavior, 31, 99-123.
Landwehr, D. (2013, Sept. 22). Don’t buy this jacket (Web blog post). Retrieved from
Latonero, M., et al. (2012). The rise of mobile and the diffusion of technology-facilitated trafficking.
Retrieved from
Lerner, J. S., & Tetlock, P. E. (1999). Accounting for the effects of accountability. Psychological Bulletin,
125, 255-275.
Levitt, S. D., & List, J. A. (2009). Field experiments in economics: The past, the present, and the future.
European Economic Review, 53, 1-18.
Lewis, M. (1989). Liar’s poker: Rising through the wreckage on Wall Street. New York, NY: W.W. Norton.
Lifton, R. J. (1986). The Nazi doctors: Medical killing and the psychology of genocide. New York, NY: Basic Books.
List, J. A. (2011). Why economists should conduct field experiments and 14 tips for pulling one off.
Journal of Economic Perspectives, 25, 3-15.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ:
Prentice Hall.
Magee, J. C., & Galinsky, A. D. (2008). Social hierarchy: The self-reinforcing nature of power and
status. Academy of Management Annals, 2(1), 351-398.
Martell, R. F., Emrich, C. G., & Robison-Cox, J. (2012). From bias to exclusion: A multilevel emergent
theory of gender segregation in organizations. Research in Organizational Behavior, 32, 137-162.
Martin, D. (2014, Oct. 30). Warren Anderson, 92, dies; Faced India plant disaster. The New York Times.
Mayer, D. M., Aquino, K., Greenbaum, R. L., & Kuenzi, M. (2012). Who displays ethical leadership, and
why does it matter? An examination of antecedents and consequences of ethical leadership. Academy of
Management Journal, 55(1), 151-171.
Mayer, D. M., Kuenzi, M., & Greenbaum, R. L. (2010). Examining the link between ethical leadership
and employee misconduct: The mediating role of ethical climate. Journal of Business Ethics, 95, 7-16.
Mayer, D. M., Kuenzi, M., Greenbaum, R., Bardes, M., & Salvador, R. (2009). How low does ethical
leadership flow? Test of a trickle-down model. Organizational Behavior and Human Decision Processes, 108,
McLean, B., & Elkind, P. (2003). The smartest guys in the room: The amazing rise and scandalous fall of Enron.
New York, NY: Portfolio.
Microsoft. (2012). Shedding light on the role of technology in child sex trafficking. Retrieved from
Microsoft Research. (2011). The role of technology in human trafficking RFP. Retrieved from
Milgram, S. (1974). Obedience to authority. New York: Harper & Row.
Milkman, K., Chugh, D., & Bazerman, M. (2009). How can decision making be improved? Perspectives of
Psychological Science, 4, 379-383.
Miller, D. (1999). Principles of social justice. Cambridge, MA: Harvard University Press.
Moore, C., & Gino, F. (2013). Ethically adrift: How others pull our moral compass from True North,
and how we can fix it. Research in Organizational Behavior, 33, 53-77.
Moore, D. A., & Loewenstein, G. (2004). Self-interest, automaticity, and the psychology of conflict of
interest. Social Justice Research, 17(2), 189-202.
Nudd, T. (2011, Nov. 28). Ad of the day: Patagonia. Adweek.
Ordóñez, L. D., Schweitzer, M. E., Galinsky, A. D., & Bazerman, M. H. (2009). Goals gone wild: The
systematic side effects of overprescribing goal setting. The Academy of Management Perspectives, 23(1), 6-
Palmer, D. (2012) Normal organizational wrongdoing: A critical analysis of theories of misconduct in and by
organizations. Oxford, UK: Oxford University Press.
Palmer, D., & Yenkey, C. (in press). Drugs, sweat, and gears: An organizational analysis of performance
enhancing drug use in the 2010 Tour de France. Social Forces.
Porter, L. W., Lawler, E. E., & Hackman, J. R. (1975). Behavior in organizations. New York: McGraw Hill.
Reichers, A. E., & Schneider, B. (1990). Climate and culture: An evolution of constructs. Organizational
Climate and Culture, 1, 5-39.
Robinson, S. L., & O'Leary-Kelly, A. M. (1998). Monkey see, monkey do: The influence of work groups
on the antisocial behavior of employees. Academy of Management Journal, 41(6), 658-672.
Schein, E. H. (2004). The role of the founder in creating organizational culture. Modern Classics on
Leadership, 443.
Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1, 143-186.
Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review
of Psychology, 64, 361-388.
Schweitzer, M. E., Ordóñez, L., & Douma, B. (2004). Goal setting as a motivator of unethical
behavior. Academy of Management Journal, 47(3), 422-432.
Scott, W. R., & Davis, G. F. (2007). Organizations and organizing. Upper Saddle River, NJ: Pearson Prentice Hall.
Shu, L. L., Mazar, N., Gino, F., Ariely, D., & Bazerman, M. H. (2012). Signing at the beginning makes ethics
salient and decreases dishonest self-reports in comparison to signing at the end. Proceedings of the
National Academy of Sciences, 109, 15197-15200.
Simon, H. A. (1964). On the concept of organizational goal. Administrative Science Quarterly, 9, 1-22.
Simon, H. A. (1976). From substantive to procedural rationality. In T. J. Kastelein,
S. K. Kuipers, W. A. Nijenhuis, & G. R. Wagenaar (Eds.), 25 years of economic theory (pp. 65-86).
Oxford, UK: Oxford University Press.
Smith-Crowe, K., Tenbrunsel, A. E., Chan-Serafin, S., Brief, A. P., Umphress, E. E., & Joseph, J.
(forthcoming). The ethics “fix”: When formal systems make a difference. Journal of Business Ethics.
Smith-Crowe, K., & Warren, D. E. (2014). The emotion-evoked collective corruption model: The role of
emotion in the spread of corruption within organizations. Organization Science, 25, 1154-1171.
Soll, J. B., Milkman, K. L., & Payne, J. W. (in press). A user's guide to debiasing. In G. Keren & G. Wu
(Eds.), Wiley-Blackwell handbook of judgment and decision making.
Stanovich, K. E. (1999). Who is rational? Studies of individual differences in reasoning. Mahwah, NJ: Lawrence
Erlbaum Associates.
Sutherland, E. H. (1949). White collar crime. New York, NY: Dryden Press.
Tenbrunsel, A. E., & Messick, D. M. (2004). Ethical fading: The role of self-deception in unethical
behavior. Social Justice Research, 17(2), 223-236.
Tenbrunsel, A. E., & Smith-Crowe, K. (2008). Ethical decision making: Where we’ve been and where
we’re going. Academy of Management Annals, 2, 545-607.
Tetlock, P. E. (1985). Accountability: The neglected social context of judgment and choice. Research in
Organizational Behavior, 7, 297-332.
Tetlock, P. E., & Mitchell, G. (2010). Situated social identities constrain morally defensible choices:
Commentary on Bennis, Medin, & Bartel (2010). Perspectives on Psychological Science, 5, 206-208.
The Cleanest Line. (n.d.). Introducing the Common Threads Initiative: Reduce, repair, reuse, recycle,
reimagine. Retrieved from
Toffler, B. L. (2003). Final accounting: Ambition, greed, and the fall of Arthur Andersen. New York, NY: Random House.
Treviño, L. K., & Weaver, G. R. (2001). Organizational justice and ethics program "follow-through":
Influences on employees' harmful and helpful behavior. Business Ethics Quarterly, 11(4), 651-671.
Treviño, L. K., den Nieuwenboer, N. A., & Kish-Gephart, J. J. (2014). (Un)ethical behavior in organizations.
Annual Review of Psychology, 65, 635-660.
Trottman, M., & Banjo, S. (2015, Jan. 15). Wal-Mart accused of violating workers' rights. Wall Street Journal.
Van Maanen, J., & Schein, E. H. (1979). Toward a theory of organizational socialization. Research in
Organizational Behavior, 1, 209-264.
Victor, B., & Cullen, J. B. (1987). A theory and measure of ethical climate in organizations. In W. C.
Frederick & L. Preston (Eds.), Research in corporate social performance and policy (pp. 51-71). London, UK:
JAI Press.
Victor, B., & Cullen, J. B. (1988). The organizational bases of ethical work climates. Administrative Science
Quarterly, 33, 101-125.
Walmart. (n.d.). Our story. Retrieved from
Warren, D. E., & Smith-Crowe, K. (2008). Deciding what’s right: The role of external sanctions and
embarrassment in shaping moral judgments in the workplace. Research in Organizational Behavior, 28, 81-
Weick, K. E. (1979). The social psychology of organizing (2nd ed.). New York: McGraw-Hill.
Werhane, P. H., Hartman, L. P., Archer, C., Englehardt, E. E., & Pritchard, M. S. (2013). Obstacles to ethical
decision-making: Mental models, Milgram, and the problem of obedience. Cambridge, UK: Cambridge University Press.
Wolfers, J. (2015, March 2). Fewer women run big companies than men named John. The New York Times.
Yeager, P. (1986). Analyzing corporate offenses: Progress and prospects. In J. Post (Ed.), Research in
corporate social performance and policy (pp. 93-120). Greenwich, CT: JAI Press.
Zuckerman, E. (2010). Speaking with one voice: A Stanford School approach to organizational hierarchy.
Research in the Sociology of Organizations, 28, 289-307.