1050 Public Administration Review • November | December 2008
After the demise of the space shuttle Columbia on February 1, 2003, the Columbia Accident Investigation Board sharply criticized NASA's safety culture. Adopting the high-reliability organization as a benchmark, the board concluded that NASA did not possess the organizational characteristics that could have prevented this disaster. Furthermore, the board determined that high-reliability theory is "extremely useful in describing the culture that should exist in the human spaceflight organization." In this article, we argue that this conclusion is based on a misreading and misapplication of high-reliability research. We conclude that in its human spaceflight programs, NASA has never been, nor could it be, a high-reliability organization. We propose an alternative framework to assess reliability and safety in what we refer to as reliability-seeking organizations.
In January 2001, the National Aeronautics and Space Administration (NASA) discovered a wiring problem in the solid rocket booster of the space shuttle Endeavour. The wire was "mission critical," so NASA replaced it before launching the shuttle. But NASA did not take any chances: It inspected more than 6,000 similar connections and discovered that four were loose (Clarke 2006, 45). The thorough inspection may well have prevented a shuttle disaster. This mindfulness and the commitment to follow through on this safety issue could be taken as indicators of a strong safety climate within NASA. It would confirm the many observations, academic and popular, regarding NASA's strong commitment to safety in its human spaceflight programs (Johnson 2002; McCurdy 1993, 2001; Murray and Cox 1989; Vaughan 1996).
Two years later, the space shuttle Columbia disintegrated in the skies over the southern United States. The subsequent inquiry into this disaster revealed that a piece of insulating foam (the size of a cooler) had come loose during the launch, then struck and damaged several tiles covering a panel door that protected the wing from the extreme heat generated by reentry into the earth's atmosphere. The compromised defense at this single spot caused the demise of Columbia.
The Columbia Accident Investigation Board (CAIB) strongly criticized NASA's safety culture. After discovering that the "foam problem" had a long history in the space shuttle program, the board asked how NASA could "have missed the signals that the foam was sending?" (CAIB 2003, 184). Moreover, the board learned that several NASA engineers had tried to warn NASA management of an impending disaster after the launch of the doomed shuttle, but the project managers in question had reportedly failed to act on these warnings.
Delving into the organizational causes of this disaster, the board made extensive use of the body of insights known as high-reliability theory (HRT). The board "selected certain well-known traits" from HRT and used these "as a yardstick to assess the Space Shuttle Program" (CAIB 2003, 180). 1 The board concluded that NASA did not qualify as a "high-reliability organization" (HRO) and recommended an overhaul of the organization to bring NASA to the coveted status of an HRO.
In adopting the HRO model as a benchmark for past and future safety performance, the CAIB tapped into a wider trend. It is only slightly hyperbolic to describe the quest for high-reliability cultures in large-scale organizations — in energy, medical, and military circles — in terms of a Holy Grail (Bourrier 2001; Reason 1997; Roberts et al., forthcoming; Weick and Sutcliffe 2001). An entire consultancy industry has sprouted up around the notion that public and private organizations can be made more reliable by adopting the characteristics of high-reliability organizations.
Assessing NASA's Safety Culture: The Limits and Possibilities of High-Reliability Theory
Arjen Boin is the director of the Stephenson Disaster Management Institute and an associate professor in the Public Administration Institute at Louisiana State University. He writes about crisis management, public leadership, and institutional design. E-mail: email@example.com

Paul Schulman is a professor of government at Mills College in Oakland, California. He has done extensive research on high-reliability organizations and is the author of Large-Scale Policy Making (Elsevier, 1980) and, with Emery Roe, High Reliability Management. E-mail: firstname.lastname@example.org
All this raises starkly the question, what, exactly, does high-reliability theory entail? Does this theory explain organizational disasters? Does it provide a tool for assessment? Does it offer a set of prescriptions that can help organizational leaders design their organizations into HROs? If so, has HRT been applied in real-life cases before? If NASA is to be reformed on the basis of a theoretical assessment, some assessment of the theory itself seems to be in order.

Interestingly, and importantly, HRT does not offer clear-cut answers to these critical questions (cf. LaPorte 1994, 1996, 2006). The small group of high-reliability theorists (as they have come to be known) has never claimed that HRT could provide these answers, nor has the theory been developed to this degree by others. This is not to say that HRT is irrelevant. In this article, we argue that HRT contains much that is potentially useful, but its application to evaluate the organizational performance of non-HROs requires a great deal of further research. We offer a way forward to fulfill this potential.
We begin by briefly revisiting the CAIB report and outlining the main precepts of high-reliability theory. Building on this overview, we argue that NASA, in its human spaceflight program, never did adopt, nor could it ever have adopted, the characteristics of an HRO. 2 We suggest that NASA is better understood as a public organization that has to serve multiple and conflicting aims in a politically volatile environment (Wilson 1989). We offer the beginnings of an alternative assessment model, which allows us to inspect for threats to reliability in those organizations that seek reliability but by their nature cannot be HROs.
The CAIB Report: A Summary of Findings
The CAIB presented its findings in remarkably speedy fashion, within seven months of the Columbia's demise. 3 The board uncovered the direct technical cause of the disaster, the hard-hitting foam. It then took its analysis one step further, because the board subscribed to the view "that NASA's organizational culture had as much to do with this accident as foam did" (CAIB 2003, 12). The board correctly noted that many accident investigations make the mistake of defining causes in terms of technical flaws and individual failures (CAIB 2003, 77). As the board did not want to commit a similar error, it set out to discover the organizational causes of this accident.

The board arrived at some far-reaching conclusions. According to the CAIB, NASA did not have in place effective checks and balances between technical and managerial priorities, did not have an independent safety program, and had not demonstrated the characteristics of a learning organization. The board found that the very same factors that had caused the Challenger disaster 17 years earlier, on January 28, 1986, were at work in the Columbia tragedy (Rogers Commission 1986). Let us briefly revisit the main findings.
Acceptance of escalated risk. The Rogers Commission (1986) had found that NASA operated with a deeply flawed risk philosophy. This philosophy prevented NASA from properly investigating anomalies that emerged during previous shuttle flights. One member of the Rogers Commission (officially, the Presidential Commission on the Space Shuttle Challenger Accident), Nobel laureate Richard Feynman, described the core of the problem (as he saw it) in an official appendix to the final report:

The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence. (Rogers Commission 1986, 1, appendix F; emphasis added)

The CAIB found the very same philosophy at work: "[W]ith no engineering analysis, Shuttle managers used past success as a justification for future flights" (CAIB 2003, 126). This explains, according to the CAIB, why NASA "ignored" the shedding of foam, which had occurred during most of the previous shuttle launches.
Flawed decision making. The Rogers Commission had criticized NASA's decision-making system, which "did not flag rising doubts" among the workforce with regard to the safety of the shuttle. On the eve of the Challenger launch, engineers at Thiokol (the makers of the O-rings) suggested that cold temperatures could undermine the effectiveness of the O-rings. After several rounds of discussion, NASA management decided to proceed with the launch. Similar doubts were raised and dismissed before Columbia's fateful return flight. Several engineers alerted NASA management to the possibility of serious damage to the thermal protection system (after watching launch videos and photographs). After several rounds of consultation, it was decided not to pursue further investigations (such as photographing the shuttle in space). Such an investigation, the CAIB report asserts, could have initiated a life-saving operation.
Broken safety culture. Both commissions were deeply critical of NASA's safety culture. The Rogers Commission noted that NASA had "lost" its safety program; the CAIB speaks of "a broken safety culture." In her seminal analysis of the Challenger disaster, Diane Vaughan (1996) identified NASA's susceptibility to "schedule pressure" as a factor that induced NASA to overlook or downplay safety concerns. In the case of Columbia, the CAIB observed that the launch date was tightly coupled to the completion schedule of the International Space Station. NASA had to meet these deadlines, the CAIB argues, because failure to do so would undercut its legitimacy (and funding).
Dealing with Obvious Weaknesses
The common thread in the CAIB findings is NASA's lost ability to recognize and act on what, in hindsight, seem "obvious weaknesses" (cf. Rogers Commission, appendix F, 1). According to the CAIB, the younger NASA of the Apollo years had possessed the right safety culture. Ignoring the 1967 fire and the near miss with Apollo 13 (immortalized in the blockbuster movie), the report describes how NASA had lost its way somewhere between the moon landing and the new shuttle. The successes of the past, the report tells us, had generated a culture of complacency, even hubris. NASA had become an arrogant organization that believed it could do anything (cf. Starbuck and Milliken 1988). "The Apollo era created at NASA an exceptional 'can-do' culture marked by tenacity in the face of seemingly impossible challenges" (CAIB 2003, 101). The Apollo moon landing "helped reinforce the NASA staff's faith in their organizational culture." However, the "continuing image of NASA as a 'perfect place' … left NASA employees unable to recognize that NASA never had been, and still was not, perfect."
The CAIB highlighted NASA's alleged shortcomings by contrasting the space agency with two supposed high-reliability organizations: the Navy Submarine and Reactor Safety Programs and the Aerospace Corporation (CAIB 2003, 182–84). These organizations, according to the CAIB, are "examples of organizations that have invested in redundant technical authorities and processes to become highly reliable" (CAIB 2003, 184). The CAIB report notes that "there are effective ways to minimize risk and limit the number of accidents" (CAIB 2003, 182) — the board clearly judged that NASA had not done enough to adopt and implement those ways. The high-reliability organization thus became an explicit model for explaining and assessing NASA's safety culture. The underlying hypothesis is clear: If NASA had been an HRO, the shuttles would not have met their disastrous fate. How tenable is this hypothesis?
Revisiting High-Reliability Theory: An
Assessment of Findings and Limits
High-reliability theory began with a small group of researchers studying a distinct and special class of organizations — those charged with the management of hazardous but essential technical systems (LaPorte and Consolini 1991; Roberts 1993; Rochlin 1996; Schulman 1993). Failure in these organizations could mean the loss of critical capacity as well as thousands of lives both within and outside the organization. The term "high-reliability organization" was coined to denote those organizations that had successfully avoided such failure while providing operational capabilities under a full range of environmental conditions (which, as of this writing, most of these designated HROs have managed to do).
What makes HROs special is that they do not treat reliability as a probabilistic property that can be traded at the margins for other organizational values such as efficiency or market competitiveness. An HRO has identified a specific set of events that must be deterministically precluded; they must simply never happen. They must be prevented not by technological design alone, but by organizational strategy as well.
This is no easy task. In his landmark study of organizations that operate dangerous technologies, Charles Perrow (1999) explained how two features — complexity and tight coupling — will eventually induce and propagate failure in ways that are unfathomable by operators in real time (cf. Turner 1978). Complex and tightly coupled technologies (think of nuclear power plants or information technology systems) are accidents waiting to happen. According to Perrow, their occurrence should be considered "normal accidents" with huge adverse potential.

This is what makes HROs such a fascinating research object: They somehow seem to avoid the unavoidable. This finding intrigues researchers and enthuses practitioners in fields such as aviation and chemical processing.
High-reliability theorists set out to investigate the secret of HRO success. They engaged in individual case studies of nuclear aircraft carriers, nuclear power plants, and air traffic control centers. Two important findings surfaced. First, the researchers found that once a threat to safety emerges, however faint or distant, an HRO immediately "reorders" and reorganizes to deal with that threat (LaPorte 2006). Safety is the chief value against which all decisions, practices, incentives, and ideas are assessed — and remains so under all conditions.

Second, they discovered that HROs organize in remarkably similar and seemingly effective ways to serve and service this value. 7 The distinctive features of these organizations, as reported by high-reliability researchers, include the following:
● High technical competence throughout the organization
● A constant, widespread search for improvement across many dimensions of reliability
● A careful analysis of core events that must be precluded from happening
● An analyzed set of "precursor" conditions that would lead to a precluded event, as well as a clear demarcation between these and conditions that lie outside prior analysis
● An elaborate and evolving set of procedures and practices, closely linked to ongoing analysis, which are directed toward avoiding precursor conditions
● A formal structure of roles, responsibilities, and reporting relationships that can be transformed under conditions of emergency or stress into a decentralized, team-based approach to problem solving
● A "culture of reliability" that distributes and instills the values of care and caution, respect for procedures, attentiveness, and individual responsibility for the promotion of safety among members throughout the organization
Organization theory suggests that, in reality, such an organization cannot take on all of these characteristics (LaPorte 2006; LaPorte and Consolini 1991). Overwhelming evidence and dominant theoretical perspectives in the study of public and private organizations assert that the perfect operation of complex and dangerous technology is beyond the capacity of humans, given their inherent imperfections and the predominance of trial-and-error learning in nearly all human undertakings (Hood 1976; Perrow 1986; Reason 1997; Simon 1997). Further, these same theories warn that it would be incredibly hard to build these characteristics, which are central to the development of highly reliable operations, into an organization (LaPorte and Consolini 1991; Rochlin 1996).
An HRO can develop these special features because external support, constraints, and regulations allow for it. Most public organizations cannot afford to prioritize safety over all other values; they must serve multiple, mutually contradictory values (Wilson 1989). Thus, HROs typically exist in closely regulated environments that force them to take reliability seriously but also shield them from full exposure to the market and other forms of environmental competition. Avoiding accidents or critical failure is a requirement not only for societal safety and security, but also for continued acceptance and possibly survival in the unforgiving political and regulatory "niche" these organizations are forced to occupy. In fact, it would be considered illegitimate to trade safety for other values in pursuit of market or other competitive advantages.
The Limits of High-Reliability Research
The research on HROs has not been without controversy. 8 Perrow (1994) dismissed HRT findings by arguing that organizations charged with the management of complex and tightly coupled technical systems (the type usually studied in reliability research) can never hope to transcend the intrinsic vulnerability to a highly interactive form of degradation. His normal accident theory gives reason to believe that no organizational effort can alter the risks embedded in the technical cores of these systems (Perrow 1999). Quite the contrary: Organizational interventions (such as centralization or adding redundancy) are likely to escalate the risks inherent in complex and tightly coupled technologies. In this perspective, the very idea of "high-reliability" organizations that successfully exploit dangerous technologies is at best a temporary illusion (Perrow 1994).
This controversy, in its most extreme form, centers on an assertion that cannot actually be disproved because of its tautological nature. No amount of good performance can falsify the theory of normal accidents, because it can always be said that an organization is only as reliable as the first catastrophic failure that lies ahead, not the many successful operations that lie behind. Yet ironically, this is precisely the perspective that many managers of HROs share about their organizations. They are constantly seeking improvement because they are "running scared" from the accident ahead, not complacent about the performance records compiled in the past. This prospective approach to reliability is a distinguishing feature that energizes many of the extraordinary efforts undertaken within HROs.
The high-reliability theory/normal accident theory controversy aside, it is clear that HRT has limits in terms of both explanation and prescription. High-reliability researchers readily acknowledge that they have studied a fairly limited number of individual organizations at what amounts to a single snapshot in time. Whether the features of high-reliability organizations can persist throughout the life cycle of an organization is as yet unknown. Moreover, we know only a limited amount about the origins of these characteristics (LaPorte 2006): Are they imposed by regulatory environments, the outcome of institutional evolution, or perhaps the product of deliberate design?
Questions also surround the relation between organizational characteristics and reliability. High reliability has been taken as a defining characteristic of the special organizations selected for study by HRO researchers. However, the descriptive features uncovered in these organizations have not been conclusively tied to the reliability of their performance. High-reliability theory thus stands not as a theory of causation regarding high reliability but rather as a careful description of a special set of organizations.
Even if HROs understand which critical events must be avoided, it remains unclear how they evolve the capacity to avoid these events. Trial-and-error learning — the most conventional mode of organizational learning — is sharply constrained, particularly in relation to those core events that they are trying to preclude. 10 Moreover, learning is impeded by the problem of few cases and many variables: Because HROs experience few, if any, major failures (or they would not survive as HROs), it is difficult to understand which of the many variables they manage could cause such failures. HROs could conceivably learn from other organizations, but that would require a fair amount of (near) disasters somewhere else (and somewhere conveniently far away). If this is true, learners automatically depend on the misfortune of others.
All this makes HRT-based prescription a rather sketchy enterprise, well beyond the arguments of HRT itself. It remains for future researchers to identify which subset of properties is necessary or sufficient to produce high reliability, and to determine which variables, and in what degree, might contribute to higher and lower reliability among a wider variety of organizations. We will now consider in particular why HRT does not provide an adequate framework for assessing NASA's safety practices. The reason is simple: NASA never was, nor could it ever have been, an HRO.
Why NASA Has Never Been a High-Reliability Organization
In its assessment of NASA's safety culture, the CAIB adopted the characteristics of the ideal-typical HRO. 11 It measured NASA's shortcomings against the way in which HROs reportedly organize in the face of potential catastrophe. The board quite understandably wondered why NASA could not operate as, for instance, the Navy Submarine and Reactor Safety Programs had done.

We argue that NASA never has been an HRO. More importantly, NASA could never have become such an organization, no matter how hard it tried to organize toward a "precluded-event" standard for reliability. Therefore, to judge NASA by these standards is both unfair and counterproductive.
The historic backdrop against which the agency was initiated made it impossible for reliability and safety to become overriding values. NASA was formed in a white-hot political environment. Space exploration had become a focal point of Cold War competition between the United States and the Soviet Union after the successful flight of the Russian Sputnik (Logsdon 1976). The formation of NASA was a consolidation of space programs under way in several agencies, notably the U.S. Air Force, Navy, and Army. This consolidation was one way of addressing the implicit scale requirements associated with manned spaceflight (Schulman 1980). So, too, was the galvanizing national commitment made by President John F. Kennedy in 1961 of "landing a man on the moon by the end of the decade and returning him safely to earth."

While Kennedy's commitment included the word "safely," safety was only one part of the initial NASA mission. The most important part of the lunar landing commitment was that the goal, and its intermediate milestones, be achieved and achieved on time. In this sense, NASA was born into an environment of schedule pressure — inescapable and immensely public. This pressure — absent in the environment of HROs — would dog NASA through the years.
NASA's mission commitment was thus something quite different from the commitment to operational reliability of an HRO. A public dread surrounds the events that an HRO is trying to preclude — be they accidents that release nuclear materials, large-scale electrical blackouts, or collisions between large passenger jets. These events threaten not just operators or members of the organization but potentially large segments of the public as well. A general sense of public vulnerability is associated with these events.

No similarly dreaded events constrained the exploration of space. No set of precluded events was imposed on NASA that, in turn, would have required HRO characteristics to develop in the organization. The loss of a crew of astronauts in 1967 saddened but did not threaten the general population; it certainly did not cause NASA to miss the 1969 deadline. The loss of personnel in the testing of experimental aircraft was, in fact, not an unexpected occurrence in aeronautics (the first astronauts were test pilots, a special breed of fast men living dangerously, as portrayed in Tom Wolfe's book The Right Stuff).
This is not to say that the safety of the crew was of no concern to NASA's engineers. Quite the contrary. The designers of the Apollo spacecraft worked closely with the astronauts and thus knew well the men who were to fly their contraptions. The initial design phases were informed by extreme care and a heavy emphasis on testing all the parts that made up the experimental spacecraft. If the safety of the crew had been the sole concern of NASA's engineers, the space agency could conceivably have developed into an HRO.
But unlike HROs, which have a clearly focused safety mission that is built around a repetitive production process and relatively stable technology, NASA's mission has always been one of cumulatively advancing spaceflight technology and capability (Johnson 2002; Logsdon 1999; Murray and Cox 1989). Nothing about NASA's human spaceflight program has been repetitive or routine. Multiple launches of Saturn rockets in the Apollo project each represented an evolving technology, each rocket a custom-built system. They were not assembly-line copies that had been standardized and debugged over production runs in the thousands.

The shuttle is one of the world's most complex machines, and it is not fully understood in either its design or production aspects (CAIB 2003). After more than 120 missions in nearly three decades, the shuttle still delivers surprises. Further, as its components age, the shuttle presents NASA engineers and technicians with new challenges. Each shuttle mission is hardly routine — there is much to learn cumulatively with each one.
The incomplete knowledge base and the unruly nature of space technology force NASA to be a research and development organization, one that makes heavy use of experimental design and trial-and-error learning. Each launch is a rationally designed and carefully orchestrated experiment. Each successful return is considered a provisional confirmation of the "null hypothesis" that asserts the designed contraption can fly (cf. Petroski 1992).

In this design philosophy, tragedy is the inevitable price of progress. Tragic failure came when Apollo 1 astronauts Gus Grissom, Ed White, and Roger Chaffee (the crew for the first planned manned Apollo mission) perished in a fire during a capsule test at Cape Canaveral. The disaster revealed many design failures that were subsequently remedied. Within NASA, the 1969 lunar landing was considered a validation of its institutionalized way of spacecraft development. While the general public seemed to accept that tragedy as an unfortunate accident, times have changed. Shuttle disasters are now generally considered avoidable failures.
Trial-and-Error Learning in a Politically Charged Environment
The development of space technology is fraught with risk. Only frequent missions can build a complete understanding of this relatively new and balky technology. A focus solely on safety and reliability would sharply limit the number of missions, which would make technological progress, including building a full knowledge base about its core technology, arduously slow.
The political niche occupied by NASA since its creation, including the political coalitions underlying its mission commitment and funding, has never supported a glacial, no-risk developmental pace. NASA must show periodic progress by flying its contraptions to justify the huge budgets allocated to the space agency. This was first impressed upon NASA in 1964, after NASA administrator James Webb realized that progress was too slow. Webb brought in Dr. George Mueller, who subsequently terminated the practice of endless testing, imposing the more practical yet rationally sound philosophy of all-up testing (Johnson 2002; Logsdon 1999; McCurdy 1993; Murray and Cox 1989). This philosophy prescribes that once rigorous engineering criteria have been met, only actual flight can validate the design (cf. Petroski 1992).
The apparent success of this philosophy fueled expectations with regard to the speedy development of new space technology. From the moment the shuttle left the design table, NASA has been under pressure to treat it as if it were a routine transportation system (CAIB 2003). Rapid turnaround was a high priority for original client agencies such as the Defense Department, which depended on NASA for its satellite launching capabilities. Research communities depended on the shuttle for projects such as the Hubble space telescope and other space exploration projects. Political rationales forced NASA to complete the International Space Station and have led NASA to fly senators and, with tragic results, a teacher into space.
Over time, however, NASA's political environment has become increasingly sensitive to the loss of astronauts, certainly when such tragedies transpire in the glaring lights of the media. A shuttle failure is no longer mourned and accepted as the price of progress toward that elusive goal of a reliable space transportation system. Today, NASA's environment scrutinizes the paths toward disaster, identifying "preventable" and thus condemnable errors, with little or no empathy for the plight of the organization and its members.
NASA's political and societal environment, in short, has placed the agency in a catch-22 situation. It will not support a rapid and risky shuttle flight schedule, but it does expect spectacular results. Stakeholders expect NASA to prioritize safety, but they do not accept the costs and delays that would guarantee it.

This means that NASA cannot strive to become an HRO unless its political and societal environment experiences a major value shift. Those values would have to embrace, among other things, steeply higher costs associated with continuous and major redesigns of space vehicles, as well as the likelihood, at least in the near term, of far fewer flights. In other words, a research and development organization such as NASA cannot develop HRO characteristics because of the political environment in which it exists.
How to Assess Reliability-Seeking Organizations
Even if NASA cannot become an HRO, we expect
NASA at least to seek reliability. Given the combina-
tion of national interests, individual risks, and huge
spending, politicians and taxpayers deserve a way of
assessing how well NASA is performing. More gener-
ally, it is important to develop standards that can be
applied to organizations such as NASA, which have to
juggle production or time pressures, substantial technical uncertainties, safety concerns, efficiency concerns, and media scrutiny. Any tool of assessment
should take all of these imposed values into account.
Based on our reading of organization theory, public
administration research, the literature on organiza-
tional crises, and the findings of high-reliability theorists, we propose a preliminary framework for
assessing a large-scale public research and develop-
ment organization that pursues the development of
risky technology within full view of the general public. These assessment criteria are by no means complete or definitive. They provide a starting point for evaluating the commitment of reliability-seeking organizations such as NASA. They broaden the inquiry from purely safety-related questions to include the institutional context in which reliability challenges must be dealt with. They offer a way to assess how the agency — from its executive leaders down to the work floor — balances safety against other values.
This framework is based on the premise that spaceflight technology is inherently hazardous to astronauts, to work crews, and to bystanders. Therefore,
safety should be a core value of the program, even if it
cannot be the sole, overriding value informing NASA’s
organizational processes. We accept that reliability
must always be considered a “precarious value” in its
operation ( Clark 1956 ). Reliability and safety must be
actively managed and reinforced in relation to cross-
cutting political pressures and organizational objec-
tives. With these premises in mind, we suggest three
analytical dimensions against which reliability-seeking
organizations should be assessed.
A Coherent Approach to Safety
The first dimension pertains to the operating philosophy that governs the value trade-offs inherent in this type of public organization (cf. Selznick 1957). This dimension prompts assessors to consider whether the organization has in place a clearly formulated and widely
shared approach that helps employees negotiate
the safety – reliability tensions that punctuate the devel-
opment and implementation phases of a new and risky
design trajectory. The presence of such an approach
furthers consistency, eases communication, and nurtures
coordination, which, in turn, increase the likelihood of
a responsible design effort that minimizes risk. More
importantly, for our purposes, it relays whether the
organization is actively and intelligently seeking reli-
ability (whether it achieves it is another matter).
It is clear that NASA has always taken the search for
reliability very seriously ( Logsdon 1999; Swanson
2002; Vaughan 1996). Over time, NASA developed a well-defined way to assess safety concerns and weigh them against political and societal expectations (Vaughan 1996). This approach of "sound engineering,"
which has been informed and strengthened both by
heroic success and tragic failure, asserts that the combi-
nation of top-notch design and experiential learning
marks the way toward eventual success. It accepts that
even the most rational plans can be laid to waste by the
quirks and hardships of the space environment.
The NASA approach to safety prescribes that decisions
must be made on the basis of hard science only (no
room exists for gut feelings). Protocols and procedures
guide much of the decision-making process ( Vaughan
1996 ). But reliability frequently comes down to single,
real-time decisions in individual cases — to launch or
not to launch is the recurring question. The NASA philosophy offers its managers a way to balance safety concerns in real time against other organizational and mission values. NASA clings to its safety approach, but it accepts that it is not perfect. Periodic failure is not considered the outcome of a flawed philosophy but a fateful materialization of the
ever-existing risk that comes with the space territory.
Rather than assessing NASA’s safety approach against
absolute reliability norms used by HROs, one should
assess it against alternative approaches. Here we may
note that a workable alternative to NASA’s heavily
criticized safety approach has yet to emerge.
Searching for Failure: A Real-Time Reliability Process
The second dimension focuses our attention on the
mechanisms that have been introduced to minimize
safety risks. e underlying premise holds that safety
is the outcome of an error-focused process. It is not
the valuation of safety per se, but rather the unwilling-
ness to tolerate error that drives the pursuit of high
reliability. All else being equal, the more people in an organization who are concerned about the misidentifications, the misspecifications, and the misunderstandings that can lead to potential errors, the higher the
reliability that organization can hope to achieve
( Schulman 2005 ). From this we argue that the con-
tinual search for error in day-to-day operations should
be a core organizational process (Landau 1969; Landau and Chisholm 1995; Weick and Sutcliffe 2001).
In NASA, the detection of critical error requires
real-time capacity on the part of individuals and
Assessing NASA‘s Safety Culture 1057
teams to read signals and make the right decision at a
critical time. As this real-time appraisal and decision
making is crucial to safety, it is important to develop
standards for the soundness of this process. A variety
of organizational studies, including studies of HROs,
offer some that appear particularly relevant.
The first standard involves avoiding organizational
features or practices that would directly contradict the
requirement for error detection. Because the potential
for error or surprise exists in many organizational
activities, from mission planning to hardware and
software development to maintenance and mission
support activities, information that could constitute
error signals must be widely available through communi-
cation nets that can cut across departments and hierar-
chical levels. Communication barriers or blockages can
pose a threat to feedback, evidence accumulation, and
the sharing of cautionary concerns.
A realistic evaluation of NASA’s safety system
would start with an assessment of how such a large-
scale organization can share information without
getting bogged down in a sea of data generated by
thousands of employees. We know it is often clear
only in hindsight what information constitutes a
critical "signal" and what is simply "noise." A realistic and useful reliability assessment must recognize this built-in organizational dilemma and
establish what can be reasonably expected in the
way of feedback. The standard should not be that every possible piece of information is available to every organizational member; rather, the organization should have evolved a strategy so that information of high expected value regarding potential error (its potential consequences weighted by their likelihood) can be available to key decision makers prior to the point of real-time critical decisions.
Organizational studies remind us that the reporting of
error or concerns about potential errors should be
encouraged, or at least not subject to sanction or
punishment. Organizations that punish the reporting
of error can expect errors to be covered up or under-
reported, which would certainly reduce the reliability
they hope to attain ( Michael 1973; Tamuz 2001 ). A
realistic assessment would consider whether the orga-
nization has removed significant barriers for dissident
employees to speak up. It would also consider whether
the organization has done enough to encourage peo-
ple to step forward. One such assessment is found in
Vaughan’s (1996) analysis of the Challenger disaster, in
which she concludes that all engineers within NASA
had a real opportunity to bring doubts to the table
(provided these doubts were expressed in the concepts
and the rationales of “sound engineering”).
Another standard of reliability is a reasonably “distrib-
uted ability” to act in response to error: to adjust or
modify an error-prone organizational practice, correct
errors in technical designs, or halt a critical process if
errors are suspected. This distribution of action or
veto points does not have to be as widely distributed
as in the Japanese factory in which any assembly line
worker could stop the line, but it does have to extend
beyond top managers and probably, given past cases,
beyond program heads. A realistic analysis would consider whether the distribution of veto points (perhaps in the form of multiple sign-offs required within departments) has penetrated deeply enough without paralyzing the organization's pursuit of other core values.
Beyond searching for contradictions between these
requirements and organizational practices, a reliability
assessment should also scan for contradictions in logic
that might appear in reliability perspectives and analy-
ses themselves. One such contradiction actually did
appear in NASA. It was evident in the widely diverg-
ing failure probability estimates reportedly held by top
managers and shuttle project engineers prior to the
Challenger disaster (Vaughan 1996). This disparity has
been reported in other organizations as well ( Hutter
2005 ). Contradictory risk assessments cannot all be
right, and organizations that buffer such contradictions face a larger risk of error in their approach to
managing for higher reliability.
Another logical contradiction can develop between
prospective and retrospective orientations toward
reliability. This can be compounded by an asymmetrical treatment of formal and experiential knowledge in
maintaining each orientation. NASA did in fact experience trouble with its assessment of error reports, insofar as it has traditionally evaluated them against standards of "sound engineering," which tend to undervalue "intuitive" (e.g., experiential) concerns. When a
member of an organization expresses a “gut feeling” or
concern for the reliability or safety of a system, others
may insist that these concerns be expressed in terms of
a formal failure analysis, which places the burden of proof on those with concerns to show in a specific model the ways (and probabilities) in which a failure could occur. This approach does not do justice to the
experiential or tacit knowledge base from which a
worrisome pattern might be detected or a failure
scenario imagined (Weick and Sutcliffe 2001).
It is hard to bridge these two modes of assessment
( Dunbar and Garud 2005 ). But while it has dis-
counted experiential or tacit knowledge concerns in
assessing prospective error potential in the shuttle,
NASA has traditionally relied heavily on past opera-
tional experience in retrospectively assessing shuttle
reliability. This contradictory orientation — requiring failure prospects to be formally modeled while accepting a tacit, retrospective confirmation of reliability — was evident in NASA's treatment of safety concerns about the shuttle Columbia after the foam strike during its fateful launch in 2003, and it has understandably drawn much criticism.
Having such a contradiction at the heart of its per-
spective on reliability has proven to be a serious im-
pediment to the detection of error ( Dunbar and
Garud 2005 ).
An additional organizational practice to be assessed in
connection with the pursuit of higher reliability in an
organization such as NASA is the generation and prop-
agation of cumulative knowledge founded on error.
Whereas high-reliability organizations may have
sharply curtailed opportunities for trial-and-error
learning, reliability-seeking organizations should
evidence a commitment to learning all that can be
learned from errors, however regrettable, and translat-
ing that learning into an ever more extensive knowl-
edge base for the organization transmitted to
successive generations of its members. Careful study
of errors to glean potential reliability improvements
should be a norm throughout the organization. While
the organization must move on and address its other
core operational values, there should be resistance to premature closure of error investigations before some collaborative root-cause analysis, involving outside perspectives, has been undertaken.
Evidence of cumulative learning can also be found
in the treatment of organizational procedures and
the process of procedure writing and modification.
Procedures should be taken seriously throughout
the organization as a living documentation of the
knowledge base of the organization. They should be
consistently corrected or enhanced in light of expe-
rience and should be “owned” not just by top man-
agers but also by employees down to the shop level.
Members of the organization should understand the
logic and purpose of a procedure and not regard it
simply as a prescription to be mindlessly followed.
Many of these error-focused standards can indeed be
observed in HROs. NASA must pursue them within a
far less supportive environment.
HROs operate within a frame-
work of settled knowledge
founded on long operational
experience and prior formal
analysis. In a nuclear power
plant, for instance, operating "outside of analysis" is a regulatory violation.14 Yet NASA, given
the unsettled nature of its tech-
nology and the incomplete
knowledge base governing its
operations, must operate in key respects outside of
analysis — an invitation to error. Given these limita-
tions, it is important that standards for error detection
be taken seriously, even when other organizational
values are prominent.
Preserving Institutional Integrity
The third dimension pertains to what Philip Selznick
(1957) referred to as the institutional integrity of the
organization. This dimension directs us to consider
how an organization balances its established way of
working against the shifting demands imposed on the
organization by its stakeholders. An organization’s way
of working typically is the result of trial-and-error
learning, punctuated by success and failure. Over
time, as path dependency theorists remind us ( Pierson
2004 ), established routines and procedures may well
become ends in themselves. The organization then
becomes protective of its way of working, defending
against outside critics by denial or overpromising.
NASA has not performed well on this dimension
since the early 1970s. Whereas NASA enjoyed high
levels of support during the famed Apollo years, it was
an unstable support, shifting from euphoria after a
successful manned flight to a loss of public interest
and, ultimately, to concern about the costs of space
exploration relative to other pressing domestic policy
demands. After the moon landing, societal and political support for highly ambitious and expensive space programs dwindled.
Yet NASA felt compelled to keep its human spaceflight program alive. The search for new projects that
would capture the popular imagination — a new
Apollo adventure — ran into budgetary constraints and
political hesitation (President Nixon slashed the bud-
get). Rather than adapting to this new reality by scal-
ing down ambitions, NASA overpromised and
oversold the reliability of its technology. For political
reasons, the shuttle project was presented as a highly
reliable, routine near-space transportation system
(even if space shuttle missions never became routine,
nor were they treated as such) ( Vaughan 1996 ; cf.
CAIB 2003 ). According to Vaughan (1996) , this
pursuit of goals that were just out of reach generated
pressures on the organization’s safety culture.
The explosion of Challenger stripped NASA of whatever mythical status it had retained. The empty promise of a reliable and efficient shuttle transportation system would become a key factor in NASA's diminishing status. The technology of the shuttle had never been settled such that it could allow the routinization of flight. At the
same time, there was no galvaniz-
ing goal such as the lunar landing, the progression
toward which could validate the failures in the
development of this technology. As a result, there was
no support for major delays or expenditures that were
reliability and not production focused.
Caught in the middle of an unstable environment in
which there is little tolerance for either risk or produc-
tion and scheduling delays, NASA has become a con-
demnable organization — it is being judged against
standards it is in no position to pursue or achieve. This plight is, of course, shared by many public organizations and creates a set of leadership challenges that may be impossible to fulfill (Selznick 1957; Wilson 1989).
Yet where some public organizations make do ( Har-
grove and Glidewell 1990 ), it appears that NASA was
less adept at coping with its “impossible” predicament.
Conclusion: Toward a Realistic Assessment
of Reliability-Seeking Organizations
If, as we argue, NASA is not a high-reliability organi-
zation in the sense described by HRO theorists, some
important implications follow. First, it is both an
analytic and a practical error to assess NASA — an
agency that is expected to experiment and innovate —
by the standards of an HRO (in which experimenta-
tion is strongly discouraged). To do so is misleading
with respect to the important differences in the mission, technology, and environment of NASA relative
to HROs ( LaPorte 2006 ).
It is also unhelpful to evaluate NASA by standards that
it is in no position to reach. Such evaluations lead to
inappropriate "reforms" and punishments. The irony is
that these could transform NASA into the opposite of
an institutionalized HRO — that is, a permanently
failing organization (cf. Meyer and Zucker 1989 ).
We may well wonder whether the recommendations
of the CAIB report would help NASA become one if
it could. In HROs, reliability is achieved through an
ever-anxious concern with core organizational pro-
cesses. It’s about awareness, making critical decisions,
sharing information, puzzling, worrying, and acting.
The CAIB recommendations, however, are of a structural nature. They impose new bureaucratic layers rather than designing intelligent processes. They impose new standards ("become a learning organization")
while ignoring the imposed standards that make it
impossible to become an HRO (“bring a new Crew
Exploration Vehicle into service as soon as possible”
and “return to the moon during the next decade”).
Starting from false premises, the CAIB report thus
ends with false promises. The idea that safety is a function of single-minded attention may hold true for HROs, but it falls flat in organizations that can never be HROs.
In this article, we have argued that reliability-seeking
organizations that simply cannot become HROs re-
quire and deserve their own metric for assessing their
safety performance. We have identified a preliminary set of assessment dimensions. These dimensions go
beyond those narrow technical factors utilized in
probabilistic risk assessments and other risk-assess-
ment methodologies, but they are not beyond assess-
ing through intensive organizational observations and
interviews, as well as survey research. In fact, the will-
ingness of NASA to accord periodic access to indepen-
dent reliability researchers would itself be a test of its
commitment to error detection. This could be done
under the auspices of an organization such as the
National Academy of Engineering or the American
Society for Public Administration with funding from
the National Science Foundation or NASA itself.
Such an assessment procedure should certainly not be
adversarial. It should be a form of cooperative re-
search. It should be ongoing and not a post hoc re-
view undertaken only on the heels of a major incident
or failure. Further, and perhaps most importantly, it
should not be grounded in unrealistic standards im-
ported inappropriately from the peculiar world of
HROs. This in itself would constitute an insuperable
contradiction for any reliability assessment — it would
be grounded at its outset in analytic error.
In the ﬁ nal analysis, reliability is a matter of organiza-
tional norms that help individual employees at all
levels in the organization to make the right decision.
The presence of such norms is often tacitly viewed as
an erosion of executive authority, which undermines
the responsiveness of public organizations to pressures
from Congress and media. It is a leadership task to
nurture and protect those norms while serving legiti-
mate stakeholders ( Selznick 1957 ).
But such leadership, in turn, requires that the organiza-
tion and its mission be institutionalized in the political
setting in which it must operate. A grant of trust must
be extended to leaders and managers of these organiza-
tions regarding their professional norms and judgment.
If the organization sits in a precarious or condemnable
position in relation to its political environment, then it "can't win for losing" because of the trade-offs that go
unreconciled in its operation. Participants will fail to
establish any lasting norms because of the fear of hos-
tile external reactions to the neglect of either speed or
safety in key decisions. Ultimately, then, the pursuit of
reliability in NASA depends in no small measure on the public's assessment of the organization and the foundation on which it is accorded political support.
Acknowledgments
The authors thank Todd LaPorte, Allan McConnell, Paul 't Hart, and the three anonymous PAR reviewers for their perceptive comments on earlier versions of this article.
Notes
1. The board also made use of normal accident
theory, which some academics view in contrast to
HRT. The board clearly derived most of its
insights and critiques from its reading of HRT,
however. If it had adhered to normal accident
theory, we can conjecture that the CAIB would
have been more sympathetic to NASA’s plight (as
it probably would have considered the shuttle
disaster a “normal accident”).
2. NASA comprises 10 separate centers that serve
the different formal missions of the agency. In
this article, we are exclusively concerned with
NASA’s human spaceﬂ ight program and the
centers that serve this program. Here we follow
the CAIB report (2003).
3. See Starbuck and Farjoun (2005) for a discussion
of the findings of this report.
4. This is an important step in the analysis of
organizational disasters, which sits well with the
conventional wisdom found in theoretical trea-
tises on the subject ( Perrow 1999; Smith and
Elliott 2006; Turner 1978 ).
5. The CAIB presents no firm evidence to back up this claim. See McDonald (2005) for a resolute dismissal of it. The accusation that NASA
would press ahead with a launch because of
“schedule pressure” is rather audacious. NASA
has a long history of safety-related launch delays;
the schedule pressure in the Columbia case was a
direct result of earlier delays. In fact, the CAIB
( 2003 , 197) acknowledged that NASA stood
down from launch on other occasions when it did
suspect problems were manifest. To NASA
people, the idea that a crew would be sent up in
the face of known deficiencies is outrageous. As
one engineer pointed out, “We know the astro-
nauts” ( Vaughan 1996 ).
6. The CAIB takes its reference to a "perfect place"
from Gary Brewer’s (1989) essay on NASA. It
should be noted that Brewer is speaking about
external perceptions of NASA and readily admits
in his essay, “I know precious little about NASA
or space policy … the little I know about NASA
and space means that I can speak my mind
without particular preconceptions" (157). The CAIB, however, cites Brewer's essay as if he had just completed a thorough study of the organizational culture of this "perfect place."
7. In fact, the closer observations were to the major
hazard points, the more similar these practices became.
8. See the special issue of the Journal of Contingen-
cies and Crisis Management (1994) for a heated
discussion. See also Sagan (1993) and Rijpma (1997).
9. It should be noted that the number of cases is
gradually growing, but there is very little effort to
systematically compare cases. One notable
exception is Rochlin and Von Meier (1994) .
10. This is not to say that errors do not occur within
HROs. They do, and HROs take them extremely
seriously. But HROs cannot adopt a trial-and-
error strategy because the political, economic, and
institutional costs of key errors are unlikely to be
offset by the benefits of learning (but see Wildavsky 1988).
11. Several experts no doubt played an influential
role in explicating the HRO model to the CAIB
members. Professors Karlene Roberts, Diane
Vaughan, and Karl Weick are recognized experts
on the workings of HROs and consulted with the
CAIB. See Vaughan (2006) for a behind-the-
scenes account of the CAIB deliberations. Their
involvement, of course, does not make them
responsible for CAIB’s diagnosis.
12. Even if NASA cannot operate fully as an HRO,
as a reliability-seeking organization, it cannot
ignore HRO lessons in error detection. If it is
forced to pursue values such as speed, efficiency,
or cost reductions at increased risk, it is impor-
tant to understand as clearly as possible, at the
point of decision, the character of that risk.
13. This issue is raised in Roberta Wohlstetter's
(1962) classic analysis of intelligence “failures”
associated with the Pearl Harbor attack.
14. Nuclear Regulatory Commission, Code of
Federal Regulations, Title 10, part 50.
15. See Perin (2005) for a complementary approach.
References
Bourrier , Mathilde , ed . 2001 . Organiser la fiabilité .
Paris : L’Harmattan .
Brewer , Gary D . 1989 . Perfect Places: NASA as an
Idealized Institution . In Space Policy Reconsidered ,
edited by Radford Byerly , Jr ., 157 – 73 . Boulder,
CO : Westview Press .
Clark , Burton R . 1956 . Organizational Adaptation
and Precarious Values: A Case Study . American
Sociological Review 21 ( 3 ): 327 – 36 .
Clarke , Lee . 2006 . Worst Cases: Terror and Catastrophe
in the Popular Imagination . Chicago : University of
Chicago Press .
Columbia Accident Investigation Board (CAIB) .
2003 . Columbia Accident Investigation Report .
Burlington, Ontario : Apogee Books .
Dunbar , Roger , and Raghu Garud . 2005 . Data
Indeterminacy: One NASA, Two Modes . In
Organization at the Limit: Lessons from the
Columbia Accident , edited by William H. Starbuck
and Moshe Farjoun , 202 – 19 . Malden, MA : Blackwell .
Hargrove , Erwin C. , and John C. Glidewell, eds .
1990 . Impossible Jobs in Public Management .
Lawrence : University Press of Kansas .
Hood , Christopher C . 1976 . The Limits of
Administration . New York : Wiley .
Hutter , Bridget . 2005 . “Ways of Seeing”:
Understandings of Risk in Organisational Settings .
In Organizational Encounters with Risk , edited by
Bridget Hutter and Michael Power , 67 – 91 .
Cambridge : Cambridge University Press .
Johnson , Stephen B . 2002 . The Secret of Apollo: Systems
Management in American and European Space
Programs . Baltimore : Johns Hopkins University Press .
Landau , Martin . 1969 . Redundancy, Rationality, and
the Problem of Duplication and Overlap . Public
Administration Review 29 ( 4 ): 346 – 58 .
Landau , Martin , and Donald Chisholm . 1995 . The
Arrogance of Optimism . Journal of Contingencies
and Crisis Management 3 ( 2 ): 67 – 80 .
LaPorte , Todd R . 1994 . A Strawman Speaks Up .
Journal of Contingencies and Crisis Management
2 ( 4 ): 207 – 11 .
— — — . 1996 . High Reliability Organizations:
Unlikely, Demanding and At Risk . Journal of
Contingencies and Crisis Management 4 ( 2 ):
60 – 71 .
— — — . 2006 . Institutional Issues for Continued
Space Exploration: High-Reliability Systems
Across Many Operational Generations —
Requisites for Public Credibility . In Critical Issues
in the History of Spaceﬂ ight , edited by Steven J.
Dick and Roger D. Launius , 403 – 27 .
Washington, DC : National Aeronautics and
Space Administration .
LaPorte , Todd R. , and Paula M. Consolini . 1991 .
Working in Practice but Not in Theory:
Theoretical Challenges of "High-Reliability
Organizations." Journal of Public Administration
Research and Theory 1 ( 1 ): 19 – 48 .
The Limits to Safety: A Symposium . 1994 . Special
issue, Journal of Contingencies and Crisis
Management 2 ( 4 ).
Logsdon , John M . 1976 . The Decision to Go to the
Moon: Project Apollo and the National Interest .
Chicago : University of Chicago Press .
— — — , ed . 1999 . Managing the Moon
Program: Lessons Learned from Project Apollo .
Monographs in Aerospace History 14 ,
Washington, DC : National Aeronautics and
Space Administration .
McCurdy , Howard E . 1993 . Inside NASA: High
Technology and Organizational Change in the U.S.
Space Program . Baltimore : Johns Hopkins
University Press .
— — — . 2001 . Faster, Better, Cheaper: Low-Cost
Innovation in the U.S. Space Program . Baltimore :
Johns Hopkins University Press .
McDonald , Henry . 2005 . Observations on the
Columbia Accident . In Organization at the Limit:
Lessons from the Columbia Disaster , edited by
William H. Starbuck and Moshe Farjoun , 336 – 46 .
Malden, MA : Blackwell .
Meyer , Marshall W. , and Lynne G. Zucker . 1989 .
Permanently Failing Organizations . Newbury Park,
CA : Sage Publications .
Michael , Donald N . 1973 . On Learning to Plan — And
Planning to Learn . San Francisco : Jossey-Bass .
Murray , Charles , and Catherine Bly Cox . 1989 . Apollo:
The Race to the Moon . New York : Simon & Schuster .
Perin , Constance . 2005 . Shouldering Risks: The Culture
of Control in the Nuclear Power Industry . Princeton,
NJ : Princeton University Press .
Perrow , Charles . 1986 . Complex Organizations: A
Critical Essay . New York : McGraw-Hill .
— — — . 1994 . The Limits of Safety: The
Enhancement of a Theory of Accidents . Journal of
Contingencies and Crisis Management 2 ( 4 ):
212 – 20 .
— — — . 1999 . Normal Accidents: Living with High-
Risk Technologies . Princeton, NJ : Princeton
University Press .
Petroski , Henry . 1992 . To Engineer Is Human: The Role of
Failure in Successful Design . New York : Vintage Books .
Pierson , Paul . 2004 . Politics in Time: History,
Institutions, and Social Analysis . Princeton, NJ :
Princeton University Press .
Presidential Commission on the Space Shuttle
Challenger Accident (Rogers Commission) . 1986 .
Report to the President by the Presidential Commission
on the Space Shuttle Challenger Accident .
Washington, DC : Government Printing Office .
Reason , James . 1997 . Managing the Risks of
Organizational Accidents . Aldershot : Ashgate .
Rijpma , Jos A . 1997 . Complexity, Tight-Coupling
and Reliability: Connecting Normal Accidents
Theory and High Reliability Theory . Journal of
Contingencies and Crisis Management 5 ( 1 ):
15 – 23 .
Roberts , Karlene H. , ed . 1993 . New Challenges to
Understanding Organizations . New York : Macmillan .
Roberts , Karlene H. , Peter Madsen , Vinit Desai , and
Daved Van Stralen . Forthcoming . A High
Reliability Health Care Organization Requires
Constant Attention to Organizational Processes .
Quality and Safety in Health Care .
Rochlin , Gene I . 1996 . Reliable Organizations:
Present Research and Future Directions . Journal of
Contingencies and Crisis Management , 4 ( 2 ): 55 – 59 .
Rochlin , Gene I. , and Alexandra von Meier . 1994 .
Nuclear Power Operations: A Cross-Cultural
Perspective . Annual Review of Energy and the
Environment 19 : 133 – 87 .
Sagan , Scott D . 1993 . The Limits of Safety:
Organizations, Accidents, and Nuclear Weapons .
Princeton, NJ : Princeton University Press .
Schulman , Paul R . 1980 . Large-Scale Policy-Making .
New York : Elsevier .
— — — . 1993 . The Negotiated Order of
Organizational Reliability . Administration & Society
25 ( 3 ): 353 – 72 .
— — — . 2005 . The General Attributes of Safe
Organizations . Quality and Safety in Health Care
13 ( 2 ): 39 – 44 .
Selznick , Philip . 1957 . Leadership in Administration: A
Sociological Interpretation . Berkeley : University of
California Press .
Simon , Herbert A . 1997 . Administrative Behavior: A
Study of Decision-Making Processes in Administrative
Organizations . 4th ed . New York : Free Press .
Smith , Denis , and Dominic Elliott , eds . 2006 .
Key Readings in Crisis Management: Systems and
Structures for Prevention and Recovery . London : Routledge .
Starbuck , William H. , and Moshe Farjoun , eds .
2005 . Organization at the Limit: Lessons from the
Columbia Accident . Malden, MA : Blackwell .
Starbuck , William H. , and Frances J. Milliken . 1988 .
Challenger: Fine-Tuning the Odds Until
Something Breaks . Journal of Management Studies
25 ( 4 ): 319 – 40 .
Swanson , Glen E. , ed . 2002 . Before This Decade Is
Out … Personal Reflections on the Apollo Program .
Gainesville : University Press of Florida .
Tamuz , Michal . 2001 . Learning Disabilities for
Regulators: The Perils of Organizational Learning
in the Air Transportation Industry . Administration
& Society 33 ( 3 ): 276 – 302 .
Turner , Barry A . 1978 . Man-Made Disasters . London :
Vaughan , Diane . 1996 . The Challenger Launch
Decision: Risky Technology, Culture and Deviance at
NASA . Chicago : University of Chicago Press .
— — — . 2006 . NASA Revisited: Ethnography, Theory
and Public Sociology . American Journal of Sociology
112 ( 2 ): 353 – 93 .
Weick , Karl E. , and Kathleen M. Sutcliffe . 2001 .
Managing the Unexpected: Assuring High
Performance in an Age of Complexity . San Francisco : Jossey-Bass .
Wildavsky , Aaron . 1988 . Searching for Safety . New
Brunswick, NJ : Transaction Books .
Wilson , James Q . 1989 . Bureaucracy: What
Government Agencies Do and Why They Do It . New
York : Basic Books .
Wohlstetter , Roberta . 1962 . Pearl Harbor: Warning and
Decision . Stanford, CA : Stanford University Press .
Wolfe , Tom . 2005 . The Right Stuff . New York : Black