
Assessing NASA’s Safety Culture: The Limits and Possibilities of High‐Reliability Theory

Authors: Arjen Boin (Louisiana State University) and Paul Schulman (Mills College)

Arjen Boin is the director of the Stephenson Disaster Management Institute and an associate professor in the Public Administration Institute at Louisiana State University. He writes about crisis management, public leadership, and institutional design. E-mail: boin@lsu.edu

Paul Schulman is a professor of government at Mills College in Oakland, California. He has done extensive research on high-reliability organizations and has written Large-Scale Policy Making (Elsevier, 1980) and, with Emery Roe, High Reliability Management (Stanford University Press). E-mail: schulmans1@comcast.net

Abstract

After the demise of the space shuttle Columbia on February 1, 2003, the Columbia Accident Investigation Board sharply criticized NASA’s safety culture. Adopting the high-reliability organization as a benchmark, the board concluded that NASA did not possess the organizational characteristics that could have prevented this disaster. Furthermore, the board determined that high-reliability theory is “extremely useful in describing the culture that should exist in the human spaceflight organization.” In this article, we argue that this conclusion is based on a misreading and misapplication of high-reliability research. We conclude that in its human spaceflight programs, NASA has never been, nor could it be, a high-reliability organization. We propose an alternative framework to assess reliability and safety in what we refer to as reliability-seeking organizations.
In January 2001, the National Aeronautics and Space Administration (NASA) discovered a wiring problem in the solid rocket booster of the space shuttle Endeavour. The wire was “mission critical,” so NASA replaced it before launching the shuttle. But NASA did not take any chances: It inspected more than 6,000 similar connections and discovered that four were loose (Clarke 2006, 45). The thorough inspection may well have prevented a shuttle disaster. This mindfulness and the commitment to follow-through concerning this safety issue could be taken as indicators of a strong safety climate within NASA. It would confirm the many observations, academic and popular, with regard to NASA’s strong commitment to safety in its human spaceflight programs (Johnson 2002; McCurdy 1993, 2001; Murray and Cox 1989; Vaughan 1996).
Two years later, the space shuttle Columbia disintegrated above the southern skies of the United States. The subsequent inquiry into this disaster revealed that a piece of insulating foam (the size of a cooler) had come loose during the launch, then struck and damaged several tiles covering a panel door that protected the wing from the extreme heat that reentry into the earth’s atmosphere generates. The compromised defense at this single spot caused the demise of Columbia.
The Columbia Accident Investigation Board (CAIB) strongly criticized NASA’s safety culture. After discovering that the “foam problem” had a long history in the space shuttle program, the board asked how NASA could “have missed the signals that the foam was sending?” (CAIB 2003, 184). Moreover, the board learned that several NASA engineers had tried to warn NASA management of an impending disaster after the launch of the doomed shuttle, but the project managers in question had reportedly failed to act on these warnings.
Delving into the organizational causes of this disaster, the board made extensive use of the body of insights known as high-reliability theory (HRT). The board “selected certain well-known traits” from HRT and used these “as a yardstick to assess the Space Shuttle Program” (CAIB 2003, 180).¹ The board concluded that NASA did not qualify as a “high-reliability organization” (HRO) and recommended an overhaul of the organization to bring NASA to the coveted status of an HRO.
In adopting the HRO model as a benchmark for past and future safety performance, CAIB tapped into a wider trend. It is only slightly hyperbolic to describe the quest for high-reliability cultures in large-scale organizations in energy, medical, and military circles in terms of a Holy Grail (Bourrier 2001; Reason 1997; Roberts et al., forthcoming; Weick and Sutcliffe 2001). An entire consultancy industry has sprouted up around the notion that public and private organizations can be made more reliable by adopting the characteristics of HROs.
All this raises starkly the question, what, exactly, does high-reliability theory entail? Does this theory explain organizational disasters? Does it provide a tool for assessment? Does it offer a set of prescriptions that can help organizational leaders design their organizations into HROs? If so, has HRT been applied in real-life cases before? If NASA is to be reformed on the basis of a theoretical assessment, some assessment of the theory itself seems to be in order.
Interestingly, and importantly, HRT does not offer clear-cut answers to these critical questions (cf. LaPorte 1994, 1996, 2006). The small group of high-reliability theorists (as they have come to be known) has never claimed that HRT could provide these answers, nor has the theory been developed to this degree by others. This is not to say that HRT is irrelevant. In this article, we argue that HRT contains much that is potentially useful, but its application to evaluate the organizational performance of non-HROs requires a great deal of further research. We offer a way forward to fulfill this potential.
We begin by briefly revisiting the CAIB report and outlining the main precepts of high-reliability theory. Building on this overview, we argue that NASA, in its human spaceflight program, never did adopt, nor could it ever have adopted, the characteristics of an HRO.² We suggest that NASA is better understood as a public organization that has to serve multiple and conflicting aims in a politically volatile environment (Wilson 1989). We offer the beginnings of an alternative assessment model, which allows us to inspect for threats to reliability in those organizations that seek reliability but by their nature cannot be HROs.
The CAIB Report: A Summary of Findings
The CAIB presented its findings in remarkably speedy fashion, within seven months of the Columbia’s demise.³ The board uncovered the direct technical cause of the disaster, the hard-hitting foam. It then took its analysis one step further, because the board subscribed to the view “that NASA’s organizational culture had as much to do with this accident as foam did” (CAIB 2003, 12). The board correctly noted that many accident investigations make the mistake of defining causes in terms of technical flaws and individual failures (CAIB 2003, 77). As the board did not want to commit a similar error, it set out to discover the organizational causes of this accident.⁴
The board arrived at some far-reaching conclusions. According to the CAIB, NASA did not have in place effective checks and balances between technical and managerial priorities, did not have an independent safety program, and had not demonstrated the characteristics of a learning organization. The board found that the very same factors that had caused the Challenger disaster 17 years earlier, on January 28, 1986, were at work in the Columbia tragedy (Rogers Commission 1986). Let us briefly revisit the main findings.
Acceptance of escalated risk. The Rogers Commission (1986) had found that NASA operated with a deeply flawed risk philosophy. This philosophy prevented NASA from properly investigating anomalies that emerged during previous shuttle flights. One member of the Rogers Commission (officially, the Presidential Commission on the Space Shuttle Challenger Accident), Nobel laureate Richard Feynman, described the core of the problem (as he saw it) in an official appendix to the final report:

The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence. (Rogers Commission 1986, 1, appendix F; emphasis added)
The CAIB found the very same philosophy at work: “[W]ith no engineering analysis, Shuttle managers used past success as a justification for future flights” (CAIB 2003, 126). This explains, according to the CAIB, why NASA “ignored” the shedding of foam, which had occurred during most of the previous shuttle launches.
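A simple back-of-the-envelope calculation, offered here only as an illustration (it appears in neither Feynman’s appendix nor the CAIB report), shows why a string of successes is weak evidence of safety. If each flight fails independently with probability p, the chance of observing n consecutive failure-free flights is (1 − p)^n; setting this equal to 0.05 gives the familiar “rule of three” upper bound:

(1 - p)^{n} = 0.05 \quad\Longrightarrow\quad p_{95\%} \approx \frac{3}{n}

By this reasoning, the roughly two dozen successful missions flown before Challenger could not, on their own, rule out a per-flight failure probability on the order of one in ten; past success therefore said far less about safety than the managers’ inference assumed.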
Flawed decision making. The Rogers Commission had criticized NASA’s decision-making system, which “did not flag rising doubts” among the workforce with regard to the safety of the shuttle. On the eve of the Challenger launch, engineers at Thiokol (the makers of the O-rings) suggested that cold temperatures could undermine the effectiveness of the O-rings. After several rounds of discussion, NASA management decided to proceed with the launch. Similar doubts were raised and dismissed before Columbia’s fateful return flight. Several engineers alerted NASA management to the possibility of serious damage to the thermal protection system (after watching launch videos and photographs). After several rounds of consultation, it was decided not to pursue further investigations (such as photographing the shuttle in space). Such an investigation, the CAIB report asserts, could have initiated a life-saving operation.
Broken safety culture. Both commissions were deeply critical of NASA’s safety culture. The Rogers Commission noted that NASA had “lost” its safety program; the CAIB speaks of “a broken safety culture.” In her seminal analysis of the Challenger disaster, Diane Vaughan (1996) identified NASA’s susceptibility to “schedule pressure” as a factor that induced NASA to overlook or downplay safety concerns. In the case of Columbia, the CAIB observed that the launch date was tightly coupled to the completion schedule of the International Space Station. NASA had to meet these deadlines, the CAIB argues, because failure to do so would undercut its legitimacy (and funding).⁵
Dealing with Obvious Weaknesses
The common thread in the CAIB findings is NASA’s lost ability to recognize and act on what, in hindsight, seem “obvious weaknesses” (cf. Rogers Commission, appendix F, 1). According to the CAIB, the younger NASA of the Apollo years had possessed the right safety culture. Ignoring the 1967 fire and the near miss with Apollo 13 (immortalized in the blockbuster movie), the report describes how NASA had lost its way somewhere between the moon landing and the new shuttle. The successes of the past, the report tells us, had generated a culture of complacency, even hubris. NASA had become an arrogant organization that believed it could do anything (cf. Starbuck and Milliken 1988). “The Apollo era created at NASA an exceptional ‘can-do’ culture marked by tenacity in the face of seemingly impossible challenges” (CAIB 2003, 101). The Apollo moon landing “helped reinforce the NASA staff’s faith in their organizational culture.” However, the “continuing image of NASA as a ‘perfect place’ left NASA employees unable to recognize that NASA never had been, and still was not, perfect.”⁶
The CAIB highlighted NASA’s alleged shortcomings by contrasting the space agency with two supposed high-reliability organizations: the Navy Submarine and Reactor Safety Programs and the Aerospace Corporation (CAIB 2003, 182–84). These organizations, according to the CAIB, are “examples of organizations that have invested in redundant technical authorities and processes to become highly reliable” (CAIB 2003, 184). The CAIB report notes “there are effective ways to minimize risk and limit the number of accidents” (CAIB 2003, 182); the board clearly judged that NASA had not done enough to adopt and implement those ways. The high-reliability organization thus became an explicit model for explaining and assessing NASA’s safety culture. The underlying hypothesis is clear: If NASA had been an HRO, the shuttles would not have met their disastrous fate. How tenable is this hypothesis?
Revisiting High-Reliability Theory: An Assessment of Findings and Limits
High-reliability theory began with a small group of researchers studying a distinct and special class of organizations: those charged with the management of hazardous but essential technical systems (LaPorte and Consolini 1991; Roberts 1993; Rochlin 1996; Schulman 1993). Failure in these organizations could mean the loss of critical capacity as well as thousands of lives both within and outside the organization. The term “high-reliability organization” was coined to denote those organizations that had successfully avoided such failure while providing operational capabilities under a full range of environmental conditions (which, as of this writing, most of these designated HROs have managed to do).
What makes HROs special is that they do not treat reliability as a probabilistic property that can be traded at the margins for other organizational values such as efficiency or market competitiveness. An HRO has identified a specific set of events that must be deterministically precluded; they must simply never happen. They must be prevented not by technological design alone, but by organizational strategy and management.
This is no easy task. In his landmark study of organizations that operate dangerous technologies, Charles Perrow (1999) explained how two features, complexity and tight coupling, will eventually induce and propagate failure in ways that are unfathomable by operators in real time (cf. Turner 1978). Complex and tightly coupled technologies (think of nuclear power plants or information technology systems) are accidents waiting to happen. According to Perrow, their occurrence should be considered “normal accidents” with huge adverse potential.
This is what makes HROs such a fascinating research object: They somehow seem to avoid the unavoidable. This finding intrigues researchers and enthuses practitioners in fields such as aviation, chemical processing, and medicine.
High-reliability theorists set out to investigate the secret of HRO success. They engaged in individual case studies of nuclear aircraft carriers, nuclear power plants, and air traffic control centers. Two important findings surfaced. First, the researchers found that once a threat to safety emerges, however faint or distant, an HRO immediately “reorders” and reorganizes to deal with that threat (LaPorte 2006). Safety is the chief value against which all decisions, practices, incentives, and ideas are assessed and remains so under all circumstances.
Second, they discovered that HROs organize in remarkably similar and seemingly effective ways to serve and service this value.⁷ The distinctive features of these organizations, as reported by high-reliability researchers, include the following:

• High technical competence throughout the organization
• A constant, widespread search for improvement across many dimensions of reliability
• A careful analysis of core events that must be precluded from happening
• An analyzed set of “precursor” conditions that would lead to a precluded event, as well as a clear demarcation between these and conditions that lie outside prior analysis
• An elaborate and evolving set of procedures and practices, closely linked to ongoing analysis, which are directed toward avoiding precursor conditions
• A formal structure of roles, responsibilities, and reporting relationships that can be transformed under conditions of emergency or stress into a decentralized, team-based approach to problem solving
• A “culture of reliability” that distributes and instills the values of care and caution, respect for procedures, attentiveness, and individual responsibility for the promotion of safety among members throughout the organization
Organization theory suggests that, in reality, such an organization cannot take on all of these characteristics (LaPorte 2006; LaPorte and Consolini 1991). Overwhelming evidence and dominant theoretical perspectives in the study of public and private organizations assert that the perfect operation of complex and dangerous technology is beyond the capacity of humans, given their inherent imperfections and the predominance of trial-and-error learning in nearly all human undertakings (Hood 1976; Perrow 1986; Reason 1997; Simon 1997). Further, these same theories warn that it would be incredibly hard to build these characteristics, which are central to the development of highly reliable operations, into an organization (LaPorte and Consolini 1991; Rochlin 1996).
An HRO can develop these special features because external support, constraints, and regulations allow for it. Most public organizations cannot afford to prioritize safety over all other values; they must serve multiple, mutually contradicting values (Wilson 1989). Thus, HROs typically exist in closely regulated environments that force them to take reliability seriously but also shield them from full exposure to the market and other forms of environmental competition. Avoiding accidents or critical failure is a requirement not only for societal safety and security, but also for continued acceptance and possibly survival in the unforgiving political and regulatory “niche” these organizations are forced to occupy. In fact, it would be considered illegitimate to trade safety for other values in pursuit of market or other competitive advantages.
The Limits of High-Reliability Research
The research on HROs has not been without controversy.⁸ Perrow (1994) dismissed HRT findings by arguing that organizations charged with the management of complex and tightly coupled technical systems (the type usually studied in reliability research) can never hope to transcend the intrinsic vulnerability to a highly interactive form of degradation. His normal accident theory gives reason to believe that no organizational effort can alter the risks embedded in the technical cores of these systems (Perrow 1999). Quite the contrary: Organizational interventions (such as centralization or adding redundancy) are likely to escalate the risks inherent in complex and tightly coupled technologies. In this perspective, the very idea of “high-reliability” organizations that successfully exploit dangerous technologies is at best a temporary illusion (Perrow 1994).
This controversy, in its most extreme form, centers on an assertion that cannot actually be disproved because of its tautological nature. No amount of good performance can falsify the theory of normal accidents, because it can always be said that an organization is only as reliable as the first catastrophic failure that lies ahead, not the many successful operations that lie behind. Yet ironically, this is precisely the perspective that many managers of HROs share about their organizations. They are constantly seeking improvement because they are “running scared” from the accident ahead, not complacent about the performance records compiled in the past. This prospective approach to reliability is a distinguishing feature that energizes many of the extraordinary efforts undertaken within HROs.
The high-reliability theory/normal accident theory controversy aside, it is clear that HRT has limits in terms of both explanation and prescription. High-reliability researchers readily acknowledge that they have studied a fairly limited number of individual organizations at what amounts to a single snapshot in time.⁹ Whether features of high-reliability organizations can persist throughout the lifecycle of an organization is as yet unknown. Moreover, we know only a limited amount about the origins of these characteristics (LaPorte 2006): Are they imposed by regulatory environments, the outcome of institutional evolution, or perhaps the product of clever leadership?
Questions also surround the relation between organizational characteristics and reliability. High reliability has been taken as a defining characteristic of the special organizations selected for study by HRO researchers. However, the descriptive features uncovered in these organizations have not been conclusively tied to the reliability of their performance. High-reliability theory thus stands not as a theory of causation regarding high reliability but rather as a careful description of a special set of organizations.
Even if HROs understand which critical events must be avoided, it remains unclear how they evolve the capacity to avoid these events. Trial-and-error learning, the most conventional mode of organizational learning, is sharply constrained, particularly in relation to those core events that they are trying to preclude.¹⁰ Moreover, learning is impeded by the problem of few cases and many variables: Because HROs experience few, if any, major failures (or they would not survive as HROs), it is difficult to understand which of the many variables they manage can cause them. HROs could conceivably learn from other organizations, but that would require a fair amount of (near) disasters somewhere else (and somewhere conveniently far away). If this is true, learners automatically become laggards.
All this makes HRT-based prescription a rather sketchy enterprise, well beyond the arguments of HRT itself. It remains for future researchers to identify which subset of properties is necessary or sufficient to produce high reliability and to determine which variables, and in what degree, might contribute to higher and lower reliability among a wider variety of organizations. We will now consider in particular why HRT does not provide an adequate framework for assessing NASA’s safety practices. The reason is simple: NASA never was, nor could it ever have been, an HRO.
Why NASA Has Never Been a High-Reliability Organization
In its assessment of NASA’s safety culture, the CAIB adopted the characteristics of the ideal-typical HRO as benchmarks.¹¹ It measured NASA’s shortcomings against the way in which HROs reportedly organize in the face of potential catastrophe. The board quite understandably wondered why NASA could not operate as, for instance, the Navy Submarine and Reactor Safety Programs had done.
We argue that NASA never has been an HRO. More importantly, NASA could never have become such an organization, no matter how hard it tried to organize toward a “precluded-event” standard for reliability. Therefore, to judge NASA by these standards is both unfair and counterproductive.
The historic backdrop against which the agency was initiated made it impossible for reliability and safety to become overriding values. NASA was formed in a white-hot political environment. Space exploration had become a focal point of Cold War competition between the United States and the Soviet Union after the successful flight of the Russian Sputnik (Logsdon 1976). The formation of NASA was a consolidation of space programs under way in several agencies, notably the U.S. Air Force, Navy, and Army. This consolidation was one way of addressing the implicit scale requirements associated with manned spaceflight (Schulman 1980). So, too, was the galvanizing national commitment made by President John F. Kennedy in 1961 of “landing a man on the moon by the end of the decade and returning him safely to earth.”
While Kennedy’s commitment included the word “safely,” safety was only one part of the initial NASA mission. The most important part of the lunar landing commitment was that the goal, and its intermediate milestones, be achieved – and achieved on time. In this sense, NASA was born into an environment of schedule pressure – inescapable and immensely public. This pressure – absent in the environment of HROs – would dog NASA through the years.
NASA’s mission commitment was thus something quite different from the commitment to operational reliability of an HRO. A public dread surrounds the events that an HRO is trying to preclude – be they accidents that release nuclear materials, large-scale electrical blackouts, or collisions between large passenger jets. These events threaten not just operators or members of the organization but potentially large segments of the public as well. A general sense of public vulnerability is associated with these events.
No similarly dreaded events constrained the exploration of space. No set of precluded events was imposed on NASA, which, in turn, would have required HRO characteristics to develop in the organization. The loss of a crew of astronauts in 1967 saddened but did not threaten the general population; it certainly did not cause NASA to miss the 1969 deadline. The loss of personnel in the testing of experimental aircraft was, in fact, not an unexpected occurrence in aeronautics (the first astronauts were test pilots, a special breed of fast men living dangerously, as portrayed in Tom Wolfe’s book The Right Stuff).
This is not to say that the safety of the crew was not an issue for NASA’s engineers. Quite the contrary. The designers of the Apollo spacecraft worked closely with the astronauts and thus knew well the men who were to fly their contraptions. The initial design phases were informed by extreme care and a heavy emphasis on testing all the parts that made up the experimental spacecraft. If the safety of the crew had been the sole concern of NASA’s engineers, the space agency could conceivably have developed into an HRO.
But unlike HROs, which have a clearly focused safety mission that is built around a repetitive production process and relatively stable technology, NASA’s mission has always been one of cumulatively advancing spaceflight technology and capability (Johnson 2002; Logsdon 1999; Murray and Cox 1989). Nothing about NASA’s human spaceflight program has been repetitive or routine. Multiple launches of Saturn rockets in the Apollo project each represented an evolving technology, each rocket a custom-built system. They were not assembly-line copies that had been standardized and debugged over production runs in the thousands.
The shuttle is one of the world’s most complex machines, which is not fully understood in either its design or production aspects (CAIB 2003). After more than 120 missions in nearly three decades, the shuttle still delivers surprises. Further, as its components age, the shuttle presents NASA engineers and technicians with new challenges. Each shuttle mission is hardly routine – there is much to learn cumulatively with each one.
The incomplete knowledge base and the unruly nature of space technology force NASA to be a research and development organization, which makes heavy use of experimental design and trial-and-error learning. Each launch is a rationally designed and carefully orchestrated experiment. Each successful return is considered a provisional confirmation of the “null hypothesis” that asserts the designed contraption can fly (cf. Petroski 1992).
In this design philosophy, tragedy is the inevitable price of progress. Tragic failure came when Apollo 1 astronauts Gus Grissom, Ed White, and Roger Chaffee (the crew selected for the first manned Apollo mission) perished in a fire during a capsule test at Cape Canaveral. The disaster revealed many design failures that were subsequently remedied. Within NASA, the 1969 lunar landing was considered a validation of its institutionalized way of spacecraft development. While the general public seemed to accept that tragedy as an unfortunate accident, times have changed. Shuttle disasters are now generally considered avoidable failures.
Trial-and-Error Learning in a Politically Charged Environment
The development of space technology is fraught with risk. Only frequent missions can build a complete understanding of this relatively new and balky technology. A focus solely on safety and reliability would sharply limit the number of missions, which would make technological progress, including building a full knowledge base about its core technology, arduously slow.
The political niche occupied by NASA since its creation, including the political coalitions underlying its mission commitment and funding, has never supported a glacial, no-risk developmental pace. NASA must show periodic progress by flying its contraptions to justify the huge budgets allocated to the space agency. This was first impressed upon NASA in 1964, after NASA administrator James Webb realized that progress was too slow. Webb brought in Dr. George Mueller, who subsequently terminated the practice of endless testing, imposing the more practical yet rationally sound philosophy of all-up testing (Johnson 2002; Logsdon 1999; McCurdy 1993; Murray and Cox 1989). This philosophy prescribes that once rigorous engineering criteria have been met, only actual flight can validate the design (cf. Petroski 1992).
The apparent success of this philosophy fueled expectations with regard to the speedy development of new space technology. From the moment the shuttle left the design table, NASA has been under pressure to treat it as if it were a routine transportation system (CAIB 2003). Rapid turnaround was a high priority for original client agencies such as the Defense Department, which depended on NASA for its satellite launching capabilities. Research communities depended on the shuttle for projects such as the Hubble Space Telescope and other space exploration projects. Political rationales forced NASA to complete the International Space Station and have led NASA to fly senators and, with tragic results, a teacher into space.
Over time, however, NASA’s political environment has become increasingly sensitive to the loss of astronauts, certainly when such tragedies transpire in the glaring lights of the media. A shuttle failure is no longer mourned and accepted as the price for progress toward that elusive goal of a reliable space transportation system. Today, NASA’s environment scrutinizes the paths toward disaster, identifying “preventable” and thus condemnable errors, with little or no empathy for the plight of the organization and its members.
NASA’s political and societal environment, in short, has placed the agency in a catch-22 situation. It will not support a rapid and risky shuttle flight schedule, but it does expect spectacular results. Stakeholders expect NASA to prioritize safety, but they do not accept the costs and delays that would guarantee it.
This means that NASA cannot strive to become an HRO unless its political and societal environment experiences a major value shift. Those values would have to embrace, among other things, steeply higher costs associated with continuous and major redesigns of space vehicles, as well as the likelihood, at least in the near term, of far fewer flights. In other words, a research and development organization such as NASA cannot develop HRO characteristics because of the political environment in which it exists.
How to Assess Reliability-Seeking Organizations
Even if NASA cannot become an HRO, we expect NASA at least to seek reliability. Given the combination of national interests, individual risks, and huge spending, politicians and taxpayers deserve a way of assessing how well NASA is performing. More generally, it is important to develop standards that can be applied to organizations such as NASA, which have to juggle production or time pressures, substantial technical uncertainties, safety concerns, efficiency concerns, and media scrutiny. Any tool of assessment should take all of these imposed values into account.
Based on our reading of organization theory, public administration research, the literature on organizational crises, and the findings of high-reliability theorists, we propose a preliminary framework for assessing a large-scale public research and development organization that pursues the development of risky technology within full view of the general public. These assessment criteria are by no means complete or definitive. They provide a starting point for evaluating the commitment of reliability-seeking organizations such as NASA. They broaden the inquiry from pure safety-related questions to include the institutional context in which reliability challenges must be dealt with. They offer a way to assess how the agency – from its executive leaders down to the work floor – balances safety against other values.
This framework is based on the premise that spaceflight technology is inherently hazardous – to astronauts, to work crews, and to bystanders. Therefore, safety should be a core value of the program, even if it cannot be the sole, overriding value informing NASA’s organizational processes. We accept that reliability must always be considered a “precarious value” in its operation (Clark 1956). Reliability and safety must be actively managed and reinforced in relation to cross-cutting political pressures and organizational objectives. With these premises in mind, we suggest three analytical dimensions against which reliability-seeking organizations should be assessed.
A Coherent Approach to Safety
The first dimension pertains to the operating philosophy that governs the value trade-offs inherent in this type of public organization (cf. Selznick 1957). This dimension prompts assessors to consider whether the organization has in place a clearly formulated and widely shared approach that helps employees negotiate the safety–reliability tensions that punctuate the development and implementation phases of a new and risky design trajectory. The presence of such an approach furthers consistency, eases communication, and nurtures coordination, which, in turn, increase the likelihood of a responsible design effort that minimizes risk. More importantly, for our purposes, it relays whether the organization is actively and intelligently seeking reliability (whether it achieves it is another matter).
It is clear that NASA has always taken the search for reliability very seriously (Logsdon 1999; Swanson 2002; Vaughan 1996). Over time, NASA developed a well-defined way to assess safety concerns and weigh them against political and societal expectations (Vaughan 1996). This approach of “sound engineering,” which has been informed and strengthened both by heroic success and tragic failure, asserts that the combination of top-notch design and experiential learning marks the way toward eventual success. It accepts that even the most rational plans can be laid to waste by the quirks and hardships of the space environment.
The NASA approach to safety prescribes that decisions must be made on the basis of hard science only (no room exists for gut feelings). Protocols and procedures guide much of the decision-making process (Vaughan 1996). But reliability frequently comes down to single, real-time decisions in individual cases – to launch or not to launch is the recurring question. The NASA philosophy offers its managers a way to balance, in real time, safety concerns with other organizational and mission values. NASA clings to its safety approach, but it accepts that it is not perfect. Periodic failure is not considered the outcome of a flawed philosophy but a fateful materialization of the ever-existing risk that comes with the space territory. Rather than assessing NASA’s safety approach against the absolute reliability norms used by HROs, one should assess it against alternative approaches. Here we may note that a workable alternative to NASA’s heavily criticized safety approach has yet to emerge.
Searching for Failure: A Real-Time Reliability Capacity
The second dimension focuses our attention on the mechanisms that have been introduced to minimize safety risks. The underlying premise holds that safety is the outcome of an error-focused process. It is not the valuation of safety per se, but rather the unwillingness to tolerate error, that drives the pursuit of high reliability. All else being equal, the more people in an organization who are concerned about the misidentifications, the misspecifications, and the misunderstandings that can lead to potential errors, the higher the reliability that organization can hope to achieve (Schulman 2005). From this we argue that the continual search for error in day-to-day operations should be a core organizational process (Landau 1969; Landau and Chisholm 1995; Weick and Sutcliffe 2001).
In NASA, the detection of critical error requires real-time capacity on the part of individuals and teams to read signals and make the right decision at a critical time. As this real-time appraisal and decision making is crucial to safety, it is important to develop standards for the soundness of this process. A variety of organizational studies, including studies of HROs, offer some that appear particularly relevant.¹²
The first standard involves avoiding organizational features or practices that would directly contradict the requirement for error detection. Because the potential for error or surprise exists in many organizational activities, from mission planning to hardware and software development to maintenance and mission support activities, information that could constitute error signals must be widely available through communication nets that can cut across departments and hierarchical levels. Communication barriers or blockages can pose a threat to feedback, evidence accumulation, and the sharing of cautionary concerns.
A realistic evaluation of NASA’s safety system would start with an assessment of how such a large-scale organization can share information without getting bogged down in a sea of data generated by thousands of employees. We know it is often clear only in hindsight what information constitutes a critical “signal” and what is simply “noise.”¹³ A realistic and useful reliability assessment must recognize this built-in organizational dilemma and establish what can be reasonably expected in the way of feedback. The standard should not be every possible piece of information available to every organizational member; rather, the organization should have evolved a strategy so that information of high expected value regarding potential error (in terms of the potential consequences adjusted by their likelihood) can be available to key decision makers prior to the point of real-time critical decisions.
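One way to make that parenthetical concrete – an illustrative formalization of ours, not a procedure used by NASA or proposed by the CAIB – is to rank candidate error signals by their expected loss:

\mathrm{EV}(s) \approx \Pr(\text{critical failure} \mid s \text{ goes unexamined}) \times C(\text{critical failure})

where s is a reported anomaly and C its consequence. A screening strategy of this kind would escalate a signal to key decision makers whenever its expected loss exceeds the much smaller cost of investigating it, which is one way of operationalizing “high expected value” without demanding that every piece of information reach every organizational member.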
Organizational studies remind us that the reporting of error or concerns about potential errors should be encouraged, or at least not subject to sanction or punishment. Organizations that punish the reporting of error can expect errors to be covered up or underreported, which would certainly reduce the reliability they hope to attain (Michael 1973; Tamuz 2001). A realistic assessment would consider whether the organization has removed significant barriers for dissident employees to speak up. It would also consider whether the organization has done enough to encourage people to step forward. One such assessment is found in Vaughan’s (1996) analysis of the Challenger disaster, in which she concludes that all engineers within NASA had a real opportunity to bring doubts to the table (provided these doubts were expressed in the concepts and the rationales of “sound engineering”).
Another standard of reliability is a reasonably “distributed ability” to act in response to error: to adjust or modify an error-prone organizational practice, correct errors in technical designs, or halt a critical process if errors are suspected. These action or veto points do not have to be as widely distributed as in the Japanese factory in which any assembly line worker could stop the line, but they do have to extend beyond top managers and probably, given past cases, beyond program heads. A realistic analysis would consider whether the distribution of veto points (perhaps in the form of multiple sign-offs required within departments) has penetrated deeply enough without paralyzing the organization’s pursuit of other core values.
Beyond searching for contradictions between these requirements and organizational practices, a reliability assessment should also scan for contradictions in logic that might appear in reliability perspectives and analyses themselves. One such contradiction actually did appear in NASA. It was evident in the widely diverging failure probability estimates reportedly held by top managers and shuttle project engineers prior to the Challenger disaster (Vaughan 1996). This disparity has been reported in other organizations as well (Hutter 2005). Contradictory risk assessments cannot all be right, and organizations that buffer such contradictions face a larger risk of error in their approach to managing for higher reliability.
Another logical contradiction can develop between prospective and retrospective orientations toward reliability. This can be compounded by an asymmetrical treatment of formal and experiential knowledge in maintaining each orientation. NASA did in fact experience trouble with its assessment of error reports, insofar as it has traditionally evaluated them against standards of “sound engineering,” which tend to undervalue “intuitive” (e.g., experiential) concerns. When a member of an organization expresses a “gut feeling” or concern for the reliability or safety of a system, others may insist that these concerns be expressed in terms of a formal failure analysis, which places the burden of proof on those with concerns to show in a specific model the ways (and probabilities) in which a failure could occur. This approach does not do justice to the experiential or tacit knowledge base from which a worrisome pattern might be detected or a failure scenario imagined (Weick and Sutcliffe 2001).
It is hard to bridge these two modes of assessment (Dunbar and Garud 2005). But while NASA has discounted experiential or tacit knowledge concerns in assessing prospective error potential in the shuttle, it has traditionally relied heavily on past operational experience in retrospectively assessing shuttle reliability. This contradictory orientation – requiring failure prospects to be formally modeled while accepting a tacit, retrospective confirmation of reliability – was evident in NASA’s treatment of safety concerns about the shuttle Columbia after its tile strike during the fateful launch in 2003, and it has understandably drawn much criticism. Having such a contradiction at the heart of its perspective on reliability has proven to be a serious impediment to the detection of error (Dunbar and Garud 2005).
An additional organizational practice to be assessed in connection with the pursuit of higher reliability in an organization such as NASA is the generation and propagation of cumulative knowledge founded on error. Whereas high-reliability organizations may have sharply curtailed opportunities for trial-and-error learning, reliability-seeking organizations should evidence a commitment to learning all that can be learned from errors, however regrettable, and to translating that learning into an ever more extensive knowledge base for the organization, transmitted to successive generations of its members. Careful study of errors to glean potential reliability improvements should be a norm throughout the organization. While the organization must move on and address its other core operational values, there should be resistance to closing error investigations prematurely, before some collaborative root-cause analysis involving outside perspectives has been undertaken.
Evidence of cumulative learning can also be found in the treatment of organizational procedures and the process of procedure writing and modification. Procedures should be taken seriously throughout the organization as a living documentation of the knowledge base of the organization. They should be consistently corrected or enhanced in light of experience and should be “owned” not just by top managers but also by employees down to the shop level. Members of the organization should understand the logic and purpose of a procedure and not regard it simply as a prescription to be mindlessly followed.
Many of these error-focused standards can indeed be observed in HROs. NASA must pursue them within a far less supportive environment. HROs operate within a framework of settled knowledge founded on long operational experience and prior formal analysis. In a nuclear power plant, for instance, operating “outside of analysis” is a regulatory violation.¹⁴ Yet NASA, given the unsettled nature of its technology and the incomplete knowledge base governing its operations, must operate in key respects outside of analysis – an invitation to error. Given these limitations, it is important that standards for error detection be taken seriously, even when other organizational values are prominent.
Preserving Institutional Integrity
The third dimension pertains to what Philip Selznick (1957) referred to as the institutional integrity of the organization. This dimension directs us to consider how an organization balances its established way of working against the shifting demands imposed on the organization by its stakeholders. An organization’s way of working typically is the result of trial-and-error learning, punctuated by success and failure. Over time, as path dependency theorists remind us (Pierson 2004), established routines and procedures may well become ends in themselves. The organization then becomes protective of its way of working, defending against outside critics by denial or overpromising.
NASA has not performed well on this dimension since the early 1970s. Whereas NASA enjoyed high levels of support during the famed Apollo years, it was an unstable support, shifting from euphoria after a successful manned flight to a loss of public interest and, ultimately, to concern about the costs of space exploration relative to other pressing domestic policy demands. After the moon landing, societal and political support for highly ambitious and expensive space missions plummeted.
Yet NASA felt compelled to keep its human spaceflight program alive. The search for new projects that would capture the popular imagination – a new Apollo adventure – ran into budgetary constraints and political hesitation (President Nixon slashed the budget). Rather than adapting to this new reality by scaling down ambitions, NASA overpromised and oversold the reliability of its technology. For political reasons, the shuttle project was presented as a highly reliable, routine near-space transportation system (even if space shuttle missions never became routine, nor were they treated as such) (Vaughan 1996; cf. CAIB 2003). According to Vaughan (1996), this pursuit of goals that were just out of reach generated pressures on the organization’s safety culture.
The explosion of Challenger stripped NASA of whatever mythical status it had retained. The empty promise of a reliable and efficient shuttle transportation system would become a key factor in NASA’s diminishing status. The technology of the shuttle had never been settled such that it could allow the routinization of flight. At the same time, there was no galvanizing goal such as the lunar landing, the progression toward which could validate the failures in the development of this technology. As a result, there was no support for major delays or expenditures that were reliability and not production focused.
Caught in the middle of an unstable environment in which there is little tolerance for risk or for production and scheduling delays, NASA has become a condemnable organization – it is being judged against standards it is in no position to pursue or achieve. This plight is, of course, shared by many public organizations and creates a set of leadership challenges that may be impossible to meet (Selznick 1957; Wilson 1989). Yet where some public organizations make do (Hargrove and Glidewell 1990), it appears that NASA was less adept at coping with its “impossible” predicament.
Conclusion: Toward a Realistic Assessment of Reliability-Seeking Organizations
If, as we argue, NASA is not a high-reliability organization in the sense described by HRO theorists, some important implications follow. First, it is both an analytic and a practical error to assess NASA – an agency that is expected to experiment and innovate – by the standards of an HRO (in which experimentation is strongly discouraged). To do so is misleading with respect to the important differences in the mission, technology, and environment of NASA relative to HROs (LaPorte 2006).
It is also unhelpful to evaluate NASA by standards that it is in no position to reach. Such evaluations lead to inappropriate “reforms” and punishments. The irony is that these could transform NASA into the opposite of an institutionalized HRO – that is, a permanently failing organization (cf. Meyer and Zucker 1989).
We may well wonder whether the recommendations of the CAIB report would help NASA become an HRO if it could. In HROs, reliability is achieved through an ever-anxious concern with core organizational processes. It is about awareness, making critical decisions, sharing information, puzzling, worrying, and acting. The CAIB recommendations, however, are of a structural nature. They impose new bureaucratic layers rather than designing intelligent processes. They impose new standards (“become a learning organization”) while ignoring the imposed standards that make it impossible to become an HRO (“bring a new Crew Exploration Vehicle into service as soon as possible” and “return to the moon during the next decade”). Starting from false premises, the CAIB report thus ends with false promises. The idea that safety is a function of single-minded attention may hold true for HROs, but it falls flat in organizations that can never become HROs.
In this article, we have argued that reliability-seeking organizations that simply cannot become HROs require and deserve their own metric for assessing their safety performance. We have identified a preliminary set of assessment dimensions. These dimensions go beyond those narrow technical factors utilized in probabilistic risk assessments and other risk-assessment methodologies, but they are not beyond assessing through intensive organizational observations and interviews, as well as survey research. In fact, the willingness of NASA to accord periodic access to independent reliability researchers would itself be a test of its commitment to error detection. This could be done under the auspices of an organization such as the National Academy of Engineering or the American Society for Public Administration, with funding from the National Science Foundation or NASA itself.¹⁵
Such an assessment procedure should certainly not be adversarial. It should be a form of cooperative research. It should be ongoing and not a post hoc review undertaken only on the heels of a major incident or failure. Further, and perhaps most importantly, it should not be grounded in unrealistic standards imported inappropriately from the peculiar world of HROs. This in itself would constitute an insuperable contradiction for any reliability assessment – it would be grounded at its outset in analytic error.
In the final analysis, reliability is a matter of organizational norms that help individual employees at all levels in the organization to make the right decision. The presence of such norms is often tacitly viewed as an erosion of executive authority, which undermines the responsiveness of public organizations to pressures from Congress and the media. It is a leadership task to nurture and protect those norms while serving legitimate stakeholders (Selznick 1957).
But such leadership, in turn, requires that the organization and its mission be institutionalized in the political setting in which it must operate. A grant of trust must be extended to leaders and managers of these organizations regarding their professional norms and judgment. If the organization sits in a precarious or condemnable position in relation to its political environment, then it “can’t win for losing” because of the trade-offs that go unreconciled in its operation. Participants will fail to establish any lasting norms because of the fear of hostile external reactions to the neglect of either speed or safety in key decisions. Ultimately, then, the pursuit of reliability in NASA depends in no small measure on the public’s organizational assessment of it and the foundation on which it is accorded political support.
Acknowledgments
The authors thank Todd LaPorte, Allan McConnell, Paul ’t Hart, and the three anonymous PAR reviewers for their perceptive comments on earlier versions of this paper.
Notes
1 . e board also made use of normal accident
theory, which some academics view in contrast to
1060 Public Administration Review • November | December 2008
HRT.  e board clearly derived most of its
insights and critiques from its reading of HRT,
however. If it had adhered to normal accident
theory, we can conjecture that the CAIB would
have been more sympathetic to NASA’s plight (as
it probably would have considered the shuttle
disaster a “normal accident”).
2. NASA comprises 10 separate centers that serve
the diff erent formal missions of the agency. In
this article, we are exclusively concerned with
NASA’s human spacefl ight program and the
centers that serve this program. Here we follow
the CAIB report (2003).
3. See Starbuck and Farjoun (2005) for a discussion
of the fi ndings of this report.
4 . is is an important step in the analysis of
organizational disasters, which sits well with the
conventional wisdom found in theoretical trea-
tises on the subject ( Perrow 1999; Smith and
Elliott 2006; Turner 1978 ).
5. The CAIB presents no firm evidence to back up this claim; see McDonald (2005) for a resolute dismissal of it. The accusation that NASA would press ahead with a launch because of “schedule pressure” is rather audacious. NASA has a long history of safety-related launch delays; the schedule pressure in the Columbia case was a direct result of earlier delays. In fact, the CAIB (2003, 197) acknowledged that NASA stood down from launch on other occasions when it suspected that problems were manifest. To NASA people, the idea that a crew would be sent up in the face of known deficiencies is outrageous. As one engineer pointed out, “We know the astronauts” (Vaughan 1996).
6. The CAIB takes its reference to a “perfect place” from Gary Brewer’s (1989) essay on NASA. It should be noted that Brewer is speaking about external perceptions of NASA and readily admits in his essay, “I know precious little about NASA or space policy … the little I know about NASA and space means that I can speak my mind without particular preconceptions” (157). The CAIB, however, cites Brewer’s essay as if he had just completed a thorough study of the organizational culture of this “perfect place.”
7. In fact, the closer observations were to the major
hazard points, the more similar these practices
became.
8. See the special issue of the Journal of Contingencies and Crisis Management (1994) for a heated discussion. See also Sagan (1993) and Rijpma (1997).
9. It should be noted that the number of cases is gradually growing, but there is very little effort to systematically compare cases. One notable exception is Rochlin and von Meier (1994).
10. This is not to say that errors do not occur within HROs. They do, and HROs take them extremely seriously. But HROs cannot adopt a trial-and-error strategy because the political, economic, and institutional costs of key errors are unlikely to be offset by the benefits of learning (but see Wildavsky 1988).
11. Several experts no doubt played an influential role in explicating the HRO model to the CAIB members. Professors Karlene Roberts, Diane Vaughan, and Karl Weick are recognized experts on the workings of HROs and consulted with the CAIB. See Vaughan (2006) for a behind-the-scenes account of the CAIB deliberations. Their involvement, of course, does not make them responsible for the CAIB’s diagnosis.
12. Even if NASA cannot operate fully as an HRO, as a reliability-seeking organization it cannot ignore HRO lessons in error detection. If it is forced to pursue values such as speed, efficiency, or cost reductions at increased risk, it is important to understand as clearly as possible, at the point of decision, the character of that risk.
13. This issue is raised in Roberta Wohlstetter’s (1962) classic analysis of intelligence “failures” associated with the Pearl Harbor attack.
14. Nuclear Regulatory Commission, Code of
Federal Regulations, Title 10, part 50.
15. See Perin (2005) for a complementary approach.
References
Bourrier, Mathilde, ed. 2001. Organiser la fiabilité. Paris: L’Harmattan.
Brewer, Gary D. 1989. Perfect Places: NASA as an Idealized Institution. In Space Policy Reconsidered, edited by Radford Byerly, Jr., 157–73. Boulder, CO: Westview Press.
Clark, Burton R. 1956. Organizational Adaptation and Precarious Values: A Case Study. American Sociological Review 21(3): 327–36.
Clarke, Lee. 2006. Worst Cases: Terror and Catastrophe in the Popular Imagination. Chicago: University of Chicago Press.
Columbia Accident Investigation Board (CAIB). 2003. Columbia Accident Investigation Report. Burlington, Ontario: Apogee Books.
Dunbar, Roger, and Raghu Garud. 2005. Data Indeterminacy: One NASA, Two Modes. In Organization at the Limit: Lessons from the Columbia Accident, edited by William H. Starbuck and Moshe Farjoun, 202–19. Malden, MA: Blackwell.
Hargrove, Erwin C., and John C. Glidewell, eds. 1990. Impossible Jobs in Public Management. Lawrence: University Press of Kansas.
Hood, Christopher C. 1976. The Limits of Administration. New York: Wiley.
Hutter, Bridget. 2005. “Ways of Seeing”: Understandings of Risk in Organisational Settings. In Organizational Encounters with Risk, edited by
Bridget Hutter and Michael Power, 67–91. Cambridge: Cambridge University Press.
Johnson, Stephen B. 2002. The Secret of Apollo: Systems Management in American and European Space Programs. Baltimore: Johns Hopkins University Press.
Landau, Martin. 1969. Redundancy, Rationality, and the Problem of Duplication and Overlap. Public Administration Review 29(4): 346–58.
Landau, Martin, and Donald Chisholm. 1995. The Arrogance of Optimism. Journal of Contingencies and Crisis Management 3(2): 67–80.
LaPorte, Todd R. 1994. A Strawman Speaks Up. Journal of Contingencies and Crisis Management 2(4): 207–11.
———. 1996. High Reliability Organizations: Unlikely, Demanding and At Risk. Journal of Contingencies and Crisis Management 4(2): 60–71.
———. 2006. Institutional Issues for Continued Space Exploration: High-Reliability Systems Across Many Operational Generations – Requisites for Public Credibility. In Critical Issues in the History of Spaceflight, edited by Steven J. Dick and Roger D. Launius, 403–27. Washington, DC: National Aeronautics and Space Administration.
LaPorte, Todd R., and Paula M. Consolini. 1991. Working in Practice but Not in Theory: Theoretical Challenges of “High-Reliability Organizations.” Journal of Public Administration Research and Theory 1(1): 19–48.
The Limits to Safety: A Symposium. 1994. Special issue, Journal of Contingencies and Crisis Management 2(4).
Logsdon, John M. 1976. The Decision to Go to the Moon: Project Apollo and the National Interest. Chicago: University of Chicago Press.
———, ed. 1999. Managing the Moon Program: Lessons Learned from Project Apollo. Monographs in Aerospace History 14. Washington, DC: National Aeronautics and Space Administration.
McCurdy, Howard E. 1993. Inside NASA: High Technology and Organizational Change in the U.S. Space Program. Baltimore: Johns Hopkins University Press.
———. 2001. Faster, Better, Cheaper: Low-Cost Innovation in the U.S. Space Program. Baltimore: Johns Hopkins University Press.
McDonald, Henry. 2005. Observations on the Columbia Accident. In Organization at the Limit: Lessons from the Columbia Disaster, edited by William H. Starbuck and Moshe Farjoun, 336–46. Malden, MA: Blackwell.
Meyer, Marshall W., and Lynne G. Zucker. 1989. Permanently Failing Organizations. Newbury Park, CA: Sage Publications.
Michael, Donald N. 1973. On Learning to Plan And Planning to Learn. San Francisco: Jossey-Bass.
Murray, Charles, and Catherine Bly Cox. 1989. Apollo: The Race to the Moon. New York: Simon & Schuster.
Perin, Constance. 2005. Shouldering Risks: The Culture of Control in the Nuclear Power Industry. Princeton, NJ: Princeton University Press.
Perrow, Charles. 1986. Complex Organizations: A Critical Essay. New York: McGraw-Hill.
———. 1994. The Limits of Safety: The Enhancement of a Theory of Accidents. Journal of Contingencies and Crisis Management 2(4): 212–20.
———. 1999. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press.
Petroski, Henry. 1992. To Engineer Is Human: The Role of Failure in Successful Design. New York: Vintage Books.
Pierson, Paul. 2004. Politics in Time: History, Institutions, and Social Analysis. Princeton, NJ: Princeton University Press.
Presidential Commission on the Space Shuttle Challenger Accident (Rogers Commission). 1986. Report to the President by the Presidential Commission on the Space Shuttle Challenger Accident. Washington, DC: Government Printing Office.
Reason, James. 1997. Managing the Risks of Organizational Accidents. Aldershot: Ashgate.
Rijpma, Jos A. 1997. Complexity, Tight-Coupling and Reliability: Connecting Normal Accidents Theory and High Reliability Theory. Journal of Contingencies and Crisis Management 5(1): 15–23.
Roberts, Karlene H., ed. 1993. New Challenges to Understanding Organizations. New York: Macmillan.
Roberts, Karlene H., Peter Madsen, Vinit Desai, and Daved Van Stralen. Forthcoming. A High Reliability Health Care Organization Requires Constant Attention to Organizational Processes. Quality and Safety in Health Care.
Rochlin, Gene I. 1996. Reliable Organizations: Present Research and Future Directions. Journal of Contingencies and Crisis Management 4(2): 55–59.
Rochlin, Gene I., and Alexandra von Meier. 1994. Nuclear Power Operations: A Cross-Cultural Perspective. Annual Review of Energy and the Environment 19: 133–87.
Sagan, Scott D. 1993. The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton, NJ: Princeton University Press.
Schulman, Paul R. 1980. Large-Scale Policy-Making. New York: Elsevier.
———. 1993. The Negotiated Order of Organizational Reliability. Administration & Society 25(3): 353–72.
———. 2005. The General Attributes of Safe Organizations. Quality and Safety in Health Care 13(2): 39–44.
Selznick, Philip. 1957. Leadership in Administration: A Sociological Interpretation. Berkeley: University of California Press.
Simon, Herbert A. 1997. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations. 4th ed. New York: Free Press.
Smith, Denis, and Dominic Elliott, eds. 2006. Key Readings in Crisis Management: Systems and Structures for Prevention and Recovery. London: Routledge.
Starbuck, William H., and Moshe Farjoun, eds. 2005. Organization at the Limit: Lessons from the Columbia Accident. Malden, MA: Blackwell.
Starbuck, William H., and Frances J. Milliken. 1988. Challenger: Fine-Tuning the Odds Until Something Breaks. Journal of Management Studies 25(4): 319–40.
Swanson, Glen E., ed. 2002. Before This Decade Is Out: Personal Reflections on the Apollo Program. Gainesville: University Press of Florida.
Tamuz, Michal. 2001. Learning Disabilities for Regulators: The Perils of Organizational Learning in the Air Transportation Industry. Administration & Society 33(3): 276–302.
Turner, Barry A. 1978. Man-Made Disasters. London: Wykeham.
Vaughan, Diane. 1996. The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA. Chicago: University of Chicago Press.
———. 2006. NASA Revisited: Ethnography, Theory and Public Sociology. American Journal of Sociology 112(2): 353–93.
Weick, Karl E., and Kathleen M. Sutcliffe. 2001. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey-Bass.
Wildavsky, Aaron. 1988. Searching for Safety. New Brunswick, NJ: Transaction Books.
Wilson, James Q. 1989. Bureaucracy: What Government Agencies Do and Why They Do It. New York: Basic Books.
Wohlstetter, Roberta. 1962. Pearl Harbor: Warning and Decision. Stanford, CA: Stanford University Press.
Wolfe, Tom. 2005. The Right Stuff. New York: Black Dog/Leventhal.