Failing to Learn and Learning
to Fail (Intelligently):
How Great Organizations Put Failure
to Work to Innovate and Improve
Mark D. Cannon and Amy C. Edmondson
Organizations are widely encouraged to learn from their failures, but it is something most
find easier to espouse than to effect. This article synthesizes the authors’ wide research
in this field to offer a strategy for achieving the objective. Their framework relates technical
and social barriers to three key activities – identifying failure, analyzing failure and
deliberate experimentation – to develop six recommendations for action. They suggest
that these be implemented as an integrated set of practices by leaders who can ‘walk the
talk’ and work to shift the managerial mindset in a way that redefines failure away from its
discreditable associations, and views it instead as a critical first step in a journey of discovery
and learning.
© 2005 Elsevier Ltd. All rights reserved
The idea that people and the organizations in which they work should learn from failure has
considerable popular support – and even seems obvious – yet organizations that systematically
learn from failure are rare. This article provides insight into what makes learning from failure so
difficult to put into practice – that is, we address the question of why organizations fail to learn
from failure.
We also note that very few organizations experiment effectively – an activity that necessarily
generates failures while trying to discover successes – to maximize the opportunity for learning
from failure and minimize its cost. In short, we argue that organizations should not only learn from
failure – they should learn to fail intelligently as a deliberate strategy to promote innovation and
improvement. In this article, we identify the barriers embedded in both technical and social systems
that make such intelligent use of failure rare in organizations, and we offer recommendations for
managers seeking to improve their organization's ability to learn from failure.

Long Range Planning 38 (2005) 299–319
0024-6301/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
Research foundations and core ideas
Over the past decade or so, our research has revealed impediments to organizational learning from
failure on multiple levels of analysis. The first author has investigated individuals’ psychological
responses to their own failures, demonstrating the aversive emotions people experience and how
that inhibits learning. The second author has identified group and organizational factors that limit
learning from failure in teams and organizations. We have worked together for a number of years to
conceptualize and develop recommendations for how to enable organizational learning from
failure, drawing from our own and others’ research. In this article, we hope to provoke reflection
and point to possibilities for managerial action by synthesizing diverse ideas and examples that
illuminate both the challenges and advantages of learning from failure.
We have three core aims. First, we aim to provide insights about what makes organizational
learning from failure difficult, paying particular attention to what we see as a lack of understanding
of the essential processes involved in learning from failure in a complex organizational system such
as a corporation, hospital, university, or government agency.
Second, drawing from our own field research conducted over the past decade as well as from
additional sources, we develop a model of three key processes through which organizations can
learn how to learn from failure. We argue that organizational learning from failure is feasible, but
that it involves skillful management of three distinct but interrelated processes: identifying failure,
analyzing failure, and deliberate experimentation. Managed skillfully, these processes help managers
take advantage of the lessons that failures offer, which otherwise tend to be ignored or suppressed in
most organizations.
Third, we argue that most managers underestimate the power of both technical and social barriers
to organizational learning from failure, leading to an overly simplistic criticism of organizations and
managers for not exploiting learning opportunities. Although numerous prior writings have
advocated learning from failure, there is little advice as to how to overcome the barriers to making this
happen.

In summary, the intended contribution of this article is to explicate the challenges of
organizational learning from failure and to build on this explanation to design new strategies for
action. Our strategies are communicated through a framework that relates two types of barriers to
three key activities to develop six areas for action. Taken together, our framework, examples and
recommendations are intended to provide a starting point for concrete managerial action.
Organizational failure defined
Failure, in organizations and elsewhere, is deviation from expected and desired results. This
includes both avoidable errors and the unavoidable negative outcomes of experiments and risk
taking. We define failure broadly to include both large and small failures in domains ranging from
the technical (a flaw in the design of a new machine) to the interpersonal (such as a failure to give
feedback to an employee with a performance problem). Drawing from our own and others’
research, we suggest that an organization’s ability to learn from failure is best measured by how it
deals with a range of large and small outcomes that deviate from expected results rather than
focusing exclusively on how it handles major disasters. Deviations from expected results can be
positive or negative, and even positive deviations present opportunities for learning. However, we
focus on negative surprises because of the unique psychological and organizational challenges
associated with learning from them.
Small failures versus large failures
Large and well-publicized organizational failures – such as the Columbia and Challenger Shuttle
tragedies, the Colorado South Canyon firefighter deaths, the fatal drug error that killed a Boston
Globe correspondent at Boston's Dana Farber Hospital, and the Parmalat and Enron accounting
scandals – argue for the necessity of learning from failure. Recognizing the need to understand and
learn from consequential incidents such as these, executives and regulators often establish task forces
or investigative bodies to uncover and communicate the causes and lessons of highly visible failures. By
their nature, many such efforts will come too late for the goal of organizational learning from failure.
The multiple causes of large failures are usually deeply embedded in the organizations where the
failures occurred, have been ignored or taken for granted for years, and rarely are simple to correct.
An important reason that most organizations do not learn from failure may be their lack of
attention to small, everyday organizational failures, especially as compared to the investigative
commissions or formal ‘after-action reviews’ triggered by large catastrophic failures. Small failures
are often the ‘early warning signs’ which, if detected and addressed, may be the key to avoiding
catastrophic failure in the future.
Our research in organizational contexts ranging from the hospital operating room to the
corporate board room suggests that an intelligent process of organizational learning from failure
requires proactively identifying and learning from small failures. Small failures are often overlooked
because at the time they occur they appear to be insignificant minor mistakes or isolated anomalies,
and thus organizations fail to make timely use of these important learning opportunities. We find
that when small failures are not widely identified, discussed and analyzed, it is very difficult for
larger failures to be prevented.
Barriers to organizational learning from failure
Learning from failure is a hallmark of innovative companies but, as noted above, is more common
in exhortation than in practice. Most organizations do a poor job of learning from failures, whether
large or small.
In our research, we found that even companies that had invested significant money
and effort into becoming ‘learning organizations’ (with the ability to learn from failure) struggled
when it came to the day-to-day mindset and activities of learning from failure.
Instead, organizations’ fundamental attributes usually conspire to make a rational process of
diagnosing and correcting causes of failures difficult to execute. A prominent tradition in
managerial research examines the importance of considering both social and technical attributes of
organizations as systems. Recognizing that organizations are simultaneously social systems and
technical systems, management researchers have long considered the need to examine how features
of tasks and technologies, together with social, psychological and structural factors, shape
organizational outcomes.
We draw from this basic framework to categorize barriers to
organizational learning from failure into technical and social causes. We then describe three
specific learning processes through which these barriers can be overcome.
Barriers embedded in technical systems
Research on learning has shown that limitations in human intuition and ‘sense-making’ can lead
people to draw false conclusions that inhibit both individual and collective learning. Technical
barriers to learning from failure thus include a lack of the basic scientific ‘know how’ to be able to
draw inferences from experiences systematically, as well as the presence of complex systems or
technologies that are inherently difficult to understand.
In sum, when diagnosing cause-effect
relationships is technically difficult, learning from failure will necessarily be challenging as well.
Task design can obscure failures. For example, excess work-in-process (WIP) inventory slows
the discovery of manufacturing process errors. By the time a large batch of defective inventory
reaches the next manufacturing step, the error has been repeated many times, leading to far more
rework than if the error had been caught immediately. A central insight of lean manufacturing,
therefore, is the redesign of tasks to make failures transparent by reducing WIP inventory through
smaller and smaller batch sizes.
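The batch-size logic can be made concrete with a toy model (ours, not the article's): assume a process error begins at some unit and is caught only when the batch containing that unit is inspected at the next step, so every unit from the error's onset to the end of its batch must be reworked.

```python
# Illustrative sketch, not from the article: with hard batch boundaries,
# the rework caused by a single undetected error scales with batch size.

def units_needing_rework(batch_size: int, error_unit: int) -> int:
    """Number of defective units produced before the error is caught.

    The error starts at `error_unit` (0-indexed) and is detected only when
    that unit's batch reaches the next manufacturing step for inspection.
    """
    # The batch containing `error_unit` finishes at the next batch boundary.
    batch_end = (error_unit // batch_size + 1) * batch_size
    return batch_end - error_unit

if __name__ == "__main__":
    # Worst case: the error appears at the start of a batch.
    for b in (100, 10, 1):
        print(f"batch size {b:>3}: up to {units_needing_rework(b, 0)} units to rework")
```

Shrinking the batch size from 100 to 1 shrinks the worst-case rework from 100 units to 1, which is exactly why lean manufacturing treats small batches as a failure-detection mechanism, not just an inventory-cost measure.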
Technical barriers to gleaning failure’s lessons include an inadequate understanding of the
scientific method and an inability to engage in the following aspects of rigorous analysis: problem
diagnosis, experimental design, systematic analysis of qualitative data, statistical process control,
and statistical analysis.
Barriers embedded in social systems
Social barriers to learning from failure start with the strong psychological reactions that most
people have to the reality of failure. Being held in high regard by other people, especially those with
whom one interacts in an ongoing manner, is a strong fundamental human desire, and most people
tacitly believe that revealing failure will jeopardize this esteem. Even though people may learn from
and appreciate others’ disclosures of failure, positive impressions of the others in question may be
eroded in subtle ways through the disclosure process. Thus, most people have a natural aversion to
disclosing or even publicly acknowledging failure.
Even outside the presence of others, people have an instinctive tendency to deny, distort, ignore,
or disassociate themselves from their own failures, a tendency that appears to have deep
psychological roots.
The fundamental human desire to maintain high self-esteem is accompanied
by a desire to believe that we have a reasonable amount of control over important personal and
organizational outcomes. Psychologists have argued that these desires give rise to 'positive
illusions' – unrealistically positive views of the self, accompanied by illusions of control – that in
fact enable people to be energetic and happy and to avoid depression. Some even argue that positive
illusions are a hallmark of mental health.
However, the positive illusions that boost our self-
esteem and sense of control and efficacy may be incompatible with an honest acknowledgement of
failure, and thus, while promoting happiness, can inhibit learning.
Managers have an added incentive to disassociate themselves from failure because most
organizations reward success and penalize failure. Thus, holding an executive or leadership position
in an organization does not imply an ability to acknowledge one’s own failures.
Finkelstein’s in-depth investigation of major failures at over 50 companies suggested that the
opposite might be the case:
Ironically enough, the higher people are in the management hierarchy, the more they tend to
supplement their perfectionism with blanket excuses, with CEOs usually being the worst of all. For
example, in one organization we studied, the CEO spent the entire forty-five-minute interview
explaining all the reasons why others were to blame for the calamity that hit his company.
Regulators, customers, the government, and even other executives within the firm – all were
responsible. No mention was made, however, of personal culpability.
Organizational structures, policies and procedures, along with senior management behavior, can
discourage people from identifying and analyzing failures and from experimenting. Most
organizational cultures have little tolerance for – and punish – failure. A natural consequence of
punishing failures is that employees learn not to identify them, let alone analyze them, or to
experiment if the outcome might be uncertain. Even in more tolerant organizations, most managers
do not reward behaviors which acknowledge failure by offering raises, promotions, or other privileges.
Next, when failures are identified, social factors inhibit the constructive discussion and analysis
through which shared learning occurs. Most managers do not have strong skills for handling the hot
emotions that often surface – in themselves or others – in such sessions. Thus, discussions that
attempt to unlock the potential learning from the failure can easily degenerate into opportunities
for scolding, finger-pointing or name-calling. Public embarrassment or private derision can leave
participants with ill feelings and strained relationships rather than learning. For this reason,
effective debriefing and learning from failure requires substantial interpersonal skill.
The unfortunate reality for most organizations is that the barriers to learning from failure
described above are all but hard-wired into social systems, and greatly reduce the ability of most
organizations to learn from failure. Thus, not only do few organizations systematically capture
failure’s lessons, most managers lack a clear understanding of what a proactive process of learning
from failure looks like.
Without a clear model of what it takes to learn from failure, organizations are at a disadvantage
in facing these barriers. While fully acknowledging the magnitude of the challenge, we suggest that
breaking the learning process down into more tangible component activities greatly enhances the
likelihood of gleaning failures’ lessons. In the next section, we identify and explain three distinct
processes through which effective organizations can proactively learn from failure.
Three processes for organizational learning from failure
Learning from failure is as much a process as an outcome. Identifying the component activities
through which this process can occur is an initial step in making it happen. We therefore offer three
core organizational activities through which organizations learn from failure: (1) identifying failure,
(2) analyzing failure, and (3) deliberate experimentation. They are presented in order of increasing
challenge – both organizationally and in terms of the technical and interpersonal skills required.
Breaking this encompassing organizational learning process into narrower component parts
suggests a strategy for building new competencies that starts with the least challenging process of
identifying failure and builds up to the more challenging one of deliberate experimentation. In this
section, we further describe these activities and provide illustrations of organizations that have
successfully enacted them. These illustrations were deliberately collected from multiple industries
and organizations so as to inform practitioners and researchers across diverse contexts.
Identifying failure
Proactive and timely identification of failures is an essential first step in the process of learning from
them. One of the revolutions in manufacturing – the drive to reduce inventory to the lowest
possible levels – was stimulated as much by the desire to make problems and errors quickly visible as
by the desire to avoid other inventory-associated costs. As Hayes and his colleagues have noted,
surfacing errors before they are compounded, incorporated into larger systems, or made
irrevocable, is an essential step in achieving high quality.
Indeed, one of the tragedies in organizational learning is that catastrophic failures are often
preceded by smaller failures that were not identified as being worthy of examination and learning.
In fact, these small failures are often the key ‘early warning signs’ that can provide the wake up call
needed to avert disaster further down the road. Social system barriers are often the key driver of this
kind of problem. Rather than acknowledge and address a small failure, individuals have a tendency
to deny the failure, distort the reality of the failure, or cover it up, and groups and organizations
have the tendency to suppress awareness of failures.
The tendency to ignore failure can allow failures to be repeated, developing a smaller failure into
a bigger one. For example, Finkelstein presents Jill Barad at Mattel as an illustration of failing to
acknowledge and learn from mistakes in a timely manner. In Mattel’s ill-fated acquisition of the
Learning Company, Barad first overlooked the problems that the organization was having prior to
the acquisition. An opportunity to acknowledge the failure came when the third quarter 1999
earnings turned out to be a loss of $105 million, rather than a profit of $50 million as she expected.
However, rather than address the failure, she remained optimistic and predicted significant profits
for the next quarter; instead, there was a loss of $184 million. Once again, rather than acknowledge
the failure and learn from it, she repeated the same mistake for the next two quarters as well, thus
making the same mistake for a total of four quarters.
Similarly, an examination of the failed HIH Insurance Group in Australia reveals that
organizational collapse often unfolds in a set of phases, including an ‘early warning sign’ phase in
which leaders typically do not openly identify or respond constructively to failures. Rather than
acknowledge failure and respond appropriately, management acted to conceal failure from the
board of directors and others who might have assisted in addressing the problems more
effectively. Likewise, a study of the failure of T. Eaton Co. Ltd. (once Canada’s largest retailer and
the world’s largest privately held department store chain) concluded that the company’s inability to
identify a series of failures in a timely manner contributed to the company’s demise.
By contrast, the CEO of a mechanical contractor recognized the value of exposing failure and
publicizing it in order to help employees learn from each other and not repeat the same mistake.
The CEO:
pulled a $450 ‘mistake’ out of the company’s dumpster, mounted it on a plaque, and named it the
‘no-nuts award’ – for the missing parts. A presentation ceremony followed at the company barbecue.
‘You can bet no one makes that mistake any more,’ the CEO says. ‘The winner, who was initially
embarrassed, now takes pride in the fact that his mistake has saved this company a lot of money.’
Examples of systematically identifying failures
Overcoming the psychological barriers to identifying failure requires courage to face the unpleasant
truth. But the key organizational barrier to identifying failure has mostly to do with the
inaccessibility of the data necessary to identify failures. To overcome this barrier, organizational
leaders must take the initiative to develop systems and procedures that make available the data
necessary to identify and learn from failure. To illustrate, Dr. Kim Adcock of Kaiser Permanente
proactively collected and organized data to identify failure of physicians in reading mammograms.
Due to inherent difficulties in reading mammograms accurately, the medical profession has come
to expect a 10-15% error rate, even among expert readers. Consequently, discovering that a reader
has missed one or even several tumors is not necessarily indicative of that reader’s diagnostic ability
and may not provide much incentive for learning from failure. By contrast, when Dr. Adcock
became radiology chief at Kaiser Permanente Colorado, he utilized the longitudinal data available
in the HMO’s records to proactively identify failure and produce detailed, systematic feedback
including bar charts and graphs for each individual x-ray reader. For the first time, each reader
could learn whether he or she was falling near or outside of the acceptable range of errors. Dr.
Adcock also provided readers with the opportunity to return to the misread x-rays so they could
investigate why they missed a particular tumor and learn not to make the same mistake again.
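One way longitudinal data like Dr. Adcock's could be used to flag readers who fall outside the profession's expected 10-15% band is sketched below. This is our illustration, not Kaiser's actual method: the 12.5% midpoint and the normal approximation to the binomial are assumptions we introduce for the example.

```python
import math

def outside_expected_range(misses: int, reads: int,
                           expected_rate: float = 0.125, z: float = 1.96) -> bool:
    """Flag a reader whose observed miss rate differs from the expected rate
    by more than z standard errors (normal approximation to the binomial).

    The 12.5% default is our assumed midpoint of the 10-15% expected band.
    """
    observed = misses / reads
    # Standard error of a proportion estimated from `reads` trials.
    std_err = math.sqrt(expected_rate * (1 - expected_rate) / reads)
    return abs(observed - expected_rate) > z * std_err

if __name__ == "__main__":
    print(outside_expected_range(50, 200))  # 25% miss rate: True, outside the band
    print(outside_expected_range(26, 200))  # 13% miss rate: False, within the band
```

The point of the comparison against a baseline rate, rather than against zero errors, is the one the article makes: a missed tumor is expected occasionally even from expert readers, so only systematic deviation from the norm signals a failure worth analyzing.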
On a larger scale, Electricite De France, which operates 57 nuclear power plants, provides an
example of identifying and learning from potential failures. The organization tracks each plant for
anything even slightly out of the ordinary and has a policy of quickly investigating and publicly
reporting any anomalies throughout the entire system so that the whole system can learn.
Feedback seeking is also an effective way of identifying many types of failures. Feedback from
customers, employees and other sources can expose failures, including communication breakdowns
as well as failure to meet goals or satisfy customer requirements. Proactively seeking feedback from
customers may be necessary in order for manufacturers and service providers to identify and
address failures in a timely manner.
For example, only five to ten percent of dissatisfied customers choose to complain following
service failure; instead, most simply switch providers. This is one of the reasons service companies
fail to learn from failures and therefore lose customers. Service management researchers Tax and
Brown cite General Electric and United Parcel Service as two organizations that proactively seek
data that will help them identify failures. General Electric (GE) places an 800 number directly on
each of its products and encourages customers to inform the company of any problems. GE has an
Answer Center that is open twenty-four hours a day, 365 days a year, receiving approximately 3
million customer calls a year.
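The five-to-ten-percent complaint rate quoted above implies that complaint counts dramatically understate true dissatisfaction, which is why proactive feedback seeking matters. A back-of-the-envelope sketch (ours, with a hypothetical complaint count):

```python
# Illustrative arithmetic based on the 5-10% complaint rate cited above:
# observed complaints understate dissatisfaction by a factor of 10 to 20.

def estimated_dissatisfied(complaints: int, complaint_rate: float) -> int:
    """Estimate total dissatisfied customers from the number who complained."""
    return round(complaints / complaint_rate)

if __name__ == "__main__":
    complaints = 300  # hypothetical month of logged service complaints
    low = estimated_dissatisfied(complaints, 0.10)   # if 10% of the dissatisfied complain
    high = estimated_dissatisfied(complaints, 0.05)  # if only 5% complain
    print(f"{complaints} complaints suggest {low}-{high} dissatisfied customers")
```

Three hundred logged complaints would thus suggest 3,000 to 6,000 dissatisfied customers, most of whom simply switch providers without ever surfacing the failure.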
United Parcel Service (UPS) provides an example of how to seek feedback from within the
company. The company has built a half hour per week into the schedule of each of its drivers for
receiving their feedback and answering questions. These simple techniques exemplify methods of
identifying failure in a timely way so that the organizations can learn, respond quickly, and retain
customers. At the same time, these techniques are not easy to implement. Employees, consciously or not,
may actively avoid opportunities to expose and learn about their failures. Effective identification of
failure entails exposing failures as early as possible, to allow learning in an efficient and cost effective
way. This often requires a proactive effort on the part of managers to surface available data on
failures and use it in a way that promotes learning.
Failing to identify failures
A recent tragic example of the consequences of delayed and minimized identification of failure can
be found in the Columbia disaster. As discussed in the Columbia Accident Investigation Board’s
report, NASA managers spent 16 days downplaying the possibility that foam strikes on the left side
of the shuttle represented a serious problem – a true failure – and so did not view the events as
a trigger for conducting detailed analyses of the situation. Instead the strikes were deemed ordinary
events, within the boundaries of past experience, an interpretation that would later seem absurd
given the large size of the debris. The shared belief that there was little they could have done
contributed to a lack of proactive analysis and exploration of possible remedies. Sadly, post-event
analyses have suggested the possibility that fruitful actions could have been taken had the failure
been identified and explored early in this window of opportunity.
Because psychological and organizational factors conspire to reduce failure identification,
a fundamental reorientation in which individuals and groups are motivated to engage in the
emotionally challenging task of seeking out failures is needed. Obviously, organizations that have
a habit of ‘shooting the messenger’ who identifies and reveals a failure will discourage this process.
Cultures that promote failure identification
Creating an environment in which people have an incentive – or at least do not have
a disincentive – to identify and reveal failures is the job of leadership.
For example, the
Children’s Hospital in Minneapolis developed a ‘blameless reporting’ system to encourage
employees not only to reveal medical errors right away, but also to share additional information
that could be used in analyzing causes of the error.
Similarly, the US Air Force specifically
motivates speaking up early by penalizing pilots for not reporting errors within 24 hours. Errors
reported immediately are not penalized; those not reported but discovered later are treated severely.
In sum, pervasive social barriers, psychological and organizational, discourage reporting failure,
just as technical barriers such as system complexity and causal ambiguity inhibit recognizing failure.
Analyzing failure
It hardly needs to be said that organizations cannot learn from failures if people do not discuss and
analyze them. Yet this remains an important insight. The learning that is potentially available may
not be realized unless thoughtful analysis and discussion of failure occurs. For example, for Kaiser’s
Dr. Adcock, it is not enough just to know that a particular physician is making more than the
acceptable number of errors. Unless deeper analysis of the nature of the radiologists’ errors is
conducted, it is difficult to learn what needs to be corrected. On a larger scale, the US Army is
known for conducting After Action Reviews that enable participants to analyze, discuss and learn
from both the successes and failures of a variety of military initiatives. Similarly, hospitals use
‘Morbidity and Mortality’ (M&M) conferences (in which physicians convene to discuss significant
mistakes or unexpected deaths) as a forum for identifying, discussing and learning from failures.
This analysis can only be effective if people speak up openly about what they know and if others
listen, enabling a new understanding of what happened to emerge in the assembled group. Many of
these vehicles for analysis only address substantial failures, however, rather than identifying and
learning from smaller ones.
An example of effective analysis of failure is found in the meticulous and painstaking analysis that
goes into understanding the crash of an airliner. Hundreds of hours may go into gathering and
analyzing data to sort out exactly what happened and what can be learned. Compare this kind of
analysis to what takes place in most organizations after a failure.
As noted above, social systems tend to discourage this kind of analysis. First, individuals
experience negative emotions when examining their own failures and this can chip away at self-
confidence and self-esteem. Most people prefer to put past mistakes behind them rather than revisit
and unpack them for greater understanding.
Second, conducting an analysis of a failure requires a spirit of inquiry and openness, patience and
a tolerance for ambiguity. However, most managers admire and are rewarded for decisiveness,
efficiency and action rather than for deep reflection and painstaking analysis.
Third, psychologists have spent decades documenting heuristics and psychological biases and
errors that reduce the accuracy of human perception, sense making, estimation, and attribution.
These can hinder the human ability to analyze failure effectively.
People tend to be more comfortable attending to evidence that enables them to believe what they
want to believe, denying responsibility for failures, and attributing the problem to others or to ‘the
system’. We would prefer to move on to something more pleasant. Rigorous analysis of failure
requires that people, at least temporarily, put aside these tendencies to explore unpleasant truths
and take personal responsibility. Evidence of this problem is provided by a study of a large
European telecoms company, which revealed that very little learning occurred from a set of large
and small failures over a period of twenty years. Instead of realistic and thorough analysis, managers
tended to offer ready rationalizations for the failures. Specifically, managers attributed large failures
to uncontrollable events outside the organization (e.g., the economy) and to the intervention of
outsiders. Small failures were interpreted as flukes, the natural outcomes of experimentation, or as
illustrations of the folly of not adhering strictly to the company’s core beliefs.
Similarly, we have observed failed consulting relationships in our field research in which the
consultants simply blamed the failure on the client, concluding that the client was not
really committed to change, or that the client was defensive or difficult. By contrast, a few highly
learning-oriented consultants were able to engage in discussion and analysis that involved raising
questions about how they themselves contributed to the problem. In these analytic sessions, the
consultants raised questions such as ‘Are there things I said or did that contributed to the defensiveness
of the client?’, ‘Was my presentation of ideas and arguments clear and persuasive?’ or ‘Did my analysis
fall short in some way that led the client to have legitimate doubts?’ Raising such questions increases
the chances of the consultants learning something useful from the failed relationship, but requires
profound personal curiosity to learn what the answers might be. Blaming the client is far
simpler, more comfortable and more common.
Raising questions about their contribution to failure increases
consultants’ learning from failed relationships, but requires profound
personal curiosity. Blaming clients is much more comfortable
Recent research in the hospital setting by Tucker and Edmondson shows that health care
organizations typically fail to analyze or make changes even when people are well aware of failures.
Whether medical errors or simply problems in the work process, few hospital organizations dig
deeply enough to understand and capture the potential learning from failures. Processes, resources,
and incentives to bring multiple perspectives and multiple minds together to carefully analyze what
went wrong and how to prevent the occurrence of similar failures in the future are lacking in most hospitals.
Thus formal processes or forums for discussing, analyzing and applying the lessons of failure
elsewhere in the organization are needed to ensure that effective analysis and learning from failure
occurs. Such groups are most effective when people have technical skills, expertise in analysis, and
diverse views, allowing them to brainstorm and explore different interpretations of a failure’s causes
and consequences. Because this usually involves the potential for conflict that can escalate, people
skilled in interpersonal or group process, or expert outside facilitators, can help keep the process on track.
Next, skill in managing a group process of analyzing a failure with a spirit of inquiry, together with
sufficient understanding of the scientific method, is an essential input to learning from failure as an
organization. Without a structure of rigorous analysis and deep probing, individuals tend to leap
prematurely to unfounded conclusions and misunderstand complicated problems. Some
understanding of system dynamics, the ability to see patterns, statistical process control, and
group dynamics can be very helpful.
To illustrate how this works in real organizations, we review
a few case study examples below.
Examples of systematically analyzing failure
Edmondson et al. report how Julie Morath, the Chief Operating Officer at the Minneapolis
Children’s Hospital, implemented processes and forums for the effective analysis of failures, both
large and small. She bolstered her own technical knowledge of how to probe more deeply into the
causes of failure in hospitals by attending the Executive Sessions on Medical Errors and Patient
Safety at Harvard University, which emphasized that, rather than being the fault of a single
individual, medical errors tend to have multiple, systemic causes. In addition, she made structural
changes within the organization to create a context in which failure could be identified, analyzed
and learned from.
To create a forum for learning from failure, Morath developed a Patient Safety Steering
Committee (PSSC). Not only was the PSSC proactive in seeking to identify failures, it ensured that
all failures were subject to analysis so that learning could take place. For example, the PSSC
determined that ‘Focused Event Studies’ would be conducted not only after serious medical
accidents but even after much smaller scale errors or ‘near misses.’ These formal studies were
forums designed explicitly for the purpose of learning from mistakes by probing deeply into their
causes. In addition, cross-functional teams, known as ‘Safety Action Teams’, spontaneously formed
Long Range Planning, vol 38 2005 307
in certain clinical areas to understand better how failures occurred, thereby proactively improving
medical safety. One clinical group developed something they called a ‘Good Catch Log’ to record
information that might be useful in better understanding and reducing medical errors. Other teams
in the hospital quickly followed their example, finding the idea compelling and practical.
In the pharmaceutical industry, about 90 percent of newly developed drugs fail in the
experimental stage, and thus drug companies have plenty of opportunities to analyze failure. Firms
that are creative in analyzing failure benefit in two ways. First, analyzing a failed drug sometimes
reveals that the drug may have a viable alternate use. For example, Pfizer’s Viagra was originally
designed to be a treatment for angina, a painful heart condition. Similarly, Eli Lilly discovered that
a failed contraceptive drug could treat osteoporosis and thus developed their one-billion-dollar-a-
year drug, Evista, while Strattera, a failed antidepressant, was discovered to be an effective treatment
for hyperactivity/attention deficit disorder.
Second, a deep probing analysis can sometimes save an apparently failed drug for its original
purposes, as is seen in the case of Eli Lilly’s Alimta. After this experimental chemotherapy drug
failed clinical trials, the company was ready to give up. The doctor conducting the failed Alimta
trials, however, decided to dig more deeply into the failure, utilizing a mathematician whose job at
Lilly was explicitly to investigate failures. Together they discovered that the patients who suffered
negative effects from Alimta typically had a deficiency in folic acid. Further investigation
demonstrated that simply giving patients folic acid along with Alimta solved the problem, thereby
rescuing a drug that the organization was ready to discard.
Failure analysis can reach beyond the company walls to include customers. Systematic analysis of
small failures in the form of customer breakdowns was instituted at Xerox using a network-based
system called Eureka. By capturing and sharing 30,000 repair tips, Xerox saves an estimated $100
million a year through service operations efficiencies. The Eureka analysis also provides important
information for new product design.
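The kind of network-based system described here can be sketched as a small shared repository of repair tips. Eureka’s actual design is not documented in this article, so the data structure, field names and keyword-search scheme below are purely illustrative assumptions:

```python
# Hypothetical sketch of a shared repair-tip log in the spirit of the
# Eureka system described above: technicians record symptom/fix pairs,
# and a keyword index lets others retrieve relevant fixes later.

from collections import defaultdict

class TipLog:
    def __init__(self):
        self.tips = []                   # (symptom, fix) records
        self.index = defaultdict(set)    # keyword -> set of tip ids

    def add(self, symptom, fix):
        tip_id = len(self.tips)
        self.tips.append((symptom, fix))
        for word in symptom.lower().split():
            self.index[word].add(tip_id)  # index every symptom keyword
        return tip_id

    def search(self, query):
        """Return fixes whose symptom shares any keyword with the query."""
        hits = set()
        for word in query.lower().split():
            hits |= self.index[word]
        return [self.tips[i][1] for i in sorted(hits)]

log = TipLog()
log.add("paper jam in duplex tray", "reseat the duplex roller clutch")
log.add("streaks on copies", "clean the corona wire")
print(log.search("intermittent paper jam"))  # → ['reseat the duplex roller clutch']
```

The point of such a structure is that each tip is captured once but reused many times, which is where the service-efficiency savings described above would come from.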
Analyzing employee and customer defections to capture the lessons
To help build an organization’s ability to analyze its own failures, outside sources of technical
assistance in analyzing failure can be engaged. For example, Frederick Reichheld at Bain and
Company has demonstrated the value of a deep, probing analysis of failure in the areas of customer
and employee defections. In one instance, the fact that most customers who defected from
a particular bank gave ‘interest rates’ as the reason for switching banks seemed to suggest that their
original bank’s interest rates were not competitive. However, his additional investigation
demonstrated that there were no significant differences in interest rates across the banks. Careful
probing through interviews indicated that many customers defected because they were irritated by
the fact that they had been aggressively solicited for a bank-provided credit card, and then had their
applications turned down. A superficial analysis of customer defection would have led to the
conclusion that the bank’s interest rates were not competitive. A deeper analysis led to an alternate
conclusion: the bank’s marketing department needed to do a better job of screening in advance the
customers to whom it promoted such cards.
The importance of such analysis is also illustrated by employee turnover at another company, where managers
became concerned when they observed high turnover among sales people and conducted an
investigation. Many of the employees gave ‘working too many hours’ as the reason for their
defection. Initially, it appeared that the turnover may not have been such a bad thing; after all,
who needs employees who are not committed to working hard? However, further data collection
revealed that many of the employees who quit were among their most successful salespeople, and
had subsequently found jobs that required, on average, 20 percent fewer hours. Once again, deeper
probing and analysis yielded a truer understanding of the situation.
Benefits of analyzing failure
In addition to the technical aspects of systematic analysis, discussing failures has important social
and organizational benefits. First, discussion provides an opportunity for others who may not have
been directly involved in the failure to learn from it. Second, others may bring new perspectives and
insights that deepen the analysis and help to counteract self-serving biases that may color the
perceptions of those most directly involved in the failure. After experiencing failure, people typically
attribute too much blame to other people and to forces beyond their control. If this tendency goes
unchecked, it reduces an organization’s ability to mine the key learning that could come from the failure.
.the value of learning from analyzing and discussing simple mistakes
is often overlooked
Lastly, the value of the learning that might result from analyzing and discussing simple mistakes
is often overlooked. Many scientific discoveries have resulted from those who were attentive to
simple mistakes in the lab. For example, researchers in one of the early German polymer labs
occasionally made the mistake of leaving a Bunsen burner lit over the weekend. Upon discovering
this mistake on Monday mornings, the chemists simply discarded the overcooked results and went
on with their day. Ten years later, a chemist in a polymer lab at DuPont made the same mistake.
However, rather than simply discarding the mistake, the DuPont chemist gave the result some
analysis and discovered that the fibers had congealed. This discovery was the first step toward the
invention of nylon. Had the German chemists paid similar attention to their minor failure, they might have
had a decade’s head start on nylon, potentially dominating the market for years.
These first two sections have dealt with inadvertent failures. If a firm can identify and analyse such
failures, and then learn from them, it may be able to retrieve some value from what has otherwise been
a negative ‘result’. But failure need not always be considered from a ‘defensive’ viewpoint. Our third
section describes an ‘offensive’ approach to learning from failure edeliberate experimentation. The
three activities presented in this article eidentifying failure, analysing failure and deliberate
experimentation eare not intended to be viewed as a sequential three-step process, but rather as
(reasonably) independent competencies for learning from failure. They can be sensibly examined
alongside each other, since each is easily inhibited by social and technical factors.
Deliberate experimentation
The third active process used by organizations to learn from failure is the most provocative. A
handful of exceptional organizations not only seek to identify and analyze failures, they actively
increase their chances of experiencing failure by experimenting. They recognize failure as
a necessary by-product of true experimentation, that is, experiments carried out for the express
purpose of learning and innovating. By devoting some portion of their energy to trying new things,
to find out what might work and what will not, firms certainly run the risk of increasing the
frequency of failure. But they also open up the possibility of generating novel solutions to problems
and new ideas for products, services and innovations. In this way, new ideas are put to the test, but
in a controlled context.
Experiments are understood to have uncertain outcomes and to be designed for learning. Despite
the increased rate of failure that accompanies deliberate experimentation, organizations that
experiment effectively are likely to be more innovative, productive and successful than those that do
not take such risks.
Similarly, other research has confirmed that research and development
teams that experimented frequently performed better than other teams.
Social systems can make deliberate experimentation difficult because most organizations reward
success, not failure. Purposefully setting out to experiment, thus generating and accepting some
failures alongside some successes, although reasonable, is difficult in a general business culture
where failures are stigmatized. Conducting experiments also involves acknowledging that the status
quo is imperfect and could benefit from change. A psychological bias known as the confirmation
trap, meaning that people tend to seek to confirm their views rather than to learn why they might be
wrong, makes planning ventures that could produce learning but that might very well fail
particularly difficult.
Deliberate experimentation requires that people not just assume their views
are correct, but actually put their ideas to the test and design (even small, very informal)
experiments in which their views could be disconfirmed.
Examples of effective experimentation
A good example of the ability to overcome these psychological barriers is provided by the
influential, award-winning design firm, IDEO. They communicate this perspective with slogans
such as ‘Fail often in order to succeed sooner’ and ‘Enlightened trial-and-error succeeds over the
planning of the lone genius.’
These sayings are accompanied by frequent small experiments, and
much good humor about the associated failures.
Similarly, PSS/World Medical encourages experimentation in a variety of ways and sometimes
even goes so far as to encourage employees to experiment with career moves. PSS/World Medical
has a ‘soft landing’ policy: if an employee tries out a new position, but does not succeed after a good
faith effort, the employee can have his or her former job back. This ‘soft landing’ policy is an
implicit recognition that experiments have uncertain outcomes and that people will be more willing
to experiment if the organization protects their interests.
Technical skills are critical in implementing a deliberate experimentation process. First, because
analyzing failure is part of this process, key individuals need skills in analyzing the results of
experiments. Second, rigor is needed to design experiments that will effectively confirm
or disconfirm hypotheses and so generate useful learning. Under some conditions, this can be
extremely challenging. For example, customer satisfaction at a large resort will be affected by
many interdependent aspects of the customer’s experience. If the resort experiments with
different possible innovations to enhance customer satisfaction, how do they determine which change produced any observed effect?
Designing experiments in complex, interdependent systems is challenging even for research
experts. In addition to knowledge of experimental design and analysis, people need resources to run
experiments in different parts of the organization and to capture the learning.
The 3M Corporation has been unusually successful in providing incentives and policies that
encourage deliberate experimentation. The company has earned a reputation for successful product
innovation by encouraging deliberate experimentation and by cultivating a culture that is tolerant
and even rewarding of failures; failures at 3M are seen as a necessary step in a larger process of
developing successful, innovative products. Now-legendary stories, such as that of Arthur Fry and the failed
super-adhesive that spawned the Post-it industry, are spread far and wide, both within and outside
the company. Setting goals, such as having 25 percent of a division’s revenues come from products
introduced within the last five years, means that divisions must continuously experiment to develop
new products.
Bank of America provides an interesting example of experimentation in the service setting.
Seeking to become more innovative, senior management decided to go ahead with deliberate,
intelligent experimentation in the branches, experiments that would inevitably affect and often be
visible to customers. Wishing to become an industry leader in innovation, the bank established
a program to develop a process and culture of innovation in two dozen real-life ‘laboratories’: fully
operating banking branches in which new product and service concepts, such as virtual tellers, were
being tested by employees (and customers).
Senior executives addressed organizational barriers by funding and developing an ‘Innovation &
Development Team’ to manage this process. A successful program entailed hiring individuals with
the technical research skills to address a number of complicated questions, such as: how to gauge
success of a concept; how to prioritize which concepts would be tested; how to run several
experiments at once; and how to keep the novelty factor itself from altering the experimental
outcome. Successful experiments, determined on the basis of consumer satisfaction or revenue
growth, were then recommended for a national rollout.
Senior management strongly supported experimentation . they
recognized [it] would necessarily produce failures along the way
Senior management strongly supported innovation and experimentation at these branches. For
example, they recognized that trying out innovative ideas would necessarily produce failures along
the way, so they targeted a failure rate of 30% as one that would indicate sufficient attempts at truly
novel ideas were being made. However, employee rewards were primarily based on indices
measuring routine performance (such as opening new customer accounts), and their personal
compensation often suffered when they spent time experimenting with new ideas, or when their
experiments failed. As a result, employees were reluctant to try out radical experiments until
management made changes to align reward systems with the organization’s espoused value of experimentation.
Factors promoting deliberate experimentation
Experimental research in social psychology by Lee et al. confirms this point: espoused goals of
increasing innovation through experimentation are not as effective when rewards penalize failures
as when rewards and values are aligned with the goal of promoting experimentation. As both field
and laboratory examples show, although experimentation is an essential activity underlying
innovation, it is both technically and socially challenging to implement intelligently. One of the
advantages of most forms of experimentation is that failures can take place off-line, in dry runs,
simulations, and other kinds of practice situations in which the failures are not costly. However,
even in these situations, interpersonal fears can lead to reluctance to take risks, limiting the
effectiveness of the experiments.
Moreover, some experiments must take place on-line, in real
settings, in which customers interact directly with the failures.
Putting failure to work to innovate and improve
Our basic premise is that, although the barriers to a systematic process of learning from failure in
organizations are deep-rooted and numerous, by breaking this process down into component
activities, organizations can slowly but surely improve their track record of learning from their own
failures. While catastrophic failures will always, and rightly, command attention, we suggest that
focusing on the learning opportunities of small failures can allow organizations and their managers
to minimize the inherently threatening nature of failure and to gain experience and momentum in this
learning process.
The previous sections analyzed the technical and social barriers to engaging in learning-from-
failure activities; this section builds on that analysis to develop a framework exploring what
organizations can do to overcome these barriers.
Table 1 summarizes this advice, relating the two types of barriers to the three critical activities for
learning from failure to suggest six actionable recommendations. The upper section lists the
technical system barriers to learning activities, and offers recommendations emphasizing training,
education, and the judicious use of technical expertise, while the lower section lists the social system
barriers, and presents recommendations for building psychological and organizational capabilities
for identifying failure, analyzing failure, and experimentation.
Recommendations on technical barriers
To overcome technical barriers, we first recommend helping employees to see that identifying
failure requires a proactive and skillful search, that human intuition is often insufficient to extract
the key learning from failure, and that intelligent experimental design is a critical tool for
innovation and learning. With this basic understanding, employees are better able to recognize
when they either need to receive more specialized training themselves or to engage the assistance of
someone else who has benefited from such training.
Recommendation 1: Overcoming technical barriers to identifying failure
Organizations are complex systems, often making small (and sometimes even large) failures difficult
to detect. Failures, as noted above, are deviations from the expected and desired; if a system has
many complex parts and interactions, such deviations can be ambiguous. The Columbia shuttle’s
initial failure exemplifies this phenomenon; it was not clear to those involved until much later that
the foam strike should indeed be identified as a failure.
The Columbia example also highlights the erroneous level of confidence people have in their
initial interpretations that nothing is really wrong. Enhancing an individual’s ability to identify
(especially small) failures requires training. For example, training in statistical process control (SPC)
is useful for identifying failure on an assembly line. Without SPC, people are at a disadvantage
Table 1. A Framework for Enabling Organizational Learning from Failure

Key processes in organizational learning from failure: identifying failures, analyzing failures, and experimentation.

Barriers embedded in Technical Systems:
- Identifying failures: complex systems make many small failures ambiguous and difficult to detect.
- Analyzing failures: a lack of skills and techniques to extract lessons from failures.
- Experimentation: lack of knowledge of experimental design.

Recommendations:
- R1 (identifying): Build information systems to capture and organize data, enabling detection of anomalies, and ensure availability of systems analysis expertise.
- R2 (analyzing): Structure After Action Reviews or other formal sessions that follow specific guidelines for effective analysis of failures, and ensure availability of data analysis expertise.
- R3 (experimentation): Identify key individuals for training in experimental design; use them as internal consultants to advise pilot projects and other line (operational) experiments.

Barriers embedded in Social Systems:
- Identifying failures: threats to self-esteem inhibit recognition of one’s own failures, and corporate cultures that ‘shoot the messenger’ limit reporting of failures.
- Analyzing failures: ineffective group process limits effectiveness of failure analysis; individuals lack efficacy for handling ‘hot’ issues.
- Experimentation: organizations may penalize failed experiments, inhibiting willingness to incur failure for the sake of learning.

Recommendations:
- R4 (identifying): Reinforce psychological safety through organizational policies such as blameless reporting systems, through training first-line managers in coaching skills, and by publicizing failures as a means of learning.
- R5 (analyzing): Ensure availability of experts in group dialogue and collaborative learning, and invest in developing these competencies in other employees.
- R6 (experimentation): Pick key areas of operations in which to conduct an experiment, and publicize results, positive and negative, widely within the company (Bank of America example). Set a target failure rate for experiments in service of innovation and make sure reward systems do not contradict this goal.
in discovering whether variation indicates that something is really wrong (a signal) or
whether it is just natural ‘noise’ in a process that is under control. Similarly, employees in
complicated and interdependent organizations will benefit from training in systems thinking and
scientific analysis. This enhances their ability to identify failure and pinpoint its source, and
especially to realize the critical role of small failures in creating large consequences in complex systems.
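The signal-versus-noise judgment that SPC training enables can be illustrated with a minimal p-chart sketch. The 3-sigma limits are the standard SPC convention; the sample size and defect counts below are invented:

```python
# Illustrative statistical process control (SPC): given daily defect
# counts from an assembly line, a p-chart flags days whose defect rate
# falls outside 3-sigma control limits (a "signal") versus days showing
# only ordinary variation ("noise").

def p_chart_signals(defects, sample_size):
    """Return indices of samples whose defect rate breaches 3-sigma limits."""
    n = sample_size
    p_bar = sum(defects) / (len(defects) * n)   # overall defect rate
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5    # std. error per sample
    ucl = p_bar + 3 * sigma                     # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)           # lower control limit
    return [i for i, d in enumerate(defects)
            if not lcl <= d / n <= ucl]

# 200 units inspected per day; day 6 shows an unusually high defect count.
daily_defects = [3, 4, 2, 5, 3, 4, 19, 3, 2, 4]
print(p_chart_signals(daily_defects, 200))  # → [6]
```

Points inside the limits are ordinary process noise; only excursions beyond them mark a failure worth analyzing, which is exactly the distinction untrained intuition tends to miss.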
Some failures can only be discerned when sufficient data are compiled and reviewed, as we
saw in the Kaiser mammogram example, in which higher-than-average error rates for an individual
physician constituted a failure, although single errors were not considered evidence of failure.
Thus failure identification may be enabled by effective information systems that facilitate collection
and analysis of otherwise dispersed experiences. Recall also that Electricité de France developed
a system to detect anomalies and feed the information back to operators. Likewise, GE’s 800
number and its prominent placement on products create the opportunity for data to be generated
by consumers, collected by GE, and reviewed once sufficient data have accumulated for meaningful analysis.
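The pooling of dispersed records until an anomaly becomes visible might look like the following sketch. The practitioner names, volume threshold and flagging rule are invented for illustration, not drawn from the Kaiser case:

```python
# Hypothetical anomaly detection over pooled records: individual error
# reports mean little alone, but once aggregated, a practitioner whose
# error rate runs well above the group average stands out. Small samples
# are excluded, echoing the point that single errors are not evidence
# of failure.

def flag_outliers(cases, errors, min_cases=50, factor=2.0):
    """cases/errors: dicts keyed by practitioner id.
    Flag anyone with enough volume whose error rate exceeds
    `factor` times the pooled average rate."""
    pooled_rate = sum(errors.values()) / sum(cases.values())
    return sorted(
        who for who, n in cases.items()
        if n >= min_cases and errors[who] / n > factor * pooled_rate
    )

cases  = {"dr_a": 400, "dr_b": 380, "dr_c": 90, "dr_d": 20}
errors = {"dr_a": 8,   "dr_b": 7,   "dr_c": 9,  "dr_d": 3}
print(flag_outliers(cases, errors))  # → ['dr_c']
```

Note that dr_d has the highest raw error rate but too few cases to count as a signal, while dr_c is flagged only because enough data have accumulated; this is the sense in which an information system, not intuition, identifies the failure.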
Recommendation 2: Overcoming technical barriers to analyzing failure
Most people tend not to recognize that they lack complete information or that their analysis is not
rigorous, and thus leap quickly to questionable conclusions while remaining confident that they are
correct. This tendency inhibits the extraction of the right lessons from failure. Figuring out which
aspects of a situation were contributing factors to something that did not go as expected is
a complex undertaking. For this reason, organizations need individuals with skills and techniques
for systematic analysis of complex organizational data. At the Minneapolis Children’s Hospital, the
patient safety effort included considerable care to ensure appropriate technical skills were in place to
gain the most appropriate lessons from each mishap, whether large or small.
Overcoming technical barriers does not require each and every employee to have the required
technical skills. The judicious use of a few well-placed technical experts and systems thinkers may be
enough to trigger more reliable identification of failure. At Children’s Hospital, safety experts were
brought in to help the hospital identify latent failures and a skilled facilitator ran every meeting
convened to analyze a given failure. Again, the mathematician that Eli Lilly hired to help
understand failures provides another example of how technical expertise can be built into the
organizational structure.
In contrast, following the Columbia launch, a simulation program designed by Boeing was used
to analyze the potential threat of the foam strikes on the mission. However, the technology had
several shortcomings for the task. The tool had not been calibrated for foam pieces greater than
three cubic inches in size, but the foam piece that struck the Columbia was 400 times bigger!
Further, the computer model simulated damage to a particular kind of tile, which was not the
type of tile on the area struck on the leading edge of the Shuttle’s wing. Had the right technical
experts, with the right tool, been able to work on the analysis, the outcome of Columbia’s final flight
might have been different.
Recommendation 3: Overcoming technical barriers to effective experimentation
To produce valuable learning, experiments must be designed effectively. However even PhD
laboratory researchers with years of experience can struggle to get an experimental design just
right. In addition, most organizational settings have only limited ability to isolate variables and
reduce ‘noise’, which makes designing experiments for organizational learning challenging. At its
most basic, designing experiments for learning requires careful thought as to what kinds of data
will be collected and how results of the experiment will be assessed. For example, Bank of
America examined financial and customer satisfaction metrics at its experimental bank
branches. The key is to consider possible outcomes in advance and know how they might be interpreted.
To produce valuable learning, experiments must be designed effectively.
The key is to consider in advance how all possible outcomes might be interpreted
Again, it is not necessary to make all employees experts in experimental methodology; it is more
important to know when help is needed from (internal or external) experts with sophisticated skills.
Organizations can overcome this barrier by hiring and supporting a few well-placed experts and
making their availability known to others. Thus, Bank of America handled this problem smartly by
developing the Innovation and Development Team and staffing it with experts who understood the
vulnerabilities associated with conducting research experiments in a real-world setting and how to
work around them.
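At its simplest, deciding in advance how an experiment’s outcome will be assessed can look like the following sketch. The satisfaction scores, decision rule and threshold are invented, and only loosely inspired by the pilot-branch setting described above:

```python
# Toy sketch of a pre-agreed decision rule for a field experiment:
# compare a satisfaction metric in pilot vs control branches using a
# simple two-sample z-style score, with the rollout/drop threshold
# fixed before the experiment runs.

from statistics import mean, stdev
from math import sqrt

def assess(control, pilot, threshold=2.0):
    """Return 'roll out', 'drop', or 'inconclusive' based on the
    standardized difference between pilot and control branch scores."""
    diff = mean(pilot) - mean(control)
    se = sqrt(stdev(pilot) ** 2 / len(pilot) +
              stdev(control) ** 2 / len(control))
    score = diff / se
    if score > threshold:
        return "roll out"
    if score < -threshold:
        return "drop"
    return "inconclusive"

control = [7.1, 6.8, 7.0, 6.9, 7.2, 7.0]   # per-branch satisfaction scores
pilot   = [7.6, 7.9, 7.7, 7.5, 7.8, 7.7]
print(assess(control, pilot))  # → roll out
```

The substance is not the statistics but the discipline: the metric and the threshold are agreed before results arrive, so a failed experiment is an answer rather than an embarrassment.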
Recommendations on social barriers
In addition to implementing the above recommendations to manage technical barriers, managers
must also deal with barriers due to social systems that are more subtle, pervasive, and difficult to
address. Even without explicit incentives against failure, many organizations have norms and
practices that are unfriendly to experimentation as well as to identifying, analyzing and learning
from failure. The next three recommendations tackle these issues directly.
Recommendation 4: Overcoming social barriers to identifying failure
To promote timely identification of failure, organizations must avoid ‘shooting the messenger’ and
instead put in place constructive incentives for speaking up. People must feel able to talk about the
failures of which they are aware, whether they are clear or ambiguous. To do this, leaders need to
cultivate an atmosphere of psychological safety to mitigate risks to self-esteem and others’
impressions. Developing psychological safety begins with the leader modeling the desired behaviors,
visibly demonstrating how they wish subordinates and peers to behave.
Leader modeling serves two significant purposes. First, to communicate expected and
appropriate behavior, it is important for leaders to ‘walk the talk.’ Second, leader modeling can
help subordinates learn how to enact these processes. Because these behaviors may be unfamiliar in
many organizations, having a model to observe can be very helpful in facilitating subordinate
learning. Leaders can model effectively by generating new ideas, disclosing and analyzing failure,
inviting constructive criticism and alternative explanations, and capturing and then utilizing the resulting lessons.
Finally, psychological safety cannot be implemented by top-down command. Instead, it is created
work group by work group through attitudes and activities of local managers, supervisors and
peers. As the second author has found in previous research, the development of managerial
coaching skills is one way to help build this type of learning environment. In addition,
organizational policies can either support or undermine the development of psychological safety.
For example, an organization-wide ‘blameless’ system for reporting errors, as used by Children’s
Hospital in Minneapolis, sends a cultural signal that it is truly safe to identify and reveal failures.
Perhaps one of the most difficult aspects of analyzing failure pertains to interpersonal dynamics,
the focus of our next recommendation.
Recommendation 5: Overcoming social barriers to analyzing failure
Developing an environment in which people feel safe enough to identify failure and speak up is
necessary to help ensure identification of failure, but insufficient to produce learning from failure.
Effective analysis of failure requires both time and space for analysis and skill in managing the
conflicting perspectives that may emerge. Some organizations provide for such time and space: the
military use ‘After Action Reviews’ and hospitals ‘Mortality and Morbidity’ conferences to analyze
In addition to putting such structures in place, leaders need to involve people with diverse
perspectives and skills in order to generate deeper learning. While diverse perspectives inevitably
produce tension and conflict, skilled participants can help keep the dialogue
learning-oriented. Decades of research by organizational learning pioneer Chris Argyris have
demonstrated that people in disagreement rarely ask each other the kind of sincere questions that
are necessary for them to learn from each other. People often try to force their views on the other
party, rather than seeking to educate them by explaining the underlying reasoning behind their views.
For example, during the teleconference the night before the space shuttle Challenger was
launched, engineers and administrators both proved incapable of having the kind of discussion
which could lead to each side understanding the other’s concerns. Rather than try to explain what
they saw in their (incomplete) data to educate the administrators and fill in the gaps in their
understanding, the engineers made abstract statements such as ‘It is away from goodness to make any
other recommendation’ and ‘It’s clear, it’s absolutely clear.’ In turn, the administrators did not
communicate their own concerns and questions thoughtfully, but instead contributed to an
increasingly polarized discussion in which the engineers’ competencies were impugned. Eventually,
the individuals with the most power – NASA senior managers – made the decision.
Thus, we recommend either developing or hiring skilled facilitators who can ensure that
learning-oriented discussions take place when analyzing organizational failures. Managers can be
trained to test assumptions, inquire into others’ views, and present their own views (no matter how
correct or thorough they may seem to them) as incomplete – or partial – accounts of reality. These
interpersonal skills can be learned, albeit slowly and with considerable effort, as action research
has shown. When managers have such skills, they are able both to model this behavior and
provide active coaching to others to help them be more effective in generating learning from the
heated discussions often produced when failures are analyzed. Finally, even though a little training
may not produce instant skill, it can remind managers of the need to engage in discussions they
might otherwise avoid, as well as giving them some additional confidence.
Recommendation 6: Overcoming social barriers to experimentation
As Lee et al. note, when incentives are inconsistent with espoused values that advocate learning from
failure, true experimentation will be rare. Executives thus need to align incentives and offer
resources to promote and facilitate effective experimentation. Organizational policies such as 3M’s
directive that 25 percent of a division’s revenues come from products developed in the last five
years, and Bank of America’s setting the expected level for failed experiments at 30 percent can go
a long way in sending the signal that the organization values creative experimentation. Promoting
individuals who have invested significant time experimenting with new ideas sends a similar message.

In addition, leading by example is crucial. Managers who experiment intelligently themselves,
and who publicize both failures and successes, demonstrate the value of these activities and help
others see that the ideal of learning from failure in their organization is more than talk. As an
example, Burton reports how Eli Lilly’s chief science officer introduced ‘failure parties’ to honor
intelligent, high-quality scientific experiments that nonetheless failed to achieve the desired results.
In addition, coaching and clear direction may be useful in helping subordinates understand what
types of experiments should be designed. Finally, to develop the ability to manage all these
processes, managers may need to work on their own psychological and emotional capabilities to
enable them to shift how they think about failure.
The above activities can help organizations to identify problems and opportunities and to learn
and innovate. But employees who engage in learning behaviors must work to ensure that their
bosses, and other parts of the organization, understand and endorse the ‘intelligent failure’ concept.
Further, implementing these recommendations requires time, resources and patience, so engaging
in learning activities while maintaining current operations will require building in some level of
slack. In sum, managers setting out on this course must be realistic in their expectations for learning
from failure.
Reframing failure
The recommendations above are best implemented as an integrated set of practices, accompanied
by an encompassing shift in managerial mindset. Table 2 summarizes this shift.
First, failure must be viewed not as a problematic aberration that should never occur, but rather
as an inevitable aspect of operating in a complex and changing world. This is of course not to say
leaders should encourage people to make mistakes, but rather to acknowledge that failures are
inevitable, and hence the best thing to do is to learn as much as possible – especially from small
ones – so as to make larger ones less likely. Beliefs about effective performance should reflect this.
This implies holding people accountable, not for avoiding failure, but for failing intelligently, and
for how much they learn from their failures.
Of course, whether a failure turns out to be intelligent or not is sometimes not easy to know at
the outset of an experiment. To provide managers with some guidelines, organizational scholar Sim
Sitkin identifies five characteristics of intelligent failures: (1) They result from thoughtfully planned
actions, (2) have uncertain outcomes, (3) are of modest scale, (4) are executed and responded to
with alacrity, and (5) take place in domains that are familiar enough to permit effective learning.
Managers would also be smart to consider their organization’s current issues related to risk
management as they develop experiments. By considering these criteria in advance, and by
analyzing and learning from previous experiments, managers are able to increase the chances that
their failures will be intelligent.
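Sitkin's five characteristics lend themselves to a simple pre-experiment screening checklist. The following is an illustrative sketch only (our own, with hypothetical names), not a tool proposed in the article:

```python
# Illustrative checklist based on Sitkin's five characteristics of an
# intelligent failure. Criteria wording paraphrases the article; the
# function and variable names are hypothetical.

INTELLIGENT_FAILURE_CRITERIA = [
    "results from a thoughtfully planned action",
    "has a genuinely uncertain outcome",
    "is of modest scale",
    "is executed and responded to with alacrity",
    "takes place in a domain familiar enough to permit learning",
]

def is_intelligent_failure(answers):
    """Return True only if a planned experiment meets all five criteria.

    `answers` maps each criterion string to True or False, as judged by
    the manager designing the experiment.
    """
    missing = [c for c in INTELLIGENT_FAILURE_CRITERIA if c not in answers]
    if missing:
        raise ValueError(f"Unassessed criteria: {missing}")
    return all(answers[c] for c in INTELLIGENT_FAILURE_CRITERIA)
```

Requiring all five answers up front mirrors the article's point that the criteria should be considered in advance, not reconstructed after a failure has occurred.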
Examples of unintelligent failure include making the same mistake over and over again, failing
due to carelessness, or conducting a poorly designed experiment that will not produce helpful
learning. In addition, managers need to create an environment in which they and their employees
are open to putting aside their self-protective defenses and responding instead with curiosity and
a desire to learn from failure.

Table 2. Reframing the Traditional Managerial Mindset for Learning

Expectations about failure
  Traditional frame: Failure is not acceptable.
  Learning-oriented reframe: Failure is a natural byproduct of a healthy process of experimentation and learning.

Beliefs about effective performance
  Traditional frame: Involves avoiding failure.
  Learning-oriented reframe: Involves learning from intelligent failure and communicating the lessons broadly in the organization.

Psychological and interpersonal responses to failure
  Traditional frame: Self-protective.
  Learning-oriented reframe: Curiosity, humor, and a belief that being the first to capture learning creates personal and organizational advantage.

Approach to leading
  Traditional frame: Manage day-to-day operations efficiently.
  Learning-oriented reframe: Recognize the need for spare organizational capacity to learn, grow and adapt for the future.

Managerial focus
  Traditional frame: Control costs.
  Learning-oriented reframe: Promote investment in future success.
Finally, learning to fail intelligently requires leaders to adopt a long-term perspective. Too many
managers take a short-term view that focuses primarily on the efficient control of day-to-day
operations and on managing costs. By contrast, enhancing an organization’s learning ability
requires a perspective that focuses on building its long-term capacity to learn, grow, and adapt for
the future.
This article starts from the observation that few organizations make effective use of failures for
learning due to formidable and deep-rooted barriers. In particular, small failures can be valuable
sources of learning, presenting ‘early warning signs’. However, they are often ignored, and thus
their valuable lessons for preventing serious harm are missed. We show that properties of technical
systems combine with properties of social systems in most organizations to make failures’ lessons
especially difficult to glean. At the same time, we highlight noteworthy exceptions eorganizations
that have done a superb job of making failures visible, analyzing them systematically, or even
deliberately encouraging intelligent failures as part of thoughtful experimentation.
Organizational learning from failure is thus not impossible but rather is counter-normative and
often counter-intuitive. We suggest that making this process more common requires breaking it
down into essential activities eidentifying failure, analyzing failure, and experimenting ein which
individuals and groups can engage. By reviewing examples from a variety of organizations and
industries where failures are being mined and put to good use through these activities, we seek to
demystify the potentially abstract ideal of learning from failure. We offer six actionable
recommendations, and argue that these recommendations are best implemented by reframing
managerial thinking, rather than by treating them as a checklist of separate actions.
In conclusion, leaders can draw on this conceptual foundation as they seize opportunities, craft
skills, and build routines, structures, and incentives to help their organizations enact these learning
processes. At the same time, we do not underestimate the challenge of tackling the psychological
and interpersonal barriers to this organizational learning process. As human beings, we are
socialized to distance ourselves from failures. Reframing failure from something associated with
shame and weakness to something associated with risk, uncertainty and improvement is a critical
first step on the learning journey.
We acknowledge the financial support of the Division of Research at Harvard Business School for
the research that gave rise to these ideas. We would also like to thank the LRP editor, as well as the
Special Issue editors and the anonymous reviewers for very helpful feedback.
References

1. M. Cannon and A. C. Edmondson, Confronting failure: antecedents and consequences of shared beliefs about failure in organizational work groups, Journal of Organizational Behavior 22, 161–177 (2001).
2. D. Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, University of Chicago Press, Chicago, IL (1996).
3. S. Sitkin, Learning through failure: the strategy of small losses, in L. L. Cummings and B. Staw (eds.), Research in Organizational Behavior 14, JAI Press, Greenwich, CT, 231–266 (1992).
4. See especially A. L. Tucker and A. C. Edmondson, Why hospitals don't learn from failures: organizational and psychological dynamics that inhibit system change, California Management Review 45(2), 55–72.
5. For example, see both D. Leonard-Barton, Wellsprings of Knowledge: Building and Sustaining the Sources of Innovation, Harvard Business School Press, Boston (1995); and S. B. Sitkin (1992) op cit at Ref 3 above.
6. See especially A. C. Edmondson, The local and variegated nature of learning in organizations, Organization Science 13(2), 128–146 (2002).
7. E. A. Trist and K. W. Bamforth, Some social and psychological consequences of the Longwall method of coal-getting, in D. S. Pugh (ed.), Organization Theory, Penguin, London, 393–419 (1958); also see A. K. Rice, Productivity and Social Organization: The Ahmedabad Experiment, Tavistock, London (1958).
8. For example, see C. Perrow, Normal Accidents: Living with High-Risk Technologies, Basic Books, New York.
9. R. H. Hayes, S. C. Wheelwright and K. B. Clark, Dynamic Manufacturing: Creating the Learning Organization, Free Press, New York (1988).
10. D. Goleman, Vital Lies, Simple Truths: The Psychology of Self-Deception, Simon and Schuster, New York.
11. S. E. Taylor, Positive Illusions: Creative Self-Deception and the Healthy Mind, Basic Books, New York.
12. C. Argyris, Overcoming Organizational Defenses: Facilitating Organizational Learning, Allyn and Bacon, Wellesley, MA (1990).
13. S. Finkelstein, Why Smart Executives Fail and What You Can Learn from Their Mistakes, Portfolio, New York (2003). The quotation is from pp. 179–180.
14. F. Lee, A. Edmondson, S. Thomke and M. Worline, The mixed effects of inconsistency on experimentation in organizations, Organization Science 15(3), 310–326 (2004).
15. K. Mellahi, The dynamics of boards of directors in failing organizations, Long Range Planning 38(3), (2005) doi:10.1016/j.lrp.2005.04.001.
16. J. Sheppard and S. Chowdhury, Riding the wrong wave: organizational failure as a failed turnaround, Long Range Planning 38(3), (2005) doi:10.1016/j.lrp.2005.03.009.
17. Make no mistake, Inc. Magazine, June (1989), p. 105.
18. M. Moss, Spotting breast cancer, doctors are weak link, The New York Times [late ed.], A1 (27 June 2002); M. Moss, Mammogram team learns from its errors, The New York Times [late ed.], A1 (28 June 2002).
19. E. C. Nevis, A. J. DiBella and J. M. Gould, Understanding organizations as learning systems, Sloan Management Review 36, 73–85 (1995).
20. S. W. Brown and S. S. Tax, Recovering and learning from service failures, Sloan Management Review 40(1), 75–89 (1998).
21. NASA report on the Space Shuttle Columbia.
22. A. Edmondson, Organizing to learn, Harvard Business School Note 9-604-031 (2003).
23. A. Edmondson, M. A. Roberto and A. Tucker, Children's Hospital and Clinics, Harvard Business School Case 9-302-050 (2002).
24. M. H. Bazerman, Judgment in Managerial Decision Making (fifth edition), John Wiley & Sons, New York (2002); also see S. T. Fiske and S. E. Taylor, Social Cognition, Random House, New York (1984).
25. P. Baumard and B. Starbuck, Learning from failures: why it may not happen, Long Range Planning 38(3), (2005) doi:10.1016/j.lrp.2005.03.004.
26. Cannon and Edmondson (2001) op cit at Ref 1 above; A. Edmondson and B. Moingeon, When to learn how and when to learn why, in B. Moingeon and A. Edmondson (eds.), Organizational Learning and Competitive Advantage, Sage, London (1996).
27. For research on the lack of inquiry in group discussions, see C. Argyris (1990) op cit at Ref 12 above; and D. A. Garvin and M. A. Roberto, What you don't know about making decisions, Harvard Business Review 79(8), 108–116 (2001); for research on premature convergence on a solution or decision see I. L. Janis and L. Mann, Decision-Making, The Free Press, New York (1997); and E. Langer, Mindfulness, Addison-Wesley, Reading, MA (1989).
28. T. S. Burton, By learning from failures, Lilly keeps drug pipeline full, The Wall Street Journal (21 April).
29. J. Hagel III and J. S. Brown, Productive friction: how difficult business partnerships can accelerate innovation, Harvard Business Review, February, 83–91 (2005).
30. F. F. Reichheld and T. Teal, The Loyalty Effect: The Hidden Force Behind Growth, Profits, and Lasting Value, Harvard Business School Press, Boston, 194–195 (1996).
31. P. F. Drucker, Innovation and Entrepreneurship: Practice and Principles, Harper & Row, New York, 43 (1985).
32. S. Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation, Harvard Business School Press, Boston, MA (2003).
33. M. A. Maidique and B. Zirger, A study of success and failure in product innovation: the case of the U.S. electronics industry, IEEE Transactions on Engineering Management 31(4), 192–204 (1984).
34. P. C. Wason, On the failure to eliminate hypotheses in a conceptual task, Quarterly Journal of Experimental Psychology 20, 273–283 (1960).
35. T. Kelley and J. Littman, The Art of Innovation: Lessons in Creativity from IDEO, America's Leading Design Firm, Currency Books, New York, 232 (2001).
36. A. C. Edmondson and L. Feldman, Understand and Innovate at IDEO Boston, Harvard Business School Case 9-604-005 (2004).
37. J. Pfeffer and R. I. Sutton, The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action, Harvard Business School Press, Boston, 129 (2000).
38. S. Thomke and A. Nimgade, Bank of America, Harvard Business School Case 9-603-022 (2002).
39. P. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday, New York (1990); A. Edmondson, R. Bohmer and G. Pisano, Disrupted routines: team learning and new technology implementation in hospitals, Administrative Science Quarterly 46, 685–716 (2001).
40. A. C. Edmondson, M. R. Roberto, R. M. J. Bohmer, E. Ferlins and L. Feldman, The recovery window: organizational learning following ambiguous threats in high-risk organizations, Harvard Business School Working Paper (2004).
41. A. C. Edmondson, Speaking up in the operating room: how team leaders promote learning in interdisciplinary action teams, Journal of Management Studies 40(6), 1419–1452 (2003).
42. C. Argyris, Strategy, Change, and Defensive Routines, Harper Business, New York (1985).
43. A. C. Edmondson, Group process in the Challenger launch decision (B), Harvard Business School Case N9-603-070 (2003).
44. Action Design website.
45. S. Sitkin (1992), p. 243, op cit at Ref 3 above.
Mark Cannon is an Assistant Professor of Leadership, Policy and Organizations and of Human and Organizational
Development at Vanderbilt University. He investigates barriers to learning in organizational settings, such as
positive illusions, individual and organizational defenses, and barriers to learning from failure. He has published
recently on executive coaching topics, including coaching leaders in transition and coaching interventions that
produce actionable feedback. His work has appeared in the Academy of Management Executive, Human Resource
Management, and Journal of Organizational Behavior. He received his Ph.D. in Organizational Behavior from
Harvard University. Vanderbilt University, Peabody #514, Nashville, TN 37203. Tel: (615) 343-2775 Fax: (615) 343-
7094 e-mail:
Amy C. Edmondson, Professor of Business Administration and Chair of the Doctoral Programs Committee at
Harvard Business School, studies teams in healthcare and other industries, and emphasizes the role of psychological
safety for enabling learning, change, and innovation in organizations. In 2003, she received the Cummings Award
from the Academy of Management OB division for outstanding achievement in early-mid career. Her recent article,
Why Hospitals Don’t Learn from Failures: Organizational and Psychological Dynamics That Inhibit System Change
(with A. Tucker), received the 2004 Accenture Award for a significant contribution to management practice.
Edmondson received her PhD in organizational behavior from Harvard University. Morgan Hall T-93, Harvard
Business School, Boston, MA 02163 Tel: (617) 495-6732 Fax: (617) 496-5265 email:
Long Range Planning, vol 38 2005 319
... Building on insights from an interview study with auditors on error management by Gold et al. (2022), we explore four work conditions that could inhibit auditors' error learning in daily practice: small consequences for errors, routine-type errors, strong negative emotions, and high time pressure. First, we expect that auditors will be less likely to engage in error learning when an error has smaller (rather than larger) error consequences (in line with, i.e., Levitt and March, 1988;Cannon and Edmondson, 2005;Homsma et al., 2009). That is, errors with smaller consequences can be considered less 'learnworthy'; however, ignoring their learning potential may lead to repetition and escalation in the future (Cannon and Edmondson, 2005). ...
... First, we expect that auditors will be less likely to engage in error learning when an error has smaller (rather than larger) error consequences (in line with, i.e., Levitt and March, 1988;Cannon and Edmondson, 2005;Homsma et al., 2009). That is, errors with smaller consequences can be considered less 'learnworthy'; however, ignoring their learning potential may lead to repetition and escalation in the future (Cannon and Edmondson, 2005). Second, we hypothesize that auditors will report less error learning from routine (compared to non-routine) errors (in line with Embrey, 2005;Zhang et al., 2019), as these errors are easily attributable to inattention or coincidence, rather than a lack of knowledge (Sutcliffe and Rugg, 1998). ...
... Small error consequences First, auditors, like other professionals, make errors that vary in their consequences (Gold et al., 2022). While all errors, regardless of their consequences, carry learning potential (van Dyck, 2009), prior research has shown that learning is more likely to occur when an error has relatively larger consequences, usually affecting the person committing the error or others, with regard to individuals' health, finances, or social standing (Cannon and Edmondson, 2005;Homsma et al., 2009). These errors challenge the existing state of affairs and stimulate individuals to engage in learning to prevent these significant consequences (Levitt and March, 1988). ...
Full-text available
Introduction Professionals do not always learn from their errors; rather, the way in which professionals experience errors and their work environment may not foster, but can rather inhibit error learning. In the wake of a series of accounting scandals, including Royal Ahold in Netherlands, Lehman Brothers in the United States, and Wirecard in Germany, within the context of financial auditing, we explore four audit-specific conditions at the workplace that could be negatively associated with learning: small error consequences, routine-type errors, negative emotions, and high time pressure. Then, we examine how perceptions of an open or blame error management climate (EMC) moderate the negative relationship between the four work conditions and learning from errors. Methods Using an experiential questionnaire approach, we analyze data provided by 141 Dutch auditors across all hierarchical ranks from two audit firms. Results Our results show that open EMC perceptions mitigate the negative relationship between negative emotions and error learning, as well as the negative relationship between time pressure and error learning. While we expected that blame EMC perceptions would exacerbate the negative relationship between negative emotions and error learning, we find a mitigating effect of low blame EMC perceptions. Further, and contrary to our expectations, we find that blame EMC perceptions mitigate the negative relationship between small error consequences and error learning, so that overall, more error learning takes place regardless of consequences when participants experience a blame EMC. Post-hoc analyses reveal that there is in fact an inverted- U-shaped relationship between time pressure and error learning. Discussion We derive several recommendations for future research, and our findings generate specific implications on how (audit) organizations can foster learning from errors.
... Since errors are unpredictable, error prevention must be complemented by error management strategies (van Dyck et al., 2005;Deng et al., 2022;Matthews et al., 2022). The purpose of this article is to understand the role of leadership and an organizational culture of error management in the effective use of an error management strategy in organizations, whose literature theoretically recognizes its importance, with the need for more empirical studies remaining (Gelfand et al., 2011;van Dyck et al., 2005;Cannon and Edmondson, 2005). Therefore, it is intended to answer the following research question: what is the role of leadership and an organizational culture of error management in an error management strategy in organizations? ...
... Several authors have drawn attention to Strategic perspective of error management the need to review contemporary theories of leadership, so that they address error management and its practices, such as detecting, treating, sharing and learning from mistakes (Judge et al., 2008;Bass and Avolio, 1994). In general, existing theories and research highlight those organizational contexts are characterized by hierarchical levels, where leaders assume particular importance, being key players and the main actors who, through their actions, decisions and provision of feedback, can encourage members of their teams to adopt productive attitudes and behaviours in the face of error (Cannon and Edmondson, 2005;Salas et al., 2004). They may frame mistakes as learning opportunities rather than something to hide or punish (Rodriguez and Griffin, 2009;Nielsen et al., 2013;Deng et al., 2022;Dimitrova et al., 2017). ...
... vision) but also reflected in the daily activities and procedures whose role of the leader is fundamental for the purpose. Cannon and Edmondson (2005), found a positive relationship between leadership and orientation towards learning through mistakes and, as a result, left open the importance of attesting to the role of team leaders in a clear alignment of developing constructive beliefs and behaviours about the error. ...
Purpose Errors are inevitable, resulting from the human condition itself, system failures and the interaction of both. It is essential to know how to deal with their occurrence, managing them. However, the negative tone associated with them makes it difficult for most organizations to talk about mistakes clearly and transparently, for fear of being harmed, preventing their detection, treatment and recovery. Consequently, errors are not managed, remaining accumulated in the system, turning into successive failures. Organizations need to recognize the inevitability of errors, making the system robust, through leadership and an organizational culture of error management. This study aims to understand the role of these influencing variables in an error management approach. Design/methodology/approach In this paper methodology of a quantitative nature based on a questionnaire survey that analyses error management, leadership and the organizational culture of error management of 380 workers in Portuguese companies. Findings The results demonstrate that leadership directly influences error management and indirectly through the organizational culture of error management, giving this last variable a mediating role. Originality/value The study covers companies from different sectors of activity on a topic that is little explored in Portugal, but part of the daily life of organizations, which should deserve greater attention from directors and managers, as they assume a privileged position to promote and develop error management mechanisms. Error management must be the daily work of leaders. This study contributes to theoretical knowledge and business practice on error management.
... In the broadest sense, failure is the gap between an expected or desired result and what one ultimately experiences (Cannon & Edmondson, 2005). More exclusively, failure is the lack of ability to meet the needs of an achievement context and not achieve a specific goal. ...
Conference Paper
Full-text available
In India, National Education Policy (NEP) 2020 focuses on key reforms in higher education that make ready the next generation to flourish and succeed in the new digital age. So the higher education system should ensure the quality meets the same. In this regard, the system needs to instil in students the hope for their success. Despite this, some students are lacking it because of the prevailing educational practices. The pass percentage of undergraduate students is not so high. This means that higher education is lacking something. The present paper reports on a qualitative exploration study using sequential semi-structured interviews on the perceptions and attributions of academically failed undergraduate students from the Malappuram District of Kerala (N=10). Reasons for their academic failure attributed to the Teacher, Curricular, Transactional, Learner and Institutional related practices in their undergraduate programme were identified and suggestions there for improvement of higher education practices were derived. The recommendations by the young learners can be implemented to make the higher education system more learner-friendly and the findings here will help to enhance the curricular reforms and improve the excellence of higher education.
... We propose that when leaders have higher promotion focus, a recall of learning from mistakes intervention will be more likely to prompt them to plan for engaging in learning behavior and display humility for several reasons. First, recall of learning from one's own mistakes begins with identifying and interpreting mistakes (Cannon & Edmondson, 2005). Because people with a higher promotion focus strive for ideal goals (Kark et al., 2015), they should be inclined to experience recalled mistakes as starting points for growth and development, formulate plans to seek improvement, ...
Full-text available
Making mistakes is an inevitable part of leadership, but little is known about how and when leaders benefit from reflecting on their missteps. In this paper, we propose that mistakes, when reflected upon, have the potential to increase a leader's expressed humility. We detail how having leaders recall past mistakes can help them formulate plans for learning and encourage them to express humility. We also argue that this positive relationship is strengthened when leaders have a promotion focus. We detail downstream benefits, as increased levels of leaders’ expressed humility is expected to increase their teams’ improvement‐oriented behaviors and, subsequently, team performance. Across multiple studies and using varied methods (i.e., scenario‐based experiments with 955 managerial leaders, a laboratory experiment with 210 student leaders and team members, and a daily field experiment with 85 managers), we empirically test the proposed relationships. Our studies contribute to the literature by identifying leaders’ recall of learning from mistakes as an important intervention to elicit their expressed humility. This article is protected by copyright. All rights reserved
Full-text available
Our food systems have performed well in the past, but they are failing us in the face of climate change and other challenges. This book tells the story of why food system transformation is needed, how it can be achieved and how research can be a catalyst for change. Written by a global interdisciplinary team of researchers, it brings together perspectives from multiple areas including climate, environment, agriculture, and the social sciences to describe how different tools and approaches can be used to tackle food system transformation. It provides practical, actionable insights for policymakers and advisors, demonstrating how science together with strong partnerships can enable real transformation on the ground. It also contributes to the academic debate on the transformation of food systems, and so will be an invaluable reference for researchers and students alike. This title is also available as Open Access on Cambridge Core.
Full-text available
This Handbook examines the study of failure in social sciences, its manifestations in the contemporary world, and the modalities of dealing with it – both in theory and in practice. It draws together a comprehensive approach to failing, and invisible forms of cancelling out and denial of future perspectives. Underlining critical mechanisms for challenging and reimagining norms of success in contemporary society, it allows readers to understand how contemporary regimes of failure are being formed and institutionalized in relation to policy and economic models, such as neoliberalism. While capturing the diversity of approaches in framing failure, it assesses the conflations and shifts which have occurred in the study of failure over time. Intended for scholars who research processes of inequality and invisibility, this Handbook aims to formulate a critical manifesto and activism agenda for contemporary society. Presenting an integrated view about failure, the Handbook will be an essential reading for students in sociology, social theory, anthropology, international relations and development research, organization theory, public policy, management studies, queer theory, disability studies, sports, and performance research.
This paper computationally investigates the following research hypotheses: (1) higher flexibility and discretion in organizational culture result in better mistake management and thus better organizational learning; (2) effective organizational learning requires a transformational leader with both high social and formal status and consistency; and (3) company culture and the leader's behavior must align for the best learning effects. Computational simulations of the proposed adaptive network were analyzed in different contexts varying in organizational culture and leader characteristics. The statistical analyses yielded significant results that supported the research hypotheses. Ultimately, this paper provides insight into how organizations that foster a mistake-tolerant attitude, in alignment with their leader, can achieve significantly better organizational learning at the team and individual levels.
The COVID-19 pandemic has highlighted the importance of virtual work. Enabled by the pandemic, the present study addresses the consequences of virtual interaction among regular work teams. Building on and expanding prior research, we develop lines of reasoning to suggest that virtuality negatively affects team failure learning. Additionally, we argue that team LMX quality and team LMX differentiation can help mitigate this effect. We test our hypotheses based on survey data from 73 teams working for a service unit at an international bank. In line with our theorizing, the results reveal that virtuality hampers team failure learning. Moreover, we find that team LMX quality and team LMX differentiation can serve to alleviate the negative consequences of virtuality. We discuss the theoretical and practical implications of our study to support HR managers and propose some areas for future research.
A review of research on crisis leadership, which goes beyond crisis management literature. Discusses the key competencies required for effective crisis leadership and suggests ideas for research in the area going forward. With practical applications.
The importance of hospitals learning from their failures hardly needs to be stated. Not only are matters of life and death at stake on a daily basis, but an increasing number of U.S. hospitals are also operating in the red. This article reports on in-depth qualitative field research on nurses' responses to process failures in nine hospitals. It identifies two types of process failures, errors and problems, and discusses the implications of each for process improvement. A dynamic model of the system in which front-line workers operate reveals an illusory equilibrium in which small process failures actually erode organizational effectiveness rather than driving learning and change in hospitals. Three managerial levers for change are identified, suggesting a new strategy for improving hospitals' and other service organizations' ability to learn from failure.
Every company's ability to innovate depends on a process of experimentation whereby new products and services are created and existing ones improved. But the cost of experimentation often limits innovation. New technologies, including computer modeling and simulation, promise to lift that constraint by changing the economics of experimentation. Never before has it been so economically feasible to ask "what-if" questions and generate preliminary answers. These technologies amplify the impact of learning, paving the way for higher R&D performance, greater innovation, and new ways of creating value for customers. In Experimentation Matters, Stefan Thomke argues that to unlock this potential, companies must not only understand the power of experimentation and new technologies, but also change their processes, organization, and management of innovation. He explains why experimentation is so critical to innovation, underscores the impact of new technologies, and outlines what managers must do to integrate them successfully. Drawing on a decade of research in industries as diverse as automotive, semiconductors, pharmaceuticals, chemicals, and banking, Thomke provides striking illustrations of how companies drive strategy and value creation by adapting their organizations to new experimentation technologies. As in the outcome of any effective experiment, Thomke also reveals where that has not happened, and explains why.
In particular, he shows managers how to: implement "front-loaded" innovation processes that identify potential problems before resources are committed and design decisions locked in; experiment and test frequently without overloading their organizations; integrate new technologies into the current innovation system; organize for rapid experimentation; fail early and often, but avoid wasteful "mistakes"; and manage projects as experiments. Pointing to the custom integrated circuit industry, a multibillion-dollar market, Thomke also shows what happens when new experimentation technologies are taken beyond firm boundaries, thereby changing the way companies create new products and services with customers and suppliers. Probing and thoughtful, Experimentation Matters will influence how both executives and academics think about experimentation in general and innovation processes in particular. Experimentation has always been the engine of innovation, and Thomke reveals how it works today.