Characterization of unknown unknowns using separation
principles in case study on Deepwater Horizon oil spill
Seong Dae Kim*
Engineering, Science, and Project Management Department, University of Alaska Anchorage, Anchorage, AK, USA
(Received 3 July 2014; final version received 28 October 2014)
*Email: Sdkim2@uaa.alaska.edu
Journal of Risk Research, 2017, Vol. 20, No. 1, 151–168. DOI: 10.1080/13669877.2014.983949
Unidentified risks, also known as unknown unknowns, have traditionally been
underemphasized by risk management. Most unknown unknowns are believed to
be impossible to find or imagine in advance. But this study reveals that many
are not truly unidentifiable. This study develops a model using separation princi-
ples of the Theory of Inventive Problem Solving (whose Russian acronym is
TRIZ) to explain the mechanism that makes some risks hard to find in advance
and show potential areas for identifying hidden risks. The separation principles
used in the model are separation by time, separation by space, separation upon
condition, separation between parts and whole, and separation by perspective. It
shows that some risks are hard to identify because of hidden assumptions and
illustrates how separation principles can be used to formulate assumptions
behind what is already known, show how the assumptions can be broken, and
thus identify hidden risks. A case study illustrates how the model can be applied
to the Deepwater Horizon oil spill and explains why some risks in the oil rig,
which were identified after the incident, were not identified in advance.
Keywords: unknown unknowns; unidentified risks; hidden risks; risk
management; natural disaster
1. Introduction
Disasters like Hurricane Katrina in 2005, the Deepwater Horizon oil spill in 2010,
and the Fukushima nuclear accident in 2011 were unanticipated yet extremely dam-
aging. It might seem that such disasters, being unprecedented, were not preventable.
Indeed, some risks are impossible to detect or even imagine in advance. Many meth-
ods have been developed and used to assess, analyze, and manage risks that have
already been identified, but those methods can be used only after the risks are identi-
fied. Techniques like checklists or risk breakdown structures (PMI 2013), previously used in similar cases, can help identify typical risks that have occurred before, but they are of limited use in unique situations and are not intended for unprecedented events.
There has been plenty of research on risk, but there is little consensus over what risk means (Rosa 1998). It is impossible to present all definitions of the risk concept, but they can be classified into nine categories: ‘Risk = Expected value (loss),’ ‘Risk = Probability of an (undesirable) event,’ ‘Risk = Objective uncertainty,’ ‘Risk = Uncertainty,’ ‘Risk = Potential/possibility of a loss,’ ‘Risk = Probability and
scenarios/consequences/severity of consequences,’ ‘Risk = Event or consequence,’ ‘Risk = Consequences/damage/severity of these + Uncertainty,’ and ‘Risk is the effect of uncertainty on objectives’ (Aven 2012). The categories did not all emerge at the same time: definitions of risk began with the first category, based on expected values, in 1711, and then developed along different paths, each with its own advocates (Aven 2012).
Uncertainty or risk is not a subject for risk management alone; sometimes it is a key subject of science. As science progresses, what used to be unpredictable, and thus uncertain, has become more and more predictable. One such example is the hurricane: advances in atmospheric science enable much more precise forecasts of hurricane tracks than were possible 40 years ago. Social science likewise strives to better understand and predict human behavior.
In risk management, a typical approach to risks is trying to identify them as early
as possible and respond to them as quickly as possible once identified. However,
such an approach is not always effective and many researchers have been trying to
better understand risks and figure out how to better deal with them.
Many researchers have been trying to categorize and characterize hard-to-detect risks or uncertainties, as summarized in Appendix 1. One of the earliest structured models for understanding unknowns is the Johari window. It was created as an interpersonal awareness model (Luft and Ingham 1955) and employs a 2 × 2 matrix to characterize information about a person, based on whether it is known to the person or to others, as arena, blind spot, façade, or unknown, as shown in Table 1. The structure of the Johari window for risk classification received further attention when former US Secretary of Defense Donald Rumsfeld used the term ‘unknown unknowns’ (Rumsfeld 2002). This led many researchers to employ quadrants of knowledge, i.e. known known, known unknown, unknown known, and unknown unknown, to understand and explain the nature of risk.
A typical classification of risks is based on the level of knowledge about a risk event’s occurrence (either known or unknown) and the level of knowledge about its impact (either known or unknown). This leads to four possibilities, shown with examples in Table 2 (Cleden 2009), and unknown unknowns constitute one of the quadrants.
The major obstacle to addressing unknown unknowns is their being hard to
imagine, but another is that people who cannot cope with unknown unknowns will
sometimes actively ignore them (Alles 2009). A likely event, having already been
identified, cannot be considered an unknown unknown, but its consequence may fall
into the category of unknown unknowns. The occurrence of an event like a natural
disaster may be readily anticipated, but its impact is not easy to predict or estimate
because of unintended secondary effects, also called knock-on effects (Ogaard
2009).
Table 1. Johari window (adapted from Luft and Ingham (1955)).

                        Known to self       Unknown to self
Known to others         1. Arena            2. Blind spot
Unknown to others       3. Façade           4. Unknown
Risk management usually tries to identify and list as many known unknowns,
i.e. risks, as possible, as early as possible. However, although risk management acts
as a ‘forward-looking radar,’ it is not possible to identify all risks in advance, partly for the following reasons (Hillson 2010): some risks may be inherently unknowable, time dependent, progress dependent, or response dependent.
However, it does not necessarily mean that all unknown unknowns are equally
hard to identify. Some of them might be easier to recognize than others.
One approach to dealing with unknown unknowns is an uncertainty allowance. A technique to determine the uncertainty allowance for a project was proposed (Raydugin
2012). This technique quantifies unknown unknowns that might influence a project
based on four dimensions: novelty of a project (mostly, technology or geography),
phase of project development, type of industry, and bias of various types. The guide-
line numbers for the uncertainty allowance are determined based on the experience
and data in a particular industry and cannot be generalized to other projects or indus-
tries. This technique can help estimate the uncertainty allowance to prepare for
potential hidden risks, but having additional allowance without any other response
plan can be costly.
Another approach to dealing with unknown unknowns is categorizing the factors in a project that are likely to harbor unknown unknowns. A framework for recognizing areas in a project that may increase the likelihood of encountering unknown unknowns was proposed (Ramasesh and Browning 2014). This framework focuses on ‘knowable unknown unknowns,’ which are not purely unimaginable, and conceptualizes six driving factors of unknown unknowns: complexity, complicatedness, dynamism, equivocality, mindlessness, and project pathologies. This framework also presents project design approaches and behavioral approaches to reduce unknown unknowns. It does not help identify particular unknown unknowns, but it can help decide where to invest resources to uncover ‘knowable unknown unknowns’ and reduce their likelihood.
Table 2. Simplified ‘four quadrants’ model (adapted from Cleden (2009)).

                      Impact known                            Impact unknown
Occurrence known      Knowledge (known knowns):               Risks (known unknowns):
                      predictable future states;              possible states identified;
                      project data                            quantifiable variables
Occurrence unknown    Untapped knowledge (unknown knowns):    Unfathomable uncertainties (unknown unknowns):
                      researchable facts;                     unknown relationships between key variables;
                      untapped resources                      unpredictable events

1.1. How this paper is structured
The literature has characterized hidden risks and thereby helped to distinguish unknown unknowns from other risks, but it has not provided a structured characterization of unknown unknowns. Methods to estimate the likelihood of encountering unknown unknowns have been proposed, but those methods do not help identify individual unknown unknowns. This paper intends to extend previous work by systematically elucidating why some risks are hard to identify in advance, even with a vast knowledge base. For this purpose, this study (1) adopts a new concept to delineate the structure of hard-to-identify hidden risks, (2) proposes a new model to characterize them, and (3) presents a case study to illustrate how the proposed model can be applied to explain why some risks are hard to identify in advance and to show potential areas for identifying unknown unknowns. This paper is structured as follows. Section 2 presents the scope of this paper and how unknown unknowns are defined. Section 3 introduces TRIZ, a problem-solving methodology developed in the former Soviet Union, and separation principles, one of the key tools of TRIZ (Rantanen and Domb 2008), and shows how they can characterize and potentially identify hidden risks. Section 4 illustrates the application of the proposed model using the Deepwater Horizon oil spill case study. Section 5 concludes and discusses future work.
2. Characterizing unknown unknowns
Risk can be defined as ‘an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives’ (PMI 2013). This definition reflects project managers’ perspective and is somewhat lengthy. To make it more general and simpler, this paper hereafter defines risk as ‘an uncertain event or condition that, if it occurs, has a significant effect.’ Unknown unknowns can be understood as a special type of risk and can be defined simply as ‘unidentified risks.’ This definition can be confusing because even though one individual has not identified a risk, other individuals may already have identified it. To avoid further confusion, unknown unknowns are hereafter defined as ‘risks that the decision-maker (DM) or the group of DMs is not aware of,’ irrespective of whether some other stakeholder might be aware of those risks.
The knowledge gap, i.e. the difference between the knowledge that the DM has and the knowledge that the DM would need in order to identify events or conditions that may influence the accomplishment of a project, operation, or business, can account for some unknown unknowns (Stoelsnes 2007). However, we focus only on those that cannot be attributed to the knowledge gap. In addition, we focus on risk events, rather than general uncertainties like parameter variability or the uncertain nature of something unrelated to that accomplishment.
The development of our model starts with modifying Cleden’s (2009) quadrants model described in the previous section. The proposed model opts for an ‘identification’ dimension instead of Cleden’s ‘occurrence’ dimension and for a ‘certainty’ dimension instead of Cleden’s ‘impact’ dimension. Cleden’s model classifies various event-related knowledge, but it does not consider the uncertainty of a risk event’s occurrence. By using an ‘identification’ dimension instead of ‘occurrence,’ the proposed model better represents what knowledge about ‘occurrence’ means for a risk event. Also, by using a ‘certainty’ dimension instead of ‘impact,’ it can encompass the uncertainty of both the occurrence and the impact. Table 3 shows the categorization of event-related knowledge based on the identification of the event and the certainty about the event’s realization. ‘Known known’ denotes a fact that the DM is aware of, and ‘unknown known’ denotes a fact that the DM is not yet aware of. ‘Known unknown’ denotes an uncertain event or condition that the DM is aware of, and ‘unknown unknown’ denotes an uncertain event or condition that the DM is not aware of.
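To make the two dimensions concrete, here is a minimal Python sketch, added purely as an illustration and not part of the original model's tooling: it classifies an item of event-related knowledge into one of the four quadrants from two flags, whether the DM has identified the item and whether its realization is certain. The class and example items are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EventKnowledge:
    description: str
    identified_by_dm: bool  # has the decision-maker (DM) identified this item?
    certain: bool           # is its realization (occurrence and impact) certain?

def quadrant(item: EventKnowledge) -> str:
    """Map an item to the modified four-quadrant model (identification x certainty)."""
    if item.certain:
        return "known known" if item.identified_by_dm else "unknown known"
    return "known unknown" if item.identified_by_dm else "unknown unknown"

# Hypothetical examples for illustration only.
print(quadrant(EventKnowledge("Blowout preventer is in place", True, True)))       # known known
print(quadrant(EventKnowledge("An oil spill may occur", True, False)))             # known unknown
print(quadrant(EventKnowledge("Dispersant may fail in deepwater", False, False)))  # unknown unknown
```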
In this model, a DM may possess knowledge about a certain event, i.e. a known known, or lack such knowledge about the event, i.e. an unknown known. An uncertain event can be uncertain in either its occurrence or its impact. For example, a hurricane has two basic uncertainties: track, which implies the chance of landfall, and intensity, expressed as wind speed or the Saffir–Simpson hurricane scale. For a hurricane, the occurrence, i.e. landfall, is uncertain, and the impact, i.e. the loss of human life and the damage to property, is also uncertain.
Known unknowns in this model are usually treated as identified risks (PMI 2013), and estimates may be made to account for the unknown amount of their impact. On the other hand, unknown unknowns are unfathomable or even unimaginable to many people (Makridakis, Hogarth, and Gaba 2009) and are seldom accounted for by risk management. Sometimes, management reserves are used to address unknown unknowns that can affect a project (PMI 2013), but such reserves are quite subjective and limited. The unknown unknown quadrant shaded in Table 3 is the focus of this paper, and the following section discusses how to characterize it.

Table 3. Modified four-quadrant model of uncertainty (adapted from Cleden (2009)).
3. Proposed model to identify unknown unknowns
The proposed model adopts a component of the TRIZ methodology. TRIZ is the Russian acronym for the Theory of Inventive Problem Solving (Rantanen and Domb 2008). TRIZ is a problem-solving method based on logic and data, not intuition, and was developed by G.S. Altshuller and his colleagues in the former Soviet Union from 1946 to 1985. It was built on the study of patterns of problems and solutions, drawing on more than three million patents. TRIZ was originally developed to solve technical problems but has also been applied to various technical and non-technical problem areas (Mann 2007).
A fundamental concept of TRIZ is that contradiction makes problems hard to solve and should be eliminated. TRIZ recognizes two main categories of contradiction: technical and physical. A technical contradiction refers to a trade-off, like safety vs. gas mileage for a vehicle, whereas a physical contradiction arises where a problem or system has opposite or contradictory requirements or attributes.
In TRIZ, a physical contradiction is typically resolved through the application of separation principles. There are several separation principles in TRIZ, but separation in time (ST), separation in space (SS), separation upon condition (SC), and separation between parts and whole (SPW) are the major ones most frequently referred to.
ST views a problem in the time dimension. Instead of assuming that all times are identical in terms of when an attribute is realized, ST separates a particular time from the other times, which makes the opposite attribute possible. The ‘time’ can be a time of day, a time in a week, a time in a month, a time in a year, a step in a procedure, or a phase in a life cycle. For example, instead of having a fixed undercarriage on an aircraft at all times, flight time can be separated from the other times, and the undercarriage can be absent only during flight so that it does not cause air resistance.
SS views a problem in the space dimension. Instead of assuming that all spaces are identical in terms of where an attribute is realized, SS separates a particular space from the other spaces, which makes the opposite attribute possible. The ‘space’ is not necessarily a physical space. It can be a geographical region, professional discipline, organization, person, machine, or conceptual space. For example, instead of using identical material for the whole bucket of an excavator, the teeth can be separated from the rest of the bucket and a harder material used only for the teeth.
SC views a problem in the condition dimension. Instead of assuming that all conditions are identical in terms of the circumstances under which an attribute is realized, SC separates a particular condition from the other conditions, which makes the opposite attribute possible. The ‘condition’ can be a physical or technical condition like speed, weight, temperature, pressure, viscosity, porosity, brightness, or humidity, but it can also be workforce performance, decision-process quality, human condition, organizational condition, political situation, environmental condition, constraint, regulation, prior actions taken, culture, or any other type of condition. For example, instead of requiring constant clarity of a lens regardless of light intensity, high brightness can be separated from lower brightness levels, and the lens can darken only under intense light.
SPW views a problem in the system-level dimension. Instead of assuming that we should look only at the current problem at hand, SPW separates a particular level in the system from the other levels, which makes the opposite attribute possible. The lower level refers to parts or sub-parts that constitute the problem. The higher level refers to the whole that includes other problems as well as the current problem at hand. Some risk events might be incurred only at the component level, rather than at the level of the current problem. Also, some risk events might be incurred only from the combination of, or interaction with, other events or conditions, rather than from the problem alone. For example, instead of assuming that a subject is sound regardless of the level at which it is viewed, a higher system level, such as the combination of multiple adverse factors, can be separated from the other levels, and the subject can turn out to be problematic only at that higher level, e.g. the organization, the local economy, or the environment.
Additional separation principles have since been discovered (Ball 2009). One of
them is separation by perspective (SP), which means that some contradictory
requirements or opposing attributes can be realized through changing the way of
looking at the problem or situation. SP can explain a risk that is perceived only by a
group of stakeholders that have a unique view. The ‘perspective’ can be that of a particular stakeholder group, of insiders vs. outsiders, or of subject matter experts.
Separation principles of TRIZ are typically used to find a way to meet contradic-
tory requirements or achieve an opposite attribute of a system. However, this study
uses separation principles to explain how an assumption within known knowledge
can be broken, i.e. assumption vs. broken assumption.
Section 1 mentioned Hillson’s (2010) four reasons why some risks are not possible to identify in advance. Another reason is revealed by the Swiss Cheese model (Reason 2000), which illustrates that major accidents are usually caused not by a single isolated failure, but by several defensive barriers failing to remain intact. The Swiss Cheese model implies that some risks are hard to identify in advance because they are not caused by a single source.
The above reasons for failure to identify risk could be addressed by correspond-
ing separation principles. Hillson’s time dependence corresponds to ST. Progress
dependence corresponds to SC. Response dependence also corresponds to SC. The
Swiss Cheese model corresponds to SPW. According to Dester and Blockley (2003), a hazard as an ‘accident waiting to happen’ is based on a set of incubating preconditions, not on a single condition. This concept corresponds to SPW as well.
The shaded area in Table 3 can be expanded as shown in Table 4, which presents the proposed framework. Some risks are not identified in advance because of the lack of necessary knowledge or because of the assumptions behind the problem. The table shows five different ways, adopted from the separation principles of TRIZ, to formulate and break such assumptions. For example, using ‘by Space,’ the assumption ‘The oil spill removal technology works at any depth’ can be formulated, and this assumption can be broken, resulting in ‘The oil spill removal technology may not work at 5000 feet or deeper below the sea surface.’ The broken assumption is a newly identified risk that remained unidentified until this process was applied.

Table 4. Mechanisms that account for unknown unknowns.
To apply this model, we start by listing known knowns and known unknowns, before addressing unknown unknowns. (Unknown knowns are outside the scope of this paper because they are untapped knowledge and do not directly aid in the identification of unknown unknowns.) Next, we derive an assumption behind each listed item in the known knowns quadrant, using each of the separation principles, e.g. ‘it will work as planned at any time’ or ‘it is fault-free in any space.’ We then look for potential areas for identifying hidden risks, i.e. potential ways of breaking the assumptions, by applying the corresponding separation principle. Listed knowledge and risks can be used as input for a separation principle. An identified risk could be a condition for other hidden risks, and multiple identified risks, if combined, might reveal a previously unidentified risk.
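As a purely illustrative aid, and not a tool from the original study, the short Python sketch below shows one way this listing-and-separation step could be mechanized: each known known is paired with every separation principle to produce an implicit assumption and a question about how that assumption might be broken. The principle templates and item names are assumptions of this sketch, loosely based on the examples above.

```python
# Illustrative sketch of the assumption-formulation step, assuming simple
# text templates for each separation principle (ST, SS, SC, SPW, SP).
SEPARATION_PRINCIPLES = {
    "ST":  ("it will work as planned at any time",
            "Is there a particular time (season, phase, step) when this could fail?"),
    "SS":  ("it is fault-free in any space",
            "Is there a particular space (depth, region, component) where this could fail?"),
    "SC":  ("it will do its job under any condition",
            "Is there a condition (schedule pressure, culture, physics) under which this could fail?"),
    "SPW": ("no combination of adverse factors can defeat it",
            "Could several adverse factors combine at a higher system level to defeat it?"),
    "SP":  ("it looks sound from any perspective",
            "Is there a stakeholder perspective from which it looks faulty?"),
}

def assumption_prompts(known_knowns):
    """Yield (item, principle, implicit assumption, assumption-breaking question) tuples."""
    for item in known_knowns:
        for code, (assumption, question) in SEPARATION_PRINCIPLES.items():
            yield item, code, f"{item}: implicitly assumed that {assumption}.", question

# Hypothetical known knowns, e.g. items like KK1 and KK2 from the case study below.
for row in assumption_prompts(["Oil spill response plans are in place",
                               "Safety procedures are in place"]):
    print(" | ".join(row))
```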
4. Case study
This section reviews a recent catastrophe to illustrate how the proposed model can
characterize risks that allegedly were previously unidentified and to reveal potential
areas for identifying unknown unknowns.
The Deepwater Horizon oil spill in 2010 is the largest accidental marine oil spill
in the history of the petroleum industry (Robertson and Krauss 2010). The spill
started from a sea-floor oil gusher that resulted from the 20 April 2010 explosion at
the Deepwater Horizon oil rig, which was drilling on the BP-operated Macondo
Prospect in the Gulf of Mexico. It flowed unabated for three months in 2010 (Jamail
2012), releasing about 4.9 million barrels of crude oil before the gushing wellhead
was capped (Hoch 2010). The spill caused extensive damage to marine and wildlife
habitats as well as to the Gulf’s fishing and tourism industries (Tangley 2010).
To apply the proposed model toward characterizing hidden risks in the Deepwater Horizon oil spill case, we first list identified items, either certain or uncertain, as they stood before the oil spill occurred, as depicted in Table 5. Some of the items are assumed to have previously been identified for the offshore deepwater field development project.

Table 5. Identified knowns and unknowns that could have been analyzed for potential hidden risks at the Macondo Prospect.

Separation principles can be applied to each numbered item in Table 5, which might reveal assumptions and hidden risks. Then, if applicable, we list in the shaded area of the table any later-identified hidden risks that can be characterized by the separation mechanism. In the following subsections, KKn refers to the nth item in the known known quadrant and KUn refers to the nth item in the known unknown quadrant.
4.1. KK1 (oil spill response plans are in place)
Oil spill response may be impeded by harsh weather during the storm season
(Graham et al. 2011). We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the plans would be effective at any time. We can consider many different ways to differentiate the times (ST) and seek for any particular time when the assumption can be broken.
Consultation with subject matter experts might be necessary to single out the partic-
ular time or to check if the selected differentiations make sense. One way to differ-
entiate by season is distinguishing storm season in the Gulf of Mexico. Considering
this further might identify the risk above.
Oil spill response plans may fail in deepwater. We recognize this risk retrospec-
tively, but this risk may not have been identified because it was implicitly assumed
that the plans would suffice in any space. We can consider many ways to differenti-
ate the spaces (SS), e.g. by part of the reservoir, component of well structure, depth
below sea surface, depth below sea floor, or part of the organization, and seek for
any particular space where the assumption can be broken. One way to differentiate
by sea depth is distinguishing deepwater, such as the sea floor of the Macondo Pros-
pect, from other shallower waters. Considering this further might identify the risk
above. Actually, the plans were claimed to be capable of taking care of any oil spill
even in deepwater (Graham et al. 2011) but it turned out to be untrue.
Oil spill response plans may fail if an oil spill occurs (KU1) when govern-
ment and industry are unprepared (KU7) (Graham et al. 2011). We recognize this
risk retrospectively, but this risk may not have been identified because it was implicitly assumed that even multiple adverse factors could not paralyze the plans or that the adverse factors would not influence the plans at any system level. We can consider ways to differentiate the system levels (SPW) and seek for any particular system level at which the assumption can be broken. One adverse factor might not have a significant impact on the plans, but multiple adverse factors as a group, i.e. at a higher system level, could have a significant impact on the oil spill response plans. Considering this further might identify the risk above.
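To show how the outputs of this subsection might be recorded for later analysis, here is a small illustrative sketch, again an addition rather than the author's tooling, that stores the three retrospectively recognized KK1 risks as (item, principle, broken assumption) records; the record format is hypothetical.

```python
# Illustrative bookkeeping for subsection 4.1, assuming a simple record format.
kk1_hidden_risks = [
    {"item": "KK1: oil spill response plans are in place",
     "principle": "ST",
     "broken_assumption": "Response may be impeded by harsh weather during the storm season."},
    {"item": "KK1: oil spill response plans are in place",
     "principle": "SS",
     "broken_assumption": "Response plans may fail in deepwater such as at the Macondo Prospect."},
    {"item": "KK1: oil spill response plans are in place",
     "principle": "SPW",
     "broken_assumption": "Plans may fail if a spill (KU1) combines with unprepared "
                          "government and industry (KU7)."},
]

for risk in kk1_hidden_risks:
    print(f"{risk['principle']}: {risk['broken_assumption']}")
```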
4.2. KK2 (safety procedures are in place)
Safety procedures may fail if they are under pressure due to schedule delay, budget overrun, or a safety-disregarding culture. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the procedures would do their job under any conditions. We can consider many different ways to differentiate the conditions (SC) and seek for any particular condition on which the assumption can be broken. One way to differentiate by constraint is distinguishing schedule delay, budget overrun, practice, or culture from other normal conditions. Considering this further might identify the risk above. Actually, there was ‘a rush to completion’ and ‘no culture of safety on the rig.’ Consequently, the management’s cost-cutting decisions compromised safety at the rig and, as a result, 11 people lost their lives in the blowout (Graham et al. 2011).
Safety procedures may be faulty from the perspective of field workers or external observers. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the procedures would do their job from any perspective. We can consider many ways to differentiate perspectives (SP) and seek for any particular perspective from which the assumption can be broken. One way to differentiate by stakeholder group is distinguishing field workers or external safety specialists from others. Considering this further might identify the risk above. The company claimed that the safety procedures were sufficient, but they actually turned out to be worse than what the company claimed (Graham et al. 2011).
4.3. KK3 (stakeholders include seafood industry, tourism industry, oil and gas
industry, and residents of the Gulf of Mexico)
No significant case supported by the literature was found.
4.4. KK4 (required risk assessments have been completed)
The assessment of oil spill risk may be faulty under deepwater conditions. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the risk assessments would suffice under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by physical condition is distinguishing the deepwater condition, involving high pressure and low temperature, from other normal conditions. Considering this further might identify the risk above. Actually, the oil spill was underestimated and became unmanageable (Graham et al. 2011).
The assessment of oil spill risk may be faulty if remotely operated underwater vehicle failure (KK11), containment dome failure (KK12), containment boom failure (KK14, KU5), top kill failure (KK13), and oil dispersant failure (KK15) were combined. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that even multiple adverse factors could not paralyze the risk assessments or that the adverse factors would not influence the risk assessment at any system level. We can consider various ways to differentiate the system levels (SPW) and seek for any particular system level at which the assumption can be broken. One adverse factor might not have a significant impact on the risk assessment, but multiple adverse factors as a group could have a significant impact on the risk assessment. Considering this further might identify the risk above.
4.5. KK5 (government conducts oversight of the project)
Government oversight may fail unless the overseers have sufficient knowledge. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that government oversight would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by organizational condition is distinguishing a government lacking relevant knowledge from other normal conditions. Actually, government oversight failed to reduce the risks of a well blowout, due to a lack of knowledge about offshore deepwater drilling (Graham et al. 2011).
4.6. KK6 (the company has the technology to keep tubes centered)
The tube centralizers may fail to keep tubes centered if expediency is chosen in operations. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the tube centralizers would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by organizational condition is distinguishing decision-making driven by expediency from other decision-making qualities. Actually, to save time, BP used 6 of the devices for keeping tubes centered, ignoring models calling for 21. Casings should be centered in the well hole so that the cement pumped in around them sets evenly (Swenson 2013), but BP on shore decided not to wait for more centralizers (Graham et al. 2011).
4.7. KK7 (cement integrity test is in place)
The cement integrity test may fail to do its job, or be skipped, if a poor management decision is made. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the cement integrity tests would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by organizational condition is distinguishing a poor management decision from normal decision-making qualities. Actually, BP had hired the contractor Schlumberger to run tests on the newly cemented well, but sent Schlumberger’s crew home without having it run the test, known as a cement bond log (Swenson 2013), and BP on shore decided not to run the cement evaluation log to save time (Graham et al. 2011).
4.8. KK8 (pressure test is in place)
Pressure tests may fail to do their job. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that pressure tests would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by human condition is distinguishing misinterpretation of the test result from other human conditions. Actually, rig workers reported confusion over the negative test, which measures upward pressure from the shut-in well (Swenson 2013). BP (and perhaps Transocean) personnel on the rig decided to save time by not performing further well integrity diagnostics, despite troubling and unexplained negative pressure test results (Graham et al. 2011).
4.9. KK9 (heavy drilling mud is used to keep any upward pressure under
control)
Heavy drilling mud may fail to control upward pressure if a poor decision is made. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that heavy drilling mud would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by organizational condition is distinguishing a poor decision from other normal decisions. Actually, BP decided to take heavy drilling mud out of the system, down to 3000 feet below the normal point, and earlier than usual. The mud barrier was not there to stem the gas kick that destroyed the rig (Swenson 2013). BP on shore decided to displace mud from the riser before setting the surface cement plug (Graham et al. 2011).
4.10. KK10 (blowout preventer is used to prevent oil spill)
The blowout preventer may fail unless it has proper diagnostic tools. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the blowout preventer would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by technical condition is distinguishing diagnostic incapability from other capabilities. Actually, a stuck drill pipe and intense pressures from the blowout caused a section of pipe to bend and get lodged inside the blowout preventer. The blind shear rams could not cut the bent pipe completely and failed to seal the well (Swenson 2013). In addition, the blowout preventer did not work properly due to the lack of key diagnostic tools (Graham et al. 2011).
The blowout preventer may fail if tube centralization failure (KK6), cement integrity test failure (KK7), pressure test failure (KK8), and mud barrier failure (KK9) are combined. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the blowout preventer would be robust to any adverse factor. We can consider ways to differentiate the system levels (SPW) and seek for any particular system level at which the assumption can be broken. One adverse factor might not have a significant impact on the reliability of the blowout preventer, but multiple adverse factors as a group could have a significant impact.
4.11. KK11 (remotely operated underwater vehicles are available just in case)
Remotely operated underwater vehicles may fail if their problem-solving capability is limited. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the underwater vehicles would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by technical condition is distinguishing limited problem-solving capability from other capabilities. Actually, the vehicles failed to resolve the issue because the problem was beyond their capability (CBC News 2010).
4.12. KK12 (containment dome method is available in case oil spill occurs)
The containment dome method may fail if it operates in deepwater. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the containment dome method would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate by physical condition is distinguishing the deepwater condition from other normal conditions. Actually, the containment dome, also known as a ‘top hat’ or ‘cofferdam,’ failed to collect the spilt oil in deepwater (Bolstad, Clark, and Chang 2010; Graham et al. 2011). The method had been tested in much shallower water but not in deepwater. It failed because methane gas escaping from the well came into contact with cold sea water and formed slushy hydrates, clogging the dome with hydrocarbon ice.
4.13. KK13 (top kill method is available in case oil spill occurs)
The top kill method may fail if the oil spill occurs in deepwater or the oil flow rate is high. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the top kill method would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate the conditions is distinguishing extreme physical conditions, such as the deepwater condition or a high oil flow rate, from other normal conditions. Actually, this method failed due to both of those factors (BBC 2010; Graham et al. 2011).
4.14. KK14 (oil containment boom is available in case oil spill occurs)
No significant case supported by the literature was found.
4.15. KK15 (oil dispersant is available in case oil spill occurs)
Oil dispersant may fail or have serious side effects in deepwater (Swartz 2010; Graham et al. 2011). We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the oil dispersant would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek for any condition on which the assumption can be broken. One way to differentiate the conditions is distinguishing extreme physical conditions, like the high pressure and low temperature in deepwater, from other normal conditions.
4.16. KK16 (even if oil spill occurs, the amount of spillage will be limited due to
multiple response measures)
A ‘limited’ oil spill, which is the perspective of the oil and gas industry, may be perceived as disastrous if viewed from the perspectives of the seafood and tourism industries, residents of the Gulf of Mexico (GOM), or environmentalists. We recognize this risk retrospectively, but this risk may not have been identified because it was implicitly assumed that the spillage amount would be limited from any perspective. We can consider many ways to differentiate perspectives (SP) and seek for any perspective from which the assumption can be broken. One way to differentiate the perspectives is distinguishing the seafood industry, the tourism industry, residents of the GOM, or environmentalists from other stakeholders.
5. Conclusion and future work
As revealed in the literature and the case study, not all of what used to be unknown unknowns is unrecognizable. This study proposed a structured framework of the characteristics of unknown unknowns to explain why they are hard to identify in advance. The proposed model adopts the separation principles of TRIZ and applies them to formulate assumptions behind what we think we already know, break those assumptions, and thus reveal hidden risks. The case study shows how the proposed model could have been applied to the Deepwater Horizon oil rig and might have revealed hidden risks that were subsequently documented. It showed that hidden risks do not necessarily appear without warning but might be identified using existing knowledge and triggers explained by the separation principles.
The proposed framework may not be able to explain all lurking risks, but it opens a new frontier in dealing with unknown unknowns. Risks that remain unidentified due to the knowledge gap are outside the scope of the proposed method, but the method may help identify the knowledge gaps that need to be filled. Verification of that potential capability remains future work.
Future work will include sub-classifying the conditions for the ‘separation upon
condition’ principle. As observed in the case study, separation upon condition (SC)
is applied more frequently and broadly than other separation principles and it needs
further classification to be more manageable.
Also, the proposed model will need to be extended to include more refined char-
acterization and other characteristics from the literature in a consolidated framework.
This extension can include tailoring the model to a target industry or organization.
We will also develop a model or process to identify hidden risks based on the
extended characterization. The identification model may include a structured ques-
tionnaire to induce the identification of potential hidden risks. The identification
model cannot be a fortune teller for a risky project, but it should be able to help
identify hidden risks that are not truly unimaginable. In addition, we will validate
and enhance this model using actual projects, rather than relying on retrospective
study. This will show whether this model actually helps identify hidden risks and
how the model can be improved.
References
Alles, Michael. 2009. “Governance in the Age of Unknown Unknowns.” International Journal of Disclosure and Governance 6: 85–88.
Aven, Terje. 2012. “The Risk Concept-historical and Recent Development Trends.” Reliability Engineering and System Safety 99: 33–44.
Ball, Larry. 2009. TRIZ Power Tools, Job #5: Resolving Problems. London: Third Millennium Publishing.
BBC. 2010. “‘Top Kill’ BP Operation to Halt US Oil Leak Fails.” BBC News, May 29.
Bolstad, Erika, Lesley Clark, and Daniel Chang. 2010. “Engineers Work to Place Siphon Tube at Oil Spill Site.” theStar.com, May 14.
CBC News. 2010. “Robot Subs Trying to Stop Gulf Oil Leak.” CBC News, April 25.
Chang, Wei-Wen, Cheng-Hui Lucy Chen, Yu-Fu Huang, and Yu-Hsi Yuan. 2012. “Exploring the Unknown: International Service and Individual Transformation.” Adult Education Quarterly 62 (3): 230–251.
Cleden, D. 2009. Managing Project Uncertainty. Farnham: Gower.
Daase, Christopher, and Oliver Kessler. 2007. “Knowns and Unknowns in the ‘War on Terror’: Uncertainty and the Political Construction of Danger.” Security Dialogue 38 (4): 411–434.
Dester, W. S., and D. I. Blockley. 2003. “Managing the Uncertainty of Unknown Risks.” Civil Engineering and Environmental Systems 20 (2): 83–103.
Galpin, Timothy. 1995. “Pruning the Grapevine.” Training & Development 4: 28–33.
Geraldi, Joana G., Liz Lee-Kelley, and Elmar Kutsch. 2010. “The Titanic Sunk, So What? Project Manager Response to Unexpected Events.” International Journal of Project Management 28: 547–558.
Graham, Bob, William K. Reilly, Frances Beinecke, Donald F. Boesch, Terry D. Garcia, Cherry A. Murray, and Fran Ulmer. 2011. Deep Water: The Gulf Oil Disaster and the Future of Offshore Drilling: Report to the President. Washington, DC: National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling.
Hillson, David. 2010. Exploiting Future Uncertainty: Creating Value from Risk. Burlington, VT: Gower Publishing.
Hoch, M. 2010. “New Estimate Puts Gulf Oil Leak at 205 Million Gallons.” PBS NewsHour, August 2.
Hole, Kjell Jørgen. 2013. “Management of Hidden Risks.” Computer 46: 65–70.
Jamail, D. 2012. “BP Settles While Macondo ‘Seeps’.” Al Jazeera English.
Jorion, Philippe. 2009. “Risk Management Lessons from the Credit Crisis.” European Financial Management 15 (5): 923–933.
Keil, Mark, Paul E. Cule, Kalle Lyytinen, and Roy C. Schmidt. 1998. “A Framework for Identifying Software Project Risks.” Communications of the ACM 41 (11): 76–83.
Luft, J., and H. Ingham. 1955. “The Johari Window, a Graphic Model of Interpersonal Awareness.” Proceedings of the Western Training Laboratory in Group Development, Los Angeles, CA.
Luong, Mary G., and Anne Collins McLaughlin. 2011. “Improving Communication of Usability Perceptions: An Analysis of a Modified-Johari Window as a Tool for Software Designers.” Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting 2011, Las Vegas, NV.
Makridakis, Spyros, Robin M. Hogarth, and Anil Gaba. 2009. “Forecasting and Uncertainty in the Economic and Business World.” International Journal of Forecasting 25: 794–812.
Mann, Darrell. 2007. Hands-on Systematic Innovation for Business and Management. 2nd ed. Clevedon: IFR Consultants.
Ogaard, Ryan. 2009. “Known Unknowns.” Reinsurance 3: 9.
PMI. 2013. A Guide to the Project Management Body of Knowledge. 5th ed. Newtown Square, PA: Project Management Institute.
Ramasesh, Ranga V., and Tyson R. Browning. 2014. “A Conceptual Framework for Tackling Knowable Unknown Unknowns in Project Management.” Journal of Operations Management 32: 190–204.
Rantanen, Kalevi, and Ellen Domb. 2008. Simplified TRIZ: New Problem Solving Applications for Engineers and Manufacturing Professionals. 2nd ed. New York: Auerbach Publications.
Raydugin, Yuri. 2012. “Quantifying Unknown Unknowns in an Oil and Gas Capital Project.” International Journal of Risk and Contingency Management 1 (2): 29–42.
Reason, James. 2000. “Human Error: Models and Management.” British Medical Journal 320: 768–770.
Robertson, C., and C. Krauss. 2010. “Gulf Spill is the Largest of Its Kind, Scientists Say.” The New York Times, August 2.
Rosa, Eugene A. 1998. “Metatheoretical Foundations for Post-normal Risk.” Journal of Risk Research 1 (1): 15–44.
Rumsfeld, Donald. 2002. “Department of Defense News Briefing.” US Department of Defense, February 12.
Shenton, Andrew K. 2007. “Viewing Information Needs Through a Johari Window.” Reference Services Review 35 (3): 487–496.
Stoelsnes, Roger R. 2007. “Managing Unknowns in Projects.” Risk Management 9 (4): 271–280.
Swartz, Spencer. 2010. “BP Provides Lessons Learned from Gulf Spill.” The Wall Street Journal, September 3.
Swenson, Dan. 2013. “Possible Causes of the Deepwater Horizon Explosion and BP Oil Spill.” nola.com, February 22.
Talbot, Patrick J. 2006. “Automated Discovery of Unknown Unknowns.” Military Communications Conference 2006, MILCOM 2006, Washington, DC.
Tangley, L. 2010. “Bird Habitats Threatened by Oil Spill.” National Wildlife, June 17.
Appendix 1. Literature review on risk categorizations
Many researchers have been trying to capture the characteristics of hard-to-detect risks or
uncertainties. One of the earliest structured models to understand unknowns or uncertainties
is the Johari window. It was created as an interpersonal awareness model (Luft and Ingham
1955) and employs a 2 × 2 matrix to characterize information about a person, based on
whether it is known to the person or to others, as arena, blind spot, façade, or unknown. Arena represents open information that both the individual and others are aware of. Blind spot represents information that the individual is not aware of but others are. Façade represents hidden information about the individual that others are unaware of. And unknown represents the individual’s behaviors or motives that are not recognized by anyone, even the individual.
Researchers have proposed various versions of the Johari window. An organizational ver-
sion was proposed to help organizations assess how they communicate, by whether informa-
tion is exposed to stakeholders and whether the organization receives feedback (Galpin
1995).
A modified Johari window was also proposed to classify information needs by whether the information is known to the information professional and by the extent of the individual’s awareness of the information (Shenton 2007), as shown in Table 6. Shenton’s model placed information needs into five broad categories instead of four. Shenton’s model uses ‘information professional’ instead of ‘others’ for one dimension, and ‘individual’ instead of ‘self’ for the other dimension, but the biggest change is the additional column ‘misunderstood by’ for the ‘individual’ dimension.
The Johari window has been adapted to capture how designers and users perceive a soft-
ware program’s usability (Luong and McLaughlin 2011). It was also used to explore the
unknowns regarding international service experience and individual transformation through
environment–person interaction in cross-cultural settings (Chang et al. 2012).
Table 6. Types of information needs as represented in a Johari window (Shenton 2007).

                                   Misunderstood by     Known to the        Not known to the
                                   the individual       individual          individual
Known to the information           Misguided needs      Expressed needs     Inferred needs
professional
Not known to the information       Misguided needs      Unexpressed needs   Dormant or delitescent needs;
professional                                                                independently-met needs
A.1. Classifications of uncertainty/risk using 2 × 2 matrices
There are other classifications of uncertainty that use a 2 × 2 matrix. For example, Keil et al.
(1998) used a 2 × 2 matrix as a framework for identifying software project risks. Their frame-
work classified risks by whether the perceived relative importance of a risk is high or moder-
ate, and whether the perceived level of control by the decision-maker is high or low. These
dimensions were used to classify risks into ‘scope and requirements’ (high importance, high control), ‘customer mandate’ (high importance, low control), ‘execution’ (moderate importance, high control), and ‘environment’ (moderate importance, low control).
A.2. Other classifications of uncertainty/risk
Another way to categorize uncertainties is by whether knowledge and information about them
exists but is not accessed, or simply does not exist. Stoelsnes (2007) divided unknowns into
two groups: unknown-knowable and unknown-unknowable. Unknown-knowable characterizes
events/conditions where knowledge and information is available but not accessed. Unknown-
unknowable are events/conditions where there is no knowledge or information to access in
advance, making it impossible to evaluate them in advance.
One scheme classifies uncertainty as either subway uncertainty or coconut uncertainty
(Makridakis, Hogarth, and Gaba 2009). Subway uncertainty refers to what can be modeled
and reasonably incorporated in probabilistic predictions that assume, for example, normally
distributed forecasting errors. Coconut uncertainty pertains to events that cannot be modeled,
and also to rare and unique events that simply are hard to envision. Subway uncertainty is
quantifiable, but coconut uncertainty is not.
A.3. Unknown unknowns
The concept of adapting the Johari window to risk classification received further attention
when former US Secretary of Defense Donald Rumsfeld used the term ‘unknown unknowns’
(Rumsfeld 2002). This led many researchers to employ quadrants of knowledge, i.e. known
known, known unknown, unknown known, and unknown unknown, to understand and explain
the nature of risk.
A typical classification of risks is based on the level of knowledge about a risk event’s
occurrence (either known or unknown) and the level of knowledge about its impact (either
known or unknown). This leads to four possibilities, shown with examples in Table 2
(Cleden 2009).
Talbot (2006) decomposed unknown unknowns into three types: new hypotheses that
might explain a situation, new links that explain previously unknown relationships between
facts, and new story fragments that describe the significance of previously unidentified collec-
tions of factors that might drive a decision.
Daase and Kessler (2007) defined the dimensions of methodological knowledge and empirical knowledge and used them to classify dangers, as shown in Table 7. Empirical knowledge pertains to phenomena of reality, some of which could pose a danger. Methodological knowledge is knowledge about the ways to identify such things. If the empirical facts of a danger are known and this knowledge is known to be reliable, which means that it is known how the danger is identified, a threat exists. If, however, the factual knowledge is partial, yet methods exist for reducing the uncertainty, the danger is perceived as risk. If no or scant factual knowledge about a danger exists and a method for assessment is not available, the danger, if materialized, will be considered disaster. If the facts of a danger are largely known but this knowledge is neglected, suppressed, or forgotten, the danger is intensified by what we call ignorance.

Table 7. Four kinds of dangers (Daase and Kessler 2007).

                                           Empirical knowledge
                                           ‘Knowns’       ‘Unknowns’
Methodological knowledge   ‘Known’         Threat         Risk
                           ‘Unknown’       Ignorance      Disaster
Based on examining the global financial crisis that started in 2007, Jorion (2009) classi-
fied risks into three categories: known knowns, known unknowns, and unknown unknowns.
Known unknowns encompass known risk factors such as model risk, liquidity risk, and
counterparty risk. According to Jorion, unknown unknowns, which include regulatory and
structural changes in capital markets, are totally outside the scope of most scenarios because
no one could identify them in advance.
Geraldi, Lee-Kelley, and Kutsch (2010) defined unexpected events as the outcome of a range of residual uncertainties that can threaten the viability of a project. They characterized unexpected events by probability (unlikely), impact (high), pertinence (untopical), and timing (sudden). Even though the authors did not indicate how unexpected events would map onto a 2 × 2 matrix, ‘unexpected events’ appears to be equivalent to or a subset of unknown unknowns.
The term black swan event is sometimes used to mean a hidden risk, but Hole (2013) classified hidden risks into black swan and gray swan. Black swan is a metaphor for a high-impact, rare
event that comes as a complete surprise to all stakeholders. Gray swan is a metaphor for a
high-impact, rare event that is somewhat predictable yet is overlooked by most stakeholders.
The distinction is that black swans cannot be assessed, whereas gray swans can be partly
assessed.