Journal of Risk Research
ISSN: 1366-9877 (Print) 1466-4461 (Online) Journal homepage: http://www.tandfonline.com/loi/rjrr20
Characterization of unknown unknowns using
separation principles in case study on Deepwater
Horizon oil spill
Seong Dae Kim
To cite this article: Seong Dae Kim (2017) Characterization of unknown unknowns using
separation principles in case study on Deepwater Horizon oil spill, Journal of Risk Research,
20:1, 151-168, DOI: 10.1080/13669877.2014.983949
To link to this article: http://dx.doi.org/10.1080/13669877.2014.983949
Published online: 28 Nov 2014.
Characterization of unknown unknowns using separation
principles in case study on Deepwater Horizon oil spill
Seong Dae Kim*
Engineering, Science, and Project Management Department, University of Alaska
Anchorage, Anchorage, AK, USA
(Received 3 July 2014; final version received 28 October 2014)
Unidentified risks, also known as unknown unknowns, have traditionally been underemphasized by risk management. Most unknown unknowns are believed to be impossible to find or imagine in advance. But this study reveals that many are not truly unidentifiable. This study develops a model using separation principles of the Theory of Inventive Problem Solving (whose Russian acronym is TRIZ) to explain the mechanism that makes some risks hard to find in advance and show potential areas for identifying hidden risks. The separation principles used in the model are separation by time, separation by space, separation upon condition, separation between parts and whole, and separation by perspective. It shows that some risks are hard to identify because of hidden assumptions and illustrates how separation principles can be used to formulate assumptions behind what is already known, show how the assumptions can be broken, and thus identify hidden risks. A case study illustrates how the model can be applied to the Deepwater Horizon oil spill and explains why some risks in the oil rig, which were identified after the incident, were not identified in advance.
Keywords: unknown unknowns; unidentified risks; hidden risks; risk management; natural disaster
1. Introduction
Disasters like Hurricane Katrina in 2005, the Deepwater Horizon oil spill in 2010, and the Fukushima nuclear accident in 2011 were unanticipated yet extremely damaging. It might seem that such disasters, being unprecedented, were not preventable. Indeed, some risks are impossible to detect or even imagine in advance. Many methods have been developed and used to assess, analyze, and manage risks that have already been identified, but those methods can be used only after the risks are identified. Techniques like checklists or risk breakdown structures (PMI 2013) previously used in similar cases can help identify typical risks that have happened before, but they are of limited use in unique situations and are not intended for unprecedented events.
There has been plenty of research on risk, but there is little consensus over what risk means (Rosa 1998). It is impossible to present all definitions of the risk concept, but they can be classified into nine categories: 'Risk = Expected value (loss)', 'Risk = Probability of an (undesirable) event', 'Risk = Objective uncertainty', 'Risk = Uncertainty', 'Risk = Potential/possibility of a loss', 'Risk = Probability and scenarios/consequences/severity of consequences', 'Risk = Event or consequence', 'Risk = Consequences/damage/severity of these + Uncertainty', and 'Risk is the effect of uncertainty on objectives' (Aven 2012). The categories did not all emerge at the same time: the definitions started with the first category, based on expected values, in 1711, and developed along different paths, with different advocates of each perspective (Aven 2012).
*Email: Sdkim2@uaa.alaska.edu
© 2014 Informa UK Limited, trading as Taylor & Francis Group
Uncertainty or risk is not a subject only for risk management. Sometimes, it is a key subject of science. As science progresses, what used to be unpredictable, and thus uncertain, has become more and more predictable. One such example is a hurricane: advances in atmospheric science enable much more precise forecasting of hurricane tracks than 40 years ago. Social science likewise strives to better understand and predict human behavior.
In risk management, a typical approach to risks is to try to identify them as early as possible and respond to them as quickly as possible once identified. However, such an approach is not always effective, and many researchers have been trying to better understand risks and figure out how to better deal with them.
Many researchers have been trying to categorize and characterize hard-to-detect risks or uncertainties, as summarized in Appendix 1. One of the earliest structured models to understand unknowns is the Johari window. It was created as an interpersonal awareness model (Luft and Ingham 1955) and employs a 2 × 2 matrix to characterize information about a person, based on whether it is known to the person or to others, as 'arena', 'blind spot', 'façade', or 'unknown', as shown in Table 1. The structure of the Johari window for risk classification received further attention when former US Secretary of Defense Donald Rumsfeld used the term 'unknown unknowns' (Rumsfeld 2002). This led many researchers to employ quadrants of knowledge, i.e. known known, known unknown, unknown known, and unknown unknown, to understand and explain the nature of risk.
A typical classification of risks is based on the level of knowledge about a risk event's occurrence (either known or unknown) and the level of knowledge about its impact (either known or unknown). This leads to four possibilities, shown with examples in Table 2 (Cleden 2009), and unknown unknowns is one of the quadrants.
The major obstacle to addressing unknown unknowns is their being hard to imagine, but another is that people who cannot cope with unknown unknowns will sometimes actively ignore them (Alles 2009). A likely event, having already been identified, cannot be considered an unknown unknown, but its consequence may fall into the category of unknown unknowns. The occurrence of an event like a natural disaster may be readily anticipated, but its impact is not easy to predict or estimate because of unintended secondary effects, also called knock-on effects (Ogaard 2009).
Table 1. Johari window (adapted from Luft and Ingham (1955)).
                            Self
                  Known             Unknown
Others  Known     1. Arena          2. Blind spot
        Unknown   3. Façade         4. Unknown
Risk management usually tries to identify and list as many known unknowns, i.e. risks, as possible, as early as possible. However, although risk management acts as a 'forward-looking radar', it is not possible to identify all risks in advance, partly for the following reasons (Hillson 2010): they may be inherently unknowable, time dependent, progress dependent, or response dependent.
However, this does not necessarily mean that all unknown unknowns are equally hard to identify. Some of them might be easier to recognize than others.
One approach to dealing with unknown unknowns is an uncertainty allowance. A technique to determine the uncertainty allowance for a project was proposed (Raydugin 2012). This technique quantifies unknown unknowns that might influence a project based on four dimensions: novelty of the project (mostly technology or geography), phase of project development, type of industry, and biases of various types. The guideline numbers for the uncertainty allowance are determined from the experience and data of a particular industry and cannot be generalized to other projects or industries. This technique can help estimate the uncertainty allowance to prepare for potential hidden risks, but carrying an additional allowance without any other response plan can be costly.
Another approach to dealing with unknown unknowns is categorizing the factors in a project that are likely to harbor unknown unknowns. A framework for recognizing areas in a project that may increase the likelihood of encountering unknown unknowns was proposed (Ramasesh and Browning 2014). This framework focuses on 'knowable unknown unknowns', which are not purely unimaginable, and conceptualizes six driving factors of unknown unknowns: complexity, complicatedness, dynamism, equivocality, mindlessness, and project pathologies. The framework also presents project design approaches and behavioral approaches to reduce unknown unknowns. It does not help identify particular unknown unknowns, but it can help decide where to invest resources to uncover knowable unknown unknowns and reduce their likelihood.
Table 2. Simplified 'four quadrants' model (adapted from Cleden (2009)).
                            Impact known                      Impact unknown
Occurrence known      Knowledge (known knowns):         Risks (known unknowns):
                      predictable future states,        possible states identified,
                      project data                      quantifiable variables
Occurrence unknown    Untapped knowledge                Unfathomable uncertainties
                      (unknown knowns):                 (unknown unknowns):
                      researchable facts,               unknown relationships between
                      untapped resources                key variables, unpredictable events

1.1. How this paper is structured
The literature has characterized hidden risks and thereby helped to distinguish unknown unknowns from other risks, but it has not provided a structured characterization of unknown unknowns. Methods to estimate the likelihood of encountering unknown unknowns have been proposed, but those methods do not help identify individual unknown unknowns. This paper intends to extend previous work by systematically elucidating why some risks are hard to identify in advance, even with a vast knowledge base. For this purpose, this study (1) adopts a new concept to delineate the structure of hard-to-identify hidden risks, (2) proposes a new model to characterize them, and (3) presents a case study to illustrate how the proposed model can be applied to explain why some risks are hard to identify in advance and to show potential areas for identifying unknown unknowns. This paper is structured as follows.
Section 2 presents the scope of this paper and how to define unknown unknowns. Section 3 introduces TRIZ, a problem-solving methodology developed in the former Soviet Union, and separation principles, one of the key tools of TRIZ (Rantanen and Domb 2008), and shows how they can characterize and potentially identify hidden risks. Section 4 illustrates the application of the proposed model using the Deepwater Horizon oil spill case study. Section 5 concludes and discusses future work.
2. Characterizing unknown unknowns
Risk can be defined as 'an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives' (PMI 2013). This definition reflects project managers' perspective and is somewhat lengthy. To make it more general and simple, this paper hereafter defines risk as 'an uncertain event or condition that, if it occurs, has a significant effect.' Unknown unknowns can be understood as a special type of risk and can be defined simply as 'unidentified risks.' This definition can be confusing because even though one individual has not identified a risk, other individuals may already have identified it. To avoid further confusion, unknown unknowns are hereafter defined as 'risks that the decision-maker (DM) or the group of DMs is not aware of,' irrespective of whether some other stakeholder might be aware of those risks.
The knowledge gap, i.e. the difference between the knowledge that the DM has and the knowledge that the DM should have in order to identify events or conditions that may influence the accomplishment of a project, operation, or business, can account for some unknown unknowns (Stoelsnes 2007). However, we focus only on those that cannot be attributed to the knowledge gap. In addition, we focus on risk events, rather than general uncertainties such as parameter variability or the uncertain nature of something unrelated to the accomplishment.
The development of our model starts with modifying Cleden's (2009) quadrants model described in the previous section. The proposed model opts for an 'identification' dimension instead of Cleden's 'occurrence' dimension and for a 'certainty' dimension instead of Cleden's 'impact' dimension. Cleden's model classifies various event-related knowledge, but it does not consider the uncertainty of a risk event's occurrence. By using an 'identification' dimension instead of 'occurrence,' the proposed model better represents what knowledge about 'occurrence' means for a risk event. Also, by using a 'certainty' dimension instead of 'impact,' it can encompass the uncertainty of both the occurrence and the impact. Table 3 shows the categorization of event-related knowledge based on the identification of the event and the certainty about the event's realization. 'Known known' denotes a fact that the DM is aware of, and 'unknown known' denotes a fact that the DM is not yet aware of. 'Known unknown'
denotes an uncertain event or condition that the DM is aware of, and 'unknown unknown' denotes an uncertain event or condition that the DM is not aware of.
In this model, a DM may possess knowledge about a certain event, i.e. a known known, or lack such knowledge about the event, i.e. an unknown known. An uncertain event can be uncertain in either its occurrence or its impact. For example, a hurricane has two basic uncertainties: track, which implies the chance of landfall, and intensity, expressed as wind speed or the Saffir-Simpson hurricane scale. For a hurricane, the occurrence, i.e. landfall, is uncertain, and the impact, i.e. loss of human life and damage to property, is also uncertain.
Known unknowns in this model are usually treated as identified risks (PMI 2013), and they may be estimated to account for the unknown amount of their impact. On the other hand, unknown unknowns are unfathomable or even unimaginable to many people (Makridakis, Hogarth, and Gaba 2009) and are seldom accounted for by risk management. Sometimes, management reserves are used to address unknown unknowns that can affect a project (PMI 2013), but such reserves are quite subjective and limited. The unknown unknown quadrant shaded in Table 3 is the focus of this paper, and the following section discusses how to characterize it.
3. Proposed model to identify unknown unknowns
The proposed model adopts a component of the TRIZ methodology. TRIZ is the Russian acronym for the Theory of Inventive Problem Solving (Rantanen and Domb 2008). TRIZ is a problem-solving method based on logic and data, not intuition, and was developed by G.S. Altshuller and his colleagues in the former Soviet Union from 1946 to 1985. It was built on the study of patterns of problems and solutions, including more than three million patents. TRIZ was originally developed to solve technical problems but has been applied also to various technical and non-technical problem areas (Mann 2007).
A fundamental concept of TRIZ is that contradiction makes problems hard to solve and should be eliminated. TRIZ recognizes two main categories of contradiction: technical and physical. A technical contradiction refers to a trade-off, like safety vs. gas mileage for a vehicle, whereas a physical contradiction refers to a situation where a problem or system has opposite or contradictory requirements or attributes.
In TRIZ, a physical contradiction is typically resolved through the application of separation principles. There are several separation principles in TRIZ, but separation in time (ST), separation in space (SS), separation upon condition (SC), and separation between parts and whole (SPW) are the major ones most frequently referred to.
Table 3. Modified four-quadrant model of uncertainty (adapted from Cleden (2009)).

ST views a problem in the time dimension. Instead of assuming all times are identical in terms of when an attribute is realized, ST separates a particular time from other times, which makes the opposite attribute possible. The 'time' can be a time in a day, a time in a week, a time in a month, a time in a year, a step in a procedure, or a phase in a life cycle. For example, instead of having a fixed undercarriage on an aircraft at all times, flight time can be separated from other times and the undercarriage can be absent, i.e. retracted, only during flight so that it does not cause air resistance.
SS views a problem in the space dimension. Instead of assuming all spaces are identical in terms of where an attribute is realized, SS separates a particular space from other spaces, which makes the opposite attribute possible. The 'space' is not necessarily a physical space; it can be a geographical region, professional discipline, organization, person, machine, or conceptual space. For example, instead of using identical material for the whole bucket of an excavator, the teeth can be separated from the rest of the bucket and a harder material can be used only for the teeth.
SC views a problem in the condition dimension. Instead of assuming all conditions are identical in terms of the circumstance under which an attribute is realized, SC separates a particular condition from other conditions, which makes the opposite attribute possible. The 'condition' can be a physical or technical condition like speed, weight, temperature, pressure, viscosity, porosity, brightness, or humidity, but it can also be workforce performance, decision-process quality, human condition, organizational condition, political situation, environmental condition, constraint, regulation, prior actions taken, culture, or any other type of condition. For example, instead of constant clarity for a lens regardless of light intensity, high brightness can be separated from lower brightnesses and the lens can darken only under intense light.
SPW views a problem in the system-level dimension. Instead of assuming that we should look only at the current problem on hand, SPW separates a particular level in the system from other levels, which makes the opposite attribute possible. The lower level refers to parts or sub-parts that constitute the problem. The higher level refers to the whole that includes other problems as well as the current problem on hand. Some risk event might be incurred only at the component level, rather than at the current problem level. Also, some risk event might be incurred only from the combination of, or interaction with, other events or conditions, rather than from the problem alone. For example, instead of assuming the constant soundness of a subject regardless of its level in the system, the higher level, where multiple adverse factors combine, can be separated from other levels, and the subject can become problematic only at that higher level, e.g. the organization, the local economy, or the environment.
Additional separation principles have since been discovered (Ball 2009). One of them is separation by perspective (SP), which means that some contradictory requirements or opposing attributes can be realized through changing the way of looking at the problem or situation. SP can explain a risk that is perceived only by a group of stakeholders that have a unique view. The 'perspective' can be that of a stakeholder group, insiders vs. outsiders, or subject matter experts.
Separation principles of TRIZ are typically used to find a way to meet contradictory requirements or achieve an opposite attribute of a system. However, this study uses separation principles to explain how an assumption within known knowledge can be broken, i.e. assumption vs. broken assumption.
Section 1 mentioned Hillson's (2010) four reasons why some risks are not possible to identify in advance. Another reason is revealed through the Swiss Cheese model (Reason 2000), which illustrates that major accidents are usually caused not by a single isolated failure, but by several defensive barriers that are not intact. The Swiss Cheese model implies that some risks are hard to identify in advance because they are not caused by a single source.
The above reasons for failure to identify risk can be addressed by corresponding separation principles. Hillson's time dependence corresponds to ST. Progress dependence corresponds to SC. Response dependence also corresponds to SC. The Swiss Cheese model corresponds to SPW. According to Dester and Blockley (2003), hazard as 'an accident waiting to happen' is based on a set of incubating preconditions, not on a single condition. This concept corresponds to SPW as well.
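These correspondences can be collected into a small lookup table. The sketch below is illustrative only; the key strings paraphrase the cited reasons and are not taken verbatim from the original sources:

```python
# Illustrative mapping from reported reasons for failing to identify a risk
# in advance to the separation principle that addresses each reason.
# The ST/SC/SPW abbreviations follow the text above.

FAILURE_REASON_TO_PRINCIPLE = {
    "time dependent": "ST",                        # Hillson (2010)
    "progress dependent": "SC",                    # Hillson (2010)
    "response dependent": "SC",                    # Hillson (2010)
    "several barriers breached together": "SPW",   # Swiss Cheese model (Reason 2000)
    "incubating preconditions": "SPW",             # Dester and Blockley (2003)
}

def principle_for(reason: str) -> str:
    """Return the separation principle suggested for a failure reason,
    or 'unmapped' when the reason is not covered by the table."""
    return FAILURE_REASON_TO_PRINCIPLE.get(reason, "unmapped")

print(principle_for("time dependent"))  # ST
```

Note that 'inherently unknowable' risks deliberately have no entry: the model targets only the hidden risks that are identifiable in principle.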
The shaded area in Table 3 can be expanded as shown in Table 4, which presents the proposed framework. Some risks are not identified in advance due to the lack of necessary knowledge or due to the assumptions behind the problem. The table shows five different ways to formulate and break such assumptions, adopted from the separation principles of TRIZ. For example, using 'by space,' an assumption 'The oil spill removal technology works at any depth' can be formulated, and this assumption can be broken, resulting in 'The oil spill removal technology may not work at 5000 feet or deeper below the sea surface.' The broken assumption is a newly identified risk that was unidentified until the implementation of this process.
To apply this model, we start by listing known knowns and known unknowns, before addressing unknown unknowns. (Unknown knowns are outside the scope of this paper because they are untapped knowledge and do not directly aid in the identification of unknown unknowns.) Next, we derive an assumption behind each listed item in the known knowns quadrant, using each one of the separation principles, e.g. 'it will work as planned at any time' or 'it is fault-free in any space.' Then we find potential areas for identifying hidden risks, i.e. potential ways of breaking the assumptions, by applying the separation principle. Listed knowledge and risks can be used as input for a separation principle. An identified risk could be a condition for other hidden risks. Multiple identified risks, if combined, might reveal a previously unidentified risk.
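The first two steps of this procedure can be sketched as a simple generator of review prompts. This is an illustrative sketch, not part of the model's formal apparatus; the scope phrasings paraphrase the assumption templates given in the text:

```python
# Sketch: for every item in the known knowns quadrant, formulate the implicit
# universal assumption along each separation dimension. Each output is a
# prompt asking whether some particular time, space, condition, system level,
# or perspective could break the assumption, revealing a hidden risk.

SEPARATION_SCOPES = {
    "ST": "at any time",
    "SS": "in any space",
    "SC": "under any condition",
    "SPW": "at any system level",
    "SP": "from any perspective",
}

def formulate_assumptions(known_knowns):
    """Return (item, principle, assumption) triples, one per principle."""
    triples = []
    for item in known_knowns:
        for principle, scope in SEPARATION_SCOPES.items():
            triples.append((item, principle,
                            f"Assumption: '{item}' holds {scope}."))
    return triples

for item, principle, assumption in formulate_assumptions(
        ["Oil spill response plans are in place"]):
    print(f"{principle}: {assumption}")
```

Breaking one of the generated assumptions, e.g. finding a storm season during which the response plans fail, then yields a newly identified risk, which can in turn feed back in as a condition for further prompts.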
4. Case study
This section reviews a recent catastrophe to illustrate how the proposed model can characterize risks that allegedly were previously unidentified and to reveal potential areas for identifying unknown unknowns.
Table 4. Mechanisms that account for unknown unknowns.
The Deepwater Horizon oil spill in 2010 is the largest accidental marine oil spill in the history of the petroleum industry (Robertson and Krauss 2010). The spill started from a sea-floor oil gusher that resulted from the 20 April 2010 explosion at the Deepwater Horizon oil rig, which was drilling on the BP-operated Macondo Prospect in the Gulf of Mexico. It flowed unabated for three months in 2010 (Jamail 2012), releasing about 4.9 million barrels of crude oil before the gushing wellhead was capped (Hoch 2010). The spill caused extensive damage to marine and wildlife habitats as well as to the Gulf's fishing and tourism industries (Tangley 2010).
To apply the proposed model toward characterizing hidden risks in the Deepwater Horizon oil spill case, we first list identified items, either certain or uncertain, as of before the oil spill occurred, as depicted in Table 5. Some of the items are assumed to have been identified in advance for the offshore deepwater field development project.
Separation principles can be applied to each numbered item in Table 5, which might reveal assumptions and hidden risks. Then, if applicable, we list any later-identified hidden risks that can be characterized by the separation mechanism; these can be listed in the shaded area of the table. In the following subsections, 'KKn' refers to the nth item in the known known quadrant and 'KUn' means the nth item in the known unknown quadrant.
4.1. KK1 (oil spill response plans are in place)
Oil spill response may be impeded by harsh weather during storm season (Graham et al. 2011). We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that the plans would be effective at any time. We can consider many different ways to differentiate the times (ST) and seek any particular time at which the assumption can be broken. Consultation with subject matter experts might be necessary to single out the particular time or to check whether the selected differentiations make sense. One way to differentiate by season is to distinguish storm season in the Gulf of Mexico. Considering this further might identify the risk above.
Oil spill response plans may fail in deepwater. We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that the plans would suffice in any space. We can consider many ways to differentiate the spaces (SS), e.g. by part of the reservoir, component of the well structure, depth below the sea surface, depth below the sea floor, or part of the organization, and seek any particular space where the assumption can be broken. One way to differentiate by sea depth is to distinguish deepwater, such as the sea floor of the Macondo Prospect, from shallower waters. Considering this further might identify the risk above. In fact, the plans were claimed to be capable of taking care of any oil spill, even in deepwater (Graham et al. 2011), but this turned out to be untrue.
Oil spill response plans may fail if an oil spill occurs (KU1) when government and industry are unprepared (KU7) (Graham et al. 2011). We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that even multiple adverse factors could not paralyze the plans, or that the adverse factors would not influence the plans at any system level. We can consider ways to differentiate the system levels (SPW) and seek any particular system level at which the assumption can be broken. One adverse factor might not have a significant impact on the plans, but multiple adverse factors as a group, i.e. a higher system level, could have a significant impact on the oil spill response plans. Considering this further might identify the risk above.
Table 5. Identified knowns and unknowns that could have been analyzed for potential hidden risks at the Macondo Prospect.

4.2. KK2 (safety procedures are in place)
Safety procedures may fail if they are under pressure due to schedule delay, budget overrun, or a safety-disregarding culture. We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that the procedures would do their job under any condition. We can consider many different ways to differentiate the conditions (SC) and seek any particular condition under which the assumption can be broken. One way to differentiate by constraint is to distinguish schedule delay, budget overrun, practice, or culture from other, normal conditions. Considering this further might identify the risk above. In fact, there was a 'rush to completion' and no 'culture of safety on the rig.' Consequently, the management's cost-cutting decisions compromised safety at the rig and, as a result, 11 people lost their lives in the blowout (Graham et al. 2011).
Safety procedures may be faulty from the perspective of field workers or external observers. We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that the procedures would do their job from any perspective. We can consider many ways to differentiate perspectives (SP) and seek any particular perspective from which the assumption can be broken. One way to differentiate by stakeholder group is to distinguish field workers or external safety specialists from others. Considering this further might identify the risk above. The company claimed that the safety procedures were sufficient, but they actually turned out to be worse than what the company claimed (Graham et al. 2011).
4.3. KK3 (stakeholders include seafood industry, tourism industry, oil and gas industry, and residents of the Gulf of Mexico)
No significant case supported by the literature was found.
4.4. KK4 (required risk assessments have been completed)
The assessment of oil spill risk may be faulty under deepwater conditions. We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that the risk assessments would suffice under any condition. We can consider many ways to differentiate conditions (SC) and seek any condition under which the assumption can be broken. One way to differentiate by physical condition is to distinguish the deepwater condition, involving high pressure and low temperature, from other, normal conditions. Considering this further might identify the risk above. In fact, the oil spill was underestimated and became unmanageable (Graham et al. 2011).
The assessment of oil spill risk may be faulty if remotely operated underwater vehicle failure (KK11), containment dome failure (KK12), containment boom failure (KK14, KU5), top kill failure (KK13), and oil dispersant failure (KK15) were combined. We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that even multiple adverse factors could not paralyze the risk assessments, or that the adverse factors would not influence the risk assessment at any system level. We can consider various ways to differentiate the system levels (SPW) and seek any particular system level at which the assumption can be broken. One adverse factor might not have a significant impact on the risk assessment, but multiple adverse factors as a group could have a significant impact on the risk assessment. Considering this further might identify the risk above.
4.5. KK5 (government conducts oversight of the project)
Government oversight may fail unless the overseers have sufficient knowledge. We recognize this risk retrospectively, but it may not have been identified in advance because it was implicitly assumed that government oversight would do its job under any condition. We can consider many ways to differentiate conditions (SC) and seek any condition under which the assumption can be broken. One way to differentiate by organizational condition is to distinguish the government's lack of knowledge from other, normal conditions. In fact, government oversight failed to reduce the risks of a well blowout due to lack of knowledge about offshore deepwater drilling (Graham et al. 2011).
4.6. KK6 (the company has the technology to keep tubes centered)
The tube centralizers may fail to keep tubes centered if expediency drives operational decisions. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the tube centralizers would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate by organizational condition is to distinguish decision-making driven by expediency from normal decision-making. In fact, to save time, BP used 6 of the devices that keep tubes centered, ignoring models calling for 21. Casing should be centered in the well hole so that the cement pumped in around it sets evenly (Swenson 2013), but BP on shore decided not to wait for more centralizers (Graham et al. 2011).
4.7. KK7 (cement integrity test is in place)
The cement integrity test may fail if a poor management decision is made. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the cement integrity tests would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate by organizational condition is to distinguish a poor management decision from normal decision-making. In fact, BP had hired the contractor Schlumberger to run tests on the newly cemented well, but sent Schlumberger's crew home without having it run the test, known as a cement bond log (Swenson 2013); BP on shore decided not to run the cement evaluation log in order to save time (Graham et al. 2011).
4.8. KK8 (pressure test is in place)
Pressure tests may fail to do their job if their results are misinterpreted. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that pressure tests would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate by human condition is to distinguish misinterpretation of the test result from normal human performance. In fact, rig workers reported confusion over the negative test, which measures upward pressure from the shut-in well (Swenson 2013). BP (and perhaps Transocean) on the rig decided to save time by not performing further well integrity diagnostics, despite troubling and unexplained negative pressure test results (Graham et al. 2011).
Journal of Risk Research 161
4.9. KK9 (heavy drilling mud is used to keep any upward pressure under
control)
Heavy drilling mud may fail to control upward pressure if a poor decision is made. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that heavy drilling mud would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate by organizational condition is to distinguish a poor decision from normal decisions. In fact, BP decided to take heavy drilling mud out of the system, to 3000 feet below the normal point, and earlier than usual. The mud barrier was not there to stem the gas kick that destroyed the rig (Swenson 2013). BP on shore decided to displace mud from the riser before setting the surface cement plug (Graham et al. 2011).
4.10. KK10 (blowout preventer is used to prevent oil spill)
The blowout preventer may fail unless it has proper diagnostic tools. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the blowout preventer would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate by technical condition is to distinguish diagnostic incapability from normal capability. In fact, a stuck drill pipe and intense pressure from the blowout caused a section of pipe to bend and become lodged inside the blowout preventer. The blind shear rams could not cut the bent pipe completely and failed to seal the well (Swenson 2013), and the blowout preventer did not work properly owing to the lack of key diagnostic tools (Graham et al. 2011).
The blowout preventer may also fail if tube centralization failure (KK6), cement integrity test failure (KK7), pressure test failure (KK8), and mud barrier failure (KK9) are combined. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the blowout preventer would be robust to any adverse factor. We can consider ways to differentiate the system levels (SPW) and seek a system level at which the assumption can be broken. One adverse factor might not have a significant impact on the reliability of the blowout preventer, but multiple adverse factors as a group could.
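A one-line model can make this separation-between-parts-and-whole point concrete. The severity scores and the tolerance threshold below are invented for illustration; they are not engineering data from the rig:

```python
# Hypothetical sketch: no single adverse factor exceeds the assumed
# tolerance of the blowout preventer, but the factors combined do.

DEFECT_LOAD = {                      # invented severity scores
    "KK6 centralizer shortfall": 0.30,
    "KK7 cement test skipped":   0.30,
    "KK8 pressure test misread": 0.25,
    "KK9 mud barrier removed":   0.30,
}
TOLERANCE = 1.0                      # assumed capacity of the whole system

def whole_system_fails(defect_loads):
    """The whole fails when the combined load exceeds tolerance,
    even though each part stays within it."""
    return sum(defect_loads.values()) > TOLERANCE

assert all(load <= TOLERANCE for load in DEFECT_LOAD.values())  # each part looks fine
print(whole_system_fails(DEFECT_LOAD))  # True: the group breaks the assumption
```

The design point is that robustness checked part-by-part says nothing about the whole, which is exactly the assumption the SPW principle asks us to question.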
4.11. KK11 (remotely operated underwater vehicles are available just in case)
Remotely operated underwater vehicles may fail if their problem-solving capability is limited. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the underwater vehicles would do their job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate by technical condition is to distinguish limited problem-solving capability from normal capability. In fact, the vehicles failed to resolve the issue because the problem was beyond their capability (CBC_News 2010).
4.12. KK12 (containment dome method is available in case oil spill occurs)
The containment dome method may fail if it operates in deepwater. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the containment dome method would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate by physical condition is to distinguish the deepwater condition from normal conditions. In fact, the containment dome, also known as a "top hat" or "cofferdam," failed to collect the spilt oil in deepwater (Bolstad, Clark, and Chang 2010; Graham et al. 2011). The method had been tested in much shallower water but not in deepwater. It failed because methane gas escaping from the well came into contact with cold seawater and formed slushy hydrates, clogging the dome with hydrocarbon ice.
4.13. KK13 (top kill method is available in case oil spill occurs)
The top kill method may fail if the oil spill occurs in deepwater or the oil flow rate is high. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the top kill method would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate the conditions is to distinguish an extreme physical condition, such as deepwater or a high oil flow rate, from normal conditions. In fact, the method failed because of both factors (BBC 2010; Graham et al. 2011).
4.14. KK14 (oil containment boom is available in case oil spill occurs)
No significant case supported by the literature was found.
4.15. KK15 (oil dispersant is available in case oil spill occurs)
Oil dispersant may fail or have serious side effects in deepwater (Swartz 2010; Graham et al. 2011). We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the oil dispersant would do its job under any conditions. We can consider many ways to differentiate conditions (SC) and seek a condition under which the assumption can be broken. One way to differentiate the conditions is to distinguish an extreme physical condition, such as the high pressure and low temperature in deepwater, from normal conditions.
4.16. KK16 (even if oil spill occurs, the amount of spillage will be limited due to
multiple response measures)
A "limited" oil spill, which is the perspective of the oil and gas industry, may be perceived as disastrous from the perspectives of the seafood and tourism industries, residents of the Gulf of Mexico (GOM), or environmentalists. We recognize this risk retrospectively, but it may not have been identified because it was implicitly assumed that the spillage amount would be limited from any perspective. We can consider many ways to differentiate perspectives (SP) and seek a perspective from which the assumption can be broken. One way to differentiate the perspectives is to distinguish the seafood and tourism industries, residents of the GOM, or environmentalists from other stakeholders.
5. Conclusion and future work
As revealed in the literature and the case study, not all of what used to be unknown unknowns are unrecognizable. This study proposed the characteristics of unknown unknowns in a structured framework to explain why they are hard to identify in advance. The proposed model adopts the separation principles of TRIZ and applies them to formulate the assumptions behind what we think we already know, break those assumptions, and thus reveal hidden risks. The case study shows how the proposed model could have been applied to the Deepwater Horizon oil rig and might have revealed hidden risks that were subsequently documented. It showed that hidden risks do not necessarily appear without warning but might be identified using existing knowledge and the triggers explained by the separation principles.

The proposed framework may not be able to explain all lurking risks, but it opens a new frontier in dealing with unknown unknowns. Risks left unidentified because of a knowledge gap are outside the scope of the proposed method, but the method may help identify the knowledge gap that needs to be filled. Verification of that potential capability remains future work.
Future work will include sub-classifying the conditions for the "separation upon condition" principle. As observed in the case study, separation upon condition (SC) is applied more frequently and broadly than the other separation principles, and it needs further classification to be more manageable.
The proposed model will also need to be extended to include more refined characterization and other characteristics from the literature in a consolidated framework. This extension can include tailoring the model to a target industry or organization. We will also develop a model or process to identify hidden risks based on the extended characterization. The identification model may include a structured questionnaire to induce the identification of potential hidden risks. The identification model cannot be a fortune teller for a risky project, but it should be able to help identify hidden risks that are not truly unimaginable. In addition, we will validate and enhance this model using actual projects rather than relying on retrospective study. This will show whether the model actually helps identify hidden risks and how it can be improved.
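As a sketch of what such a structured questionnaire might look like, the snippet below turns each separation principle into a generic probe applied to an assumption behind a "known known". The prompt wording is our own illustration, not taken from the paper; SC, SPW, and SP follow the paper's abbreviations, while ST and SS are assumed here for separation by time and space.

```python
# Illustrative sketch of a separation-principle questionnaire.
# The wording of each probe is hypothetical; SC, SPW, and SP match the
# paper's abbreviations, ST and SS are assumed for time and space.

PROBES = {
    "ST (time)":         "At what time or phase could the assumption that {a} stop holding?",
    "SS (space)":        "In what place or region could the assumption that {a} stop holding?",
    "SC (condition)":    "Under what physical, technical, human, or organizational condition could the assumption that {a} be broken?",
    "SPW (parts/whole)": "At what system level could the assumption that {a} fail, even if no single part fails?",
    "SP (perspective)":  "From which stakeholder's perspective does the assumption that {a} not hold?",
}

def probe_assumption(assumption):
    """Generate one hidden-risk prompt per separation principle."""
    return [f"{code}: {text.format(a=assumption)}"
            for code, text in PROBES.items()]

for prompt in probe_assumption("the blowout preventer will do its job"):
    print(prompt)
```

Applied to each "known known" in Table form, a generator like this would mechanize the first step of the proposed method: making the implicit assumption explicit so that it can be tested against each separation principle.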
References
Alles, Michael. 2009. "Governance in the Age of Unknown Unknowns." International Journal of Disclosure and Governance 6: 85–88.
Aven, Terje. 2012. "The Risk Concept: Historical and Recent Development Trends." Reliability Engineering and System Safety 99: 33–44.
Ball, Larry. 2009. TRIZ Power Tools, Job #5: Resolving Problems. London: Third Millennium Publishing.
BBC. 2010. "'Top Kill' BP Operation to Halt US Oil Leak Fails." BBC News, May 29.
Bolstad, Erika, Lesley Clark, and Daniel Chang. 2010. "Engineers Work to Place Siphon Tube at Oil Spill Site." theStar.com, May 14.
CBC_News. 2010. "Robot Subs Trying to Stop Gulf Oil Leak." CBC News, April 25.
Chang, Wei-Wen, Cheng-Hui Lucy Chen, Yu-Fu Huang, and Yu-Hsi Yuan. 2012. "Exploring the Unknown: International Service and Individual Transformation." Adult Education Quarterly 62 (3): 230–251.
Cleden, D. 2009. Managing Project Uncertainty. Farnham: Gower.
Daase, Christopher, and Oliver Kessler. 2007. "Knowns and Unknowns in the 'War on Terror': Uncertainty and the Political Construction of Danger." Security Dialogue 38 (4): 411–434.
Dester, W. S., and D. I. Blockley. 2003. "Managing the Uncertainty of Unknown Risks." Civil Engineering and Environmental Systems 20 (2): 83–103.
Galpin, Timothy. 1995. "Pruning the Grapevine." Training & Development 4: 28–33.
Geraldi, Joana G., Liz Lee-Kelley, and Elmar Kutsch. 2010. "The Titanic Sunk, So What? Project Manager Response to Unexpected Events." International Journal of Project Management 28: 547–558.
Graham, Bob, William K. Reilly, Frances Beinecke, Donald F. Boesch, Terry D. Garcia, Cherry A. Murray, and Fran Ulmer. 2011. Deepwater: The Gulf Oil Disaster and the Future of Offshore Drilling: Report to the President. Washington, DC: National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling.
Hillson, David. 2010. Exploiting Future Uncertainty: Creating Value from Risk. Burlington, VT: Gower Publishing.
Hoch, M. 2010. "New Estimate Puts Gulf Oil Leak at 205 Million Gallons." PBS NewsHour, August 2.
Hole, Kjell Jørgen. 2013. "Management of Hidden Risks." Computer 46: 65–70.
Jamail, D. 2012. "BP Settles While Macondo Seeps." Al Jazeera English.
Jorion, Philippe. 2009. "Risk Management Lessons from the Credit Crisis." European Financial Management 15 (5): 923–933.
Keil, Mark, Paul E. Cule, Kalle Lyytinen, and Roy C. Schmidt. 1998. "A Framework for Identifying Software Project Risks." Communications of the ACM 41 (11): 76–83.
Luft, J., and H. Ingham. 1955. "The Johari Window, a Graphic Model of Interpersonal Awareness." Proceedings of the Western Training Laboratory in Group Development, Los Angeles, CA.
Luong, Mary G., and Anne Collins McLaughlin. 2011. "Improving Communication of Usability Perceptions: An Analysis of a Modified-Johari Window as a Tool for Software Designers." Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting, Las Vegas, NV.
Makridakis, Spyros, Robin M. Hogarth, and Anil Gaba. 2009. "Forecasting and Uncertainty in the Economic and Business World." International Journal of Forecasting 25: 794–812.
Mann, Darrell. 2007. Hands-on Systematic Innovation for Business and Management. 2nd ed. Clevedon: IFR Consultants.
Ogaard, Ryan. 2009. "Known Unknowns." Reinsurance 3: 9.
PMI. 2013. A Guide to the Project Management Body of Knowledge. 5th ed. Newtown Square, PA: Project Management Institute.
Ramasesh, Ranga V., and Tyson R. Browning. 2014. "A Conceptual Framework for Tackling Knowable Unknown Unknowns in Project Management." Journal of Operations Management 32: 190–204.
Rantanen, Kalevi, and Ellen Domb. 2008. Simplified TRIZ: New Problem Solving Applications for Engineers and Manufacturing Professionals. 2nd ed. New York: Auerbach Publications.
Raydugin, Yuri. 2012. "Quantifying Unknown Unknowns in an Oil and Gas Capital Project." International Journal of Risk and Contingency Management 1 (2): 29–42.
Reason, James. 2000. "Human Error: Models and Management." British Medical Journal 320: 768–770.
Robertson, C., and C. Krauss. 2010. "Gulf Spill is the Largest of Its Kind, Scientists Say." The New York Times, August 2.
Rosa, Eugene A. 1998. "Metatheoretical Foundations for Post-normal Risk." Journal of Risk Research 1 (1): 15–44.
Rumsfeld, Donald. 2002. "Department of Defense News Briefing." US Department of Defense, February 12.
Shenton, Andrew K. 2007. "Viewing Information Needs Through a Johari Window." Reference Services Review 35 (3): 487–496.
Stoelsnes, Roger R. 2007. "Managing Unknowns in Projects." Risk Management 9 (4): 271–280.
Swartz, Spencer. 2010. "BP Provides Lessons Learned from Gulf Spill." The Wall Street Journal, September 3.
Swenson, Dan. 2013. "Possible Causes of the Deepwater Horizon Explosion and BP Oil Spill." nola.com, February 22.
Talbot, Patrick J. 2006. "Automated Discovery of Unknown Unknowns." Military Communications Conference 2006 (MILCOM 2006), Washington, DC.
Tangley, L. 2010. "Bird Habitats Threatened by Oil Spill." National Wildlife, June 17.
Appendix 1. Literature review on risk categorizations
Many researchers have tried to capture the characteristics of hard-to-detect risks or uncertainties. One of the earliest structured models for understanding unknowns or uncertainties is the Johari window. It was created as an interpersonal awareness model (Luft and Ingham 1955) and employs a 2 × 2 matrix to characterize information about a person, based on whether it is known to the person or to others, as arena, blind spot, façade, or unknown. Arena represents open information that both the individual and others are aware of. Blind spot represents information that the individual is not aware of but others are. Façade represents hidden information about the individual that others are unaware of. And unknown represents the individual's behaviors or motives that are not recognized by anyone, even him- or herself.
Researchers have proposed various versions of the Johari window. An organizational version was proposed to help organizations assess how they communicate, by whether information is exposed to stakeholders and whether the organization receives feedback (Galpin 1995).
A modified Johari window was also proposed to classify information needs by whether the information is known to the information professional and by the extent of the individual's awareness of the information (Shenton 2007), as shown in Table 6. Shenton's model places information needs into five broad categories instead of four. It uses "information professional" instead of "others" for one dimension and "individual" instead of "self" for the other, but the biggest change is the additional "misunderstood by" column for the "individual" dimension.

The Johari window has also been adapted to capture how designers and users perceive a software program's usability (Luong and McLaughlin 2011). It was also used to explore the unknowns regarding international service experience and individual transformation through environment–person interaction in cross-cultural settings (Chang et al. 2012).
Table 6. Types of information needs as represented in a Johari window (Shenton 2007).

                                        Misunderstood by    Known to the        Not known to the
                                        the individual      individual          individual
Known to the information professional   Misguided needs     Expressed needs     Inferred needs
Not known to the information            Misguided needs     Unexpressed needs;  Dormant or
professional                                                independently-met   delitescent needs
                                                            needs
A.1. Classifications of uncertainty/risk using 2 × 2 matrices
There are other classifications of uncertainty that use a 2 × 2 matrix. For example, Keil et al. (1998) used a 2 × 2 matrix as a framework for identifying software project risks. Their framework classified risks by whether the perceived relative importance of a risk is high or moderate, and whether the perceived level of control by the decision-maker is high or low. These dimensions were used to classify risks into "scope and requirements" (high importance, high control), "customer mandate" (high importance, low control), "execution" (moderate importance, high control), and "environment" (moderate importance, low control).
A.2. Other classifications of uncertainty/risk
Another way to categorize uncertainties is by whether knowledge and information about them exists but is not accessed, or simply does not exist. Stoelsnes (2007) divided unknowns into two groups: unknown-knowable and unknown-unknowable. Unknown-knowable characterizes events or conditions for which knowledge and information are available but not accessed. Unknown-unknowable characterizes events or conditions for which there is no knowledge or information to access in advance, making it impossible to evaluate them in advance.
One scheme classifies uncertainty as either subway uncertainty or coconut uncertainty (Makridakis, Hogarth, and Gaba 2009). Subway uncertainty refers to what can be modeled and reasonably incorporated into probabilistic predictions that assume, for example, normally distributed forecasting errors. Coconut uncertainty pertains to events that cannot be modeled, and also to rare and unique events that are simply hard to envision. Subway uncertainty is quantifiable; coconut uncertainty is not.
A.3. Unknown unknowns
The concept of adapting the Johari window to risk classification received further attention when former US Secretary of Defense Donald Rumsfeld used the term unknown unknowns (Rumsfeld 2002). This led many researchers to employ quadrants of knowledge, i.e. known known, known unknown, unknown known, and unknown unknown, to understand and explain the nature of risk.

A typical classification of risks is based on the level of knowledge about a risk event's occurrence (either known or unknown) and the level of knowledge about its impact (either known or unknown). This leads to four possibilities, shown with examples in Table 2 (Cleden 2009).
Talbot (2006) decomposed unknown unknowns into three types: new hypotheses that might explain a situation, new links that explain previously unknown relationships between facts, and new story fragments that describe the significance of previously unidentified collections of factors that might drive a decision.
Daase and Kessler (2007) defined the dimensions methodological knowledge and empirical knowledge and used them to classify dangers, as shown in Table 7. Empirical knowledge pertains to phenomena of reality, some of which could pose a danger. Methodological knowledge is knowledge about ways to identify such things. If the empirical facts of a danger are known and this knowledge is known to be reliable, meaning that it is known how the danger is identified, a threat exists. If, however, the factual knowledge is
Table 7. Four kinds of dangers (Daase and Kessler 2007).

                                          Empirical knowledge
                                          'Knowns'       'Unknowns'
Methodological knowledge   'Known'        Threat         Risk
                           'Unknown'      Ignorance      Disaster
partial, yet methods exist for reducing the uncertainty, the danger is perceived as risk. If no or scant factual knowledge about a danger exists and a method for assessment is not available, the danger, if materialized, will be considered a disaster. If the facts of a danger are largely known but this knowledge is neglected, suppressed, or forgotten, the danger is intensified by what we call ignorance.
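Table 7's mapping is mechanical enough to state as a lookup. The sketch below simply encodes the four combinations described above; the lowercase labels are our own spelling of the table's cells:

```python
# Daase and Kessler's (2007) four kinds of danger, keyed by
# (methodological knowledge, empirical knowledge).

DANGER = {
    ("known",   "known"):   "threat",
    ("known",   "unknown"): "risk",
    ("unknown", "known"):   "ignorance",
    ("unknown", "unknown"): "disaster",
}

def classify_danger(methodological, empirical):
    """Map the two knowledge dimensions to a kind of danger."""
    return DANGER[(methodological, empirical)]

print(classify_danger("known", "unknown"))    # risk
print(classify_danger("unknown", "unknown"))  # disaster
```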
Based on an examination of the global financial crisis that started in 2007, Jorion (2009) classified risks into three categories: known knowns, known unknowns, and unknown unknowns. Known unknowns encompass known risk factors such as model risk, liquidity risk, and counterparty risk. According to Jorion, unknown unknowns, which include regulatory and structural changes in capital markets, are totally outside the scope of most scenarios because no one could identify them in advance.
Geraldi, Lee-Kelley, and Kutsch (2010) defined unexpected events as the outcome of a range of residual uncertainties that can threaten the viability of a project. They characterized unexpected events by probability (unlikely), impact (high), pertinence (untopical), and timing (sudden). Even though the authors did not indicate how unexpected events would map onto a 2 × 2 matrix, "unexpected events" appears to be equivalent to, or a subset of, unknown unknowns.
"Black swan" is sometimes used to mean hidden risks, but Hole (2013) classified hidden risks into black swans and gray swans. A black swan is a metaphor for a high-impact, rare event that comes as a complete surprise to all stakeholders. A gray swan is a metaphor for a high-impact, rare event that is somewhat predictable yet overlooked by most stakeholders. The distinction is that black swans cannot be assessed, whereas gray swans can be partly assessed.
168 S.D. Kim
... In practice, a typical approach to risks is trying to identify them as early as possible and respond to them as quickly as possible once identified (Kim, 2017). However, green projects anticipate unidentified risks, also known as 'unknown unknowns' that have traditionally been underemphasized by risk management (Thamhain, 2013). ...
Conference Paper
Full-text available
Green building projects are ambitious in terms of the complexity of structures, design requirements, information flows, stakeholder integration and technological integration. As a consequence, management of these projects is becoming increasingly integrated. However, risk management (RM) has taken little account of these emergent interconnected stakeholders, interdependent tasks, inseparable risks and iteration in the design process. This leads to poor risk management outcomes, where traditional risk management practices that rely on allocating risks to specific individual entities are not able to accommodate the complexities of a collaborative integrated design. As part of a comprehensive research into how project stakeholders in collaborative design teams manage inseparable risks within their interdependent design tasks, multiple case studies were analysed using empirical data from semi-structured interviews of experienced practitioners. The abductive approach provided explanations of the continuous interplay between theory and various real-life examples. To bridge the current research gap, a matrix-based approach of using Dependency Structure Matrix to integrate the stakeholder dimension and the task dimension to solve for inseparable risks, enabled Collaborative Risk Management (CRM) to filter out most complexities, so that efforts could be directed to appropriate risk sharing and analysis of important parts of the design process. In order to judge the collaborative climate and satisfaction of each stakeholder in the design team, stakeholders suggested a decentralized process that foster a cooperative culture, contract negotiation and communication as key to ensuring that all parties are able to perform their respective tasks adequately. To manage inseparable risk, stakeholders suggested proportional risk sharing approaches, regular team meetings and timeous information sharing. 
The project should have a shared insurance cover that will balance the risks fairly between stakeholders; in absence of bad faith; leading to a reasonable price; qualitative performance and the minimization of disputes.
... The examination of operational uncertainty can be based on the probability of occurrence of events (known or unknown) and the evaluation of its impact (known or unknown). The review leads to four possibilities regarding events, which are known known, known unknown, unknown known, and unknown unknown to understand and explain the nature of risk (Kim, 2017). The first three event cases can be covered by using risks assessment methods and processes, but totally unknown possibility cases need to be covered by a different tool. ...
Conference Paper
The smooth operation of contemporary society relies on the collaborative functioning of multiple essential infrastructures, with their collective effectiveness increasingly hinging on a dependable national system of systems construction. The central focus within the realm of cyberspace revolves around safeguarding this critical infrastructure (CI), which includes both physical and electronic components essential for societal operations. The recent surge in cyber-attacks targeting CI, critical information infrastructures, and the Internet, characterized by heightened frequency and increased sophistication, presents substantial threats. As perpetrators become more adept, they can digitally infiltrate and disrupt physical infrastructure, causing harm to equipment and services without the need for a physical assault. The operational uncertainty of CI in these cases is obvious. The linchpin of cyber security lies in a well-executed architecture, a fundamental requirement for effective measures. The framework of this paper emphasizes organizational guidance in cyber security management by integrating the cyber security risks assessment and the cyber resilience process into overall continuity management of organizations business processes.
... The examination of operational uncertainty can be based on the probability of occurrence of events (known or unknown) and the evaluation of its impact (known or unknown). The review leads to four possibilities regarding events, which are known known, known unknown, unknown known, and unknown unknown to understand and explain the nature of risk (Kim, 2017). The first three event cases can be covered by using risks assessment methods and processes, but totally unknown possibility cases need to be covered by a different tool. ...
Article
Full-text available
The smooth operation of contemporary society relies on the collaborative functioning of multiple essential infrastructures, with their collective effectiveness increasingly hinging on a dependable national system of systems construction. The central focus within the realm of cyberspace revolves around safeguarding this critical infrastructure (CI), which includes both physical and electronic components essential for societal operations. The recent surge in cyber-attacks targeting CI, critical information infrastructures, and the Internet, characterized by heightened frequency and increased sophistication, presents substantial threats. As perpetrators become more adept, they can digitally infiltrate and disrupt physical infrastructure, causing harm to equipment and services without the need for a physical assault. The operational uncertainty of CI in these cases is obvious. The linchpin of cyber security lies in a well-executed architecture, a fundamental requirement for effective measures. The framework of this paper emphasizes organizational guidance in cyber security management by integrating the cyber security risks assessment and the cyber resilience process into overall continuity management of organizations business processes.
... She might perceive a low level of crisis training to be sufficient when it is not. We cannot confirm this phenomenon explains the findings, but we believe the conundrum of "unknown unknowns" is a contributing factor (Kim, 2017;Mills, 2019). People may overestimate their ability to control uncertain problems and are hence, overly optimistic. ...
Article
China’s increasing international prominence has prompted additional research on how Chinese firms manage organizational crises. The purpose of this paper is to identify patterns of concerns and experiences with crises in China. We report on a survey of 105 managers and non-managers in China about their experience and concern with crises in their firms. Our analysis underscores three key findings. First, one's concern about a crisis is strongly associated with one's experience involving that crisis. Second, views about crisis experience and concern differ between employees in state-owned enterprises (SOEs) and non-SOEs. Finally, despite these differences, perspectives on crisis training among SOE and non-SOE firms are similar. This paper augments the literature by identifying relationships among crisis experience, crisis concern, and training in Chinese organizations.
Article
In einer dynamischen und komplexen Welt sind Szenariomethoden nützliche In-strumente, um die Auswirkungen zukünftiger Entwicklungen zu verstehen bzw. die Gestaltung der Zukunft zu unterstützen. Nach ihrer Einführung im Verteidigungs¬sektor in den 1960er Jahren wurde die Szenariomethodik in vielen Bereichen ein¬gesetzt und an die unterschiedlichen Bedürfnisse angepasst, was zu einer Vielzahl an Variationen und Kombinationen der Szenariomethoden führte. Auf der Grund¬lage einer umfangreichen Literaturrecherche und jahrelanger praktischer Erfahrung reflektieren wir in diesem Artikel den Status quo von Forschung und Anwendung. Wir identifizieren, wie und für welche Zwecke die Methodik in der Praxis am besten eingesetzt werden kann. Wir unterscheiden fünf mit Szenarien verbundene Zielkategorien und entwickeln ein Prozessmodell, das bisherige Ansätze zusammenführt. Wir befassen uns mit der konkreten Ausgestaltung der Teilschritte des Prozesses (Methodenvarianten), mit Methodenkombinationen sowie mit alternativen Methoden. Des Weiteren identi¬fizieren wir mögliche Erfolgsfaktoren und potenzielle Fallstricke. Schließlich werden noch offene (Forschungs-)Fragen formuliert, die auf Basis der vorliegenden Bestandsaufnahme zu Systematisierung, Qualitätssicherung und Evaluation beitragen können, etwa durch ein Entscheidungsunterstützungskonzept zur Prozess¬optimierung samt Methodenauswahl.
Article
Widening participation in higher education is a key priority in the UK, aiming to address historical and multifaceted disparities in which access to university has traditionally favoured more affluent backgrounds. Achieving balanced representation across socio-economic groups is essential for creating a level playing field. This requires comprehensive efforts, including targeted initiatives for improving access to education, providing financial assistance, and implementing inclusive policies. The term 'widening participation' encompasses various interventions aimed at creating a more inclusive higher education system. This study explores the effectiveness of interventions in enhancing students' self-belief and confidence for widening participation in higher education, as perceived by widening participation tutors. While there is a considerable amount of literature on the perspectives of widening participation students, less attention has been given to the views of tutors; this article aims to help fill that gap in recent research. A purposive sample of six independent tutors, considered experts in the field of widening participation, took part in semi-structured telephone interviews. A thematic approach was employed for data analysis, and saturation was reached by the fourth interview. Two main themes emerged, 'Spectrum of self-motivation' and 'Widening participation as a medium to success,' falling under the overarching theme of 'Actualisation of student potential'. Participants expressed a coherent sense of unity in acknowledging the positive impact of widening participation strategies in helping underrepresented students access higher education. However, they also recognised the structural inequalities these students face, which can limit the effectiveness of widening participation interventions.
Overall, the findings suggest that widening participation interventions play a significant role in empowering self-determined students from underrepresented groups to pursue higher education.
Chapter
With the changing circumstances across the world due to the COVID-19 pandemic, distance learning and the use of technology have grown at a very fast pace. This shift has produced a range of online classes, along with new security risks that these classes pose for organizations. Over time, it has become clear that COVID-19 is now a part of our lives, and that what we once considered a matter of days or weeks may last for a very long, still unknown period. Most importantly, this new phase of life has become the new normal. It is now necessary to adjust the course of our lives and to conduct activities in line with this new normal. Most business activities are being adapted to social distancing, with the education system foremost among them. The study adopted a qualitative methodology, gathering data through interviews and a review of the literature.
Article
The life cycle cost (LCC) design method seeks to improve conventional design practice by including different costs, such as risk costs, over the life cycle of the structure in the design procedure. The present study introduces a modified life cycle cost (MLCC) approach for the design of excavations. The modifications proposed to the typical LCC design method include accounting for decision-makers' risk aversion or risk seeking in the main LCC formula through a risk-seeking factor, and incorporating the effect of risk exposure time on risk cost through a risk duration impact factor. The risk-seeking factor is obtained by identifying the risky behaviour of decision-makers based on expected utility theory. The risk duration impact factor is evaluated by analysing statistical information about high-risk excavations versus their lifetimes. The MLCC design method is evaluated on real deep urban excavation projects; it proves applicable for design and yields sensible outputs.
Article
Full-text available
Projects continue to fail at a high rate despite the well-known risk benchmarks published decades ago. Risk assessment and contingency planning are needed in oil and gas (O&G) capital projects because of many ‘unknown unknowns.’ Uncertainty must be estimated for the project schedule as well as for investment costs. Quantitative estimates and diagramming tools can assist in understanding and communicating project risk levels. This paper outlines and applies a method for quantifying unknown unknowns in the O&G industry based on a case study. Four dimensions of unknown unknowns are discussed: novelty of a project, phase of project development, type of industry, and bias. Uncertainty is classified as unknown unknowns, bias, known unknowns, and corporate risks. Practical recommendations are made to quantify uncertainty using probabilistic risk models, and then to integrate these estimates into the budget and schedule.
Article
One of the most difficult problems in usability and user experience has been to clearly illustrate differences in perceptions between software design teams and end users of their products. Many strategies are directed at closing this gap, such as video-taping users unable to perform a function on an interface and collecting quotes from focus groups and interviews. In the current study we have created a novel tool to quickly elicit perceptions of usability from software designers and end users based on the Johari window (Luft, 1984). The benefit of this technique is a visualization of differences in usability perceptions and the flexibility to be used in many scenarios; for example, to elicit inter-team differences in perceptions of usability by the designers themselves. We will first provide the tool to a design team and group of end users, then gather feedback from designers about the modified-Johari window’s usefulness and whether they desire to use it in future software development.
Article
A challenge in most projects is dealing with unexpected events or conditions that may influence project accomplishment. This paper presents an outline of a structure that can be used when addressing such events or conditions. High Reliability Organizations (HROs) have ways of acting and managing that enable them to handle the unexpected; the hallmarks and characteristics of HROs are presented in this paper. The paper furthermore suggests ways of acting and managing through which project organizations may address and manage unexpected events or conditions in projects.
Article
Understanding and dealing with the unknown is a major challenge in project management. An extensive body of knowledge (theory and technique) exists on the "known unknowns," i.e., uncertainties which can be described probabilistically and addressed through the conventional techniques of risk management. Although some recent studies have addressed projects where the existence of unknown unknowns (unk unks) is readily apparent or may be assumed given the type of project (e.g., new product development or new process implementation), very little work has been reported with respect to projects in general on how a project manager might assess a project's vulnerability to unk unks. In this paper, we present a conceptual framework to deal with (i.e., recognize and reduce) knowable unk unks in project management. The framework is supported by insights from a variety of theories, case analyses, and experiences. In this framework, we first present a model of the key factors, relating to both project design and behavioral issues, that increase the likelihood of unk unks, together with a set of propositions linking these factors to unk unks. We then present a set of design and behavioral approaches that project managers could adopt to reduce knowable unk unks. Our framework fills a gap in the project management literature and makes a significant practical contribution: it helps project managers diagnose a project to recognize and reduce the likelihood of unk unks and thus deal more effectively with otherwise unrecognized risks and opportunities.
Article
The ability to recover quickly from large-impact, hard-to-predict, and rare incidents without incurring sizable permanent damage is vital to major stakeholders in ICT infrastructures of national importance.