The Risk of Using Risk Matrices
Philip Thomas, SPE, and Reidar B. Bratvold, SPE, University of Stavanger; and
J. Eric Bickel, SPE, University of Texas at Austin
The risk matrix (RM) is a widely espoused approach to assess and
analyze risks in the oil and gas (O&G) industry. RMs have been
implemented throughout that industry and are used extensively in
risk-management contexts. This is evidenced by numerous SPE
papers documenting RMs as the primary risk-management tool.
Yet, despite this extensive use, the key question remains to be
addressed: Does the use of RMs guide us to make optimal (or
even better) risk-management decisions?
We have reviewed 30 SPE papers as well as several risk-management standards that illustrate and discuss the use of RMs in a variety of risk-management contexts, including health, safety, and environment (HSE); financial; and inspection. These papers promote the use of RMs as a "best practice." Unfortunately, they do not discuss alternative methods or the benefits and detriments of the use of RMs.
The perceived benefit of the RM is its intuitive appeal and sim-
plicity. RMs are supposedly easy to construct, easy to explain,
and easy to score. They even might appear authoritative and intel-
lectually rigorous. However, the development of RMs has taken
place completely isolated from scientific research in decision
making and risk management. This paper discusses and illustrates
how RMs produce arbitrary decisions and risk-management actions.
These problems cannot be overcome because they are inherent in
the structure of RMs. In their place, we recommend that O&G pro-
fessionals rely on risk- and decision-analytic methods that rest on
250 years of scientific thought and testing.
In the O&G industry, risk-intensive decisions are made daily. In their attempt to implement a sound and effective risk-management culture, many companies use RMs and specify this in "best-practice" documents. Furthermore, RMs are recommended in numerous international and national standards, such as those issued by ISO, API, and NORSOK. The popularity of RMs has been attributed in part to their visual appeal, which is claimed to improve communications. Despite these claimed advantages, we are not aware of any published scientific studies demonstrating that RMs improve risk-management decisions.
However, several studies indicate the opposite:
that RMs are conceptually and fundamentally flawed. For example,
Cox et al. (2005) derived and discussed several fundamental flaws
introduced through the qualitative scoring system that is often used
in RMs. Cox (2008) provided further examples of these flaws and
presented a set of rules that RMs must obey if they are to be logi-
cally consistent. Hubbard (2009) provided compelling arguments
for why, in most cases, the use of RMs results in unclear informa-
tion flow and suboptimal risk-management decisions.
This paper summarizes the known flaws of RMs, identifies
several previously undiscussed problems with RMs, and illus-
trates that these shortcomings can be seen in SPE papers that ei-
ther demonstrate or recommend the use of RMs. The paper is
organized as follows: The next section describes RMs. The fol-
lowing section discusses current practices and standards for risk
management, including an example. We then illustrate the flaws
and dangers resulting from the use of RMs before we provide a
very short overview of methods and references that discuss a con-
sistent approach to risk management. Finally, we provide a sum-
mary and a discussion.
An RM is a graphical presentation of the likelihood, or probabil-
ity, of an outcome and the consequence should that outcome
occur. Consequences are often defined in monetary terms. RMs,
as their name implies, tend to be focused on outcomes that could
result in loss, rather than gain. The purported objective of the RM
is to prioritize risks and risk-mitigation actions.
Within the context of RMs, “risk” is typically defined as con-
sequence multiplied by its probability, which yields the expected
downside consequence or the expected loss. Rather than refer to
expected downside consequence as “risk,” we will use the more
precise term expected loss (EL).
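The EL calculation is simply this product of probability and monetary consequence. A minimal sketch, with numbers chosen purely for illustration:

```python
# Expected loss (EL) = probability of the outcome x its monetary consequence.
# Illustrative numbers only; not taken from any specific assessment.
def expected_loss(probability, consequence):
    """Return the expected loss for a single downside outcome."""
    return probability * consequence

# A 10% chance of a USD 5-million loss:
el = expected_loss(0.10, 5_000_000)
print(el)  # 500000.0
```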
Pritchard et al. (2010) gave an example of using RMs to assess
the risk of a drilling hazard. This paper was one of three in a spe-
cial issue of World Oil devoted to advances in drilling. Pritchard
et al. (2010) note the example as a “typical industry risk assess-
ment matrix.” We have adopted this example as Fig. 1 and use it
to explain the flaws inherent in RMs.
As can be seen in Fig. 1, the consequences and probabilities in an RM are expressed as a range. For example, the first consequence category might be "<USD 100K," the second might be "USD 100–250K," and so on. The first probability range might be "≤1%," the second might be between 1 and 5%, and so forth.
A verbal label and a score are also assigned to each range. (Some
RMs use these instead of a quantitative range.) For example,
probabilities from 10 to 20% might be labeled as “seldom” and
assigned a score of 4. Probabilities greater than 40% might be
termed “likely” and given a score of 6. Consequences from USD
5 to 20 million might be termed “severe” and given a score of 5;
losses greater than USD 20 million might be labeled as “cata-
strophic” and given a score of 6.
It is interesting and concerning that such an RM would treat
losses of USD 50 billion (on the scale of BP’s losses stemming
from the Macondo blowout) or USD 20 million in the same way,
despite the three-orders-of-magnitude difference. Because there is
no scientific method of designing the ranges used in an RM, many
practitioners simply use the ranges specified in their company’s
best-practice documents. In fact, as we will show, differently
shaped regions can alter risk rankings.
The cells in RMs are generally colored green, yellow, and red.
Green means “acceptable,” yellow stands for “monitor, reduce if
possible,” and red is “unacceptable, mitigation required.” Previous
work has detailed the way in which the colors must be assigned if
one seeks consistency in the ranking of risks. Most of the SPE
papers we examined failed to assign colors in a logically consistent
way. For example, some of the cells designated as red were “less
risky” than some of the cells that were designated as yellow.
The problem context presented in Pritchard et al. (2010) is the
loss of fluid during drilling in a particular section of a well. There
Copyright © 2014 Society of Petroleum Engineers
This paper (SPE 166269) was accepted for presentation at the SPE Annual Technical Conference and Exhibition, New Orleans, 30 September–2 October 2013, and revised for publication. Original manuscript received for review 16 July 2013. Revised manuscript received for review 25 November 2013. Paper peer approved 11 December 2013.
Sometimes called probability-impact matrices (PIMs)
International Organization for Standardization (ISO), the world’s largest developer of vol-
untary international standards
American Petroleum Institute (API), which establishes standards for petroleum-industry
activities in the US
NORSOK—produces standards for petroleum-industry activities in Norway
The use of RMs to analyze and manage risks may be better than doing nothing. Indeed,
any approach that generates some discussion of the risks in a particular activity will be
56 April 2014 SPE Economics & Management
is a need to identify the possible outcomes and consequences aris-
ing from this event and to prioritize these risks. Three possible
downside outcomes were identified: severe losses of drilling fluid,
well-control issues, and blowout.
Once the possible outcomes
were defined, Pritchard et al. (2010) specified their probabilities
and the range of possible consequences, both of which are given
in Table 1.
Once the assessment of consequence and probability
was complete, the outcome was plotted in the RM (Fig. 1) to
determine whether the risk of an outcome fell into a green, yel-
low, or red region. Thus, well control and blowout fell in the yel-
low region, whereas severe losses fell in the red region. Hence, in
the parlance of RMs, the possibility of severe losses is “riskier”
than either well control or blowout and should therefore be priori-
tized over these other two concerns.
Fig. 1 indicates the score associated with each range. Pritchard
et al. (2010) assumed that cells along a diagonal with slope of –1
have the same risk. Thus, they considered blowout and well con-
trol to have the same degree of risk. Poedjono et al. (2009) and
Dethlefs and Chastain (2012) also documented the use of RMs in
a drilling context, but they used the more common practice of
multiplying the probability and consequence scores to obtain a
“risk score” for each outcome. Table 2 shows the results of apply-
ing this procedure to the Pritchard et al. (2010) example. There
appears to be no mathematical theory that would allow the multi-
plication of scores, a practice that seems to be an attempt to
mimic the calculation of expected loss, in which case monetary
consequence would be multiplied, or “risked,” by the likelihood
of its occurrence. On the basis of these results, actions to mitigate severe losses will be prioritized, whereas blowout will be addressed only after the other two possible outcomes have been mitigated.
Before concluding this section, we explain how and why we
slightly modified the RM used by Pritchard et al. (2010). First,
they used a decreasing score scale rather than the increasing scale
that is more commonly used. As we will show later, the choice
between an ascending or descending scale in our analysis can alter
the prioritization. Second, they did not use mutually exclusive cat-
egories. Specifically, they used categories of USD 1 to 5 million
and USD 2 to 20 million. This is clearly problematic for an out-
come of, for example, USD 3 million. Similarly, there was an
overlap in their probability ranges of 0 to 1% and 0 to 5%, which
means that the ranges were not mutually exclusive.
Current Practices and Standards
RMs are considered to be versatile enough to be used to analyze
and prioritize risks in many settings. A number of international
standards support the role of RMs in risk assessment, and many
companies consider RMs to be a “best practice.” In this section,
we illustrate a common RM-analysis approach. We then summarize how some central risk-management standards view the use of RMs.
Common Industry Practices. To use the RM for risk prioritization and communication, several steps must be carried out. Clare
and Armstrong (2006) presented a common risk-evaluation pro-
cess for the O&G industry, in which they used RMs as a risk-eval-
uation tool. The work process they used is shown in Fig. 2.
Step 1: Define Risk Criteria. This step determines the size of
the RM and its number of colors. Although there is no technical
reason for it, RMs are generally square. The most common size is five rows by five columns (i.e., a 5×5 matrix), but some companies use a 3×3 matrix and others use an 8×8 matrix. Some companies choose to include more colors than the standard red, yellow, and green in their RMs.
Step 2: Define Risk Events. This step identifies the risk events. For example, drilling a particular section of a hole is the event for which we are going to identify all the possible downside outcomes.
Step 3: Consequence Estimation and Probability Assessment.
This step estimates the consequence range of each outcome iden-
tified in Step 2 and assigns probabilities to each outcome. For
example, the outcome of severe losses is registered, and the
expected financial consequence is estimated to be from USD 1 to
5 million. The chance of this occurring is estimated to be 40%.
By use of the RM in Fig. 1, this equates to a probability score of 5
(“occasional”) and a consequence score of 4 (“major”).
Step 4: Risk Profile. This step positions each identified down-
side outcome in a cell in the RM.
Step 5: Rank and Prioritize. This step ranks and prioritizes
the outcomes according to their risk score. Most companies use a
risk-management policy in which all outcomes in the red area are
“unacceptable” and thus must be mitigated.
The results of Steps 2 through 5 are often collectively called a
“risk register,” and the information required is usually collected
in a joint meeting with the key stakeholders from the operating
company, service companies, partners, and others.
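Steps 2 through 5 can be sketched in a few lines of code. The bin edges below are taken from the RM in Fig. 1, and the multiplicative risk score mirrors the common industry practice described earlier; this is a sketch of the practice being critiqued, not a recommended method. The outcomes are those of Table 1, with each consequence taken as the midpoint of its range and the blowout consequence set, necessarily arbitrarily, to USD 50 million.

```python
# A minimal sketch of Steps 2-5, using the score ranges from Fig. 1.
import bisect

# Upper edges of the probability bins and consequence bins (score = index + 1).
PROB_EDGES = [0.01, 0.05, 0.10, 0.20, 0.40]
CONS_EDGES = [100e3, 250e3, 1e6, 5e6, 20e6]   # USD

def score(value, edges):
    """Map a value to a 1..6 score; values above the last edge score 6."""
    return bisect.bisect_left(edges, value) + 1

def risk_register(outcomes):
    """Score each outcome, multiply the scores, and rank (highest first)."""
    rows = [(name, score(p, PROB_EDGES) * score(c, CONS_EDGES))
            for name, p, c in outcomes]
    return sorted(rows, key=lambda r: -r[1])

outcomes = [("severe losses", 0.40, 3e6),    # midpoint of USD 1-5MM
            ("well control", 0.10, 12.5e6),  # midpoint of USD 5-20MM
            ("blowout", 0.05, 50e6)]         # some value above USD 20MM
print(risk_register(outcomes))
# [('severe losses', 20), ('well control', 15), ('blowout', 12)]
```

The printed scores and ranking reproduce Table 2.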
Probability       P-Rating   P-Index
> 40%             6          Likely
20% < p ≤ 40%     5          Occasional
10% < p ≤ 20%     4          Seldom
5% < p ≤ 10%      3          Unlikely
1% < p ≤ 5%       2          Remote
≤ 1%              1          Rare

Consequence Rating   1            2              3              4           5            6
Consequence Index    Incidental   Minor          Moderate       Major       Severe       Catastrophic
Consequence Cost     ≤ USD 100K   USD 100–250K   USD 250K–1MM   USD 1–5MM   USD 5–20MM   > USD 20MM

(In the figure itself, the outcomes Severe Losses and Well Control appear as labeled points in the matrix.)

Fig. 1—RMs modified from Pritchard et al. (2010).
Table 1—Event: Fluid losses occur in hole section (12 to 14 in.)

Outcome         Consequence (USD Million)   Probability
Severe Losses   1 to 5                      40%
Well Control    5 to 20                     10%
Blowout         >20                         5%
Table 2—Risk scores and ranking for the case example.

Outcome         Risk Score   Rank
Severe Losses   5 × 4 = 20   1
Well Control    3 × 5 = 15   2
Blowout         2 × 6 = 12   3
The outcomes are assumed to be independent, which might not be correct. For example,
a blowout implies loss of well control.
The probabilities in this case example are taken from Pritchard et al. (2010), and the con-
sequences come from reconversion of the consequence scores into their definition as pre-
sented in Pritchard et al. (2010).
The probabilities need not sum to unity because the events are assumed to be mutually
exclusive, but not collectively exhaustive.
Standards. Among the standards that are commonly used in the
O&G industry are API, NORSOK, and ISO. All of these standards
recommend RMs as an element of risk management. This section
summarizes how each of these standards supports RMs.
API. API RP 581 (2008) recommends RMs for its risk-based-inspection (RBI) technology. RBI is a method to optimize inspection planning by generating a risk ranking for equipment and processes and, thus, a prioritization for inspecting the right equipment at the right time. API RP 581 specifies how to
calculate the likelihoods and consequences to be used in the RMs.
The specification is a function of the equipment that is being ana-
lyzed. The probability and consequence of a failure are calculated
by use of several factors. API RP 581 asserts that “Presenting the
results in a risk matrix is an effective way of showing the distribu-
tion of risks for different components in a process unit without nu-
merical values.”
NORSOK. The NORSOK (2002) standards were developed by
the Norwegian petroleum industry to “ensure adequate safety, value
adding and cost effectiveness for petroleum industry developments
and operations. Furthermore, NORSOK standards are as far as pos-
sible intended to replace oil company specifications and serve as
references in the authority’s regulations.” NORSOK recommends
the use of RMs for most of their risk-analysis illustrations. The
RMs used by NORSOK are less rigid than those of API RBI
because the NORSOK RMs can be customized for many problem
contexts (the RM template is not standardized). NORSOK S-012,
an HSE document related to the construction of petroleum infra-
structure, uses an RM that has three consequence axes—occupa-
tional injury, environment, and material/production cost—with a
single probability axis for all three consequence axes.
ISO. ISO standards ISO 31000 (2009) and ISO/IEC 31010
(2009) influence risk-management practices not only in the O&G
industry but in many others. In ISO 31000, the RM is known as a
probability/consequence matrix. In ISO/IEC 31010, there is a ta-
ble that summarizes the applicability of tools used for risk assess-
ment. ISO claims that the RM is a “strongly applicable” tool for
risk identification and risk analysis and is “applicable” for risk
evaluation. As with the NORSOK standard, ISO does not stand-
ardize the number of colors, the coloring scheme (risk-acceptance
determination), or the size of range for each category. ISO praises
RMs for their convenience, ease of use, and quick results. How-
ever, ISO also lists limitations of RMs, including some of their
inconsistencies, to which we now turn.
Deficiencies of RMs
Several flaws are inherent to RMs. Some of them can be corrected,
whereas others seem more problematic. For example, we will show that the ranking produced by an RM depends upon arbitrary choices regarding its design, such as whether one chooses to use an increasing or decreasing scale for the scores. As we discuss these flaws,
we also survey the SPE literature to identify the extent to which
these flaws are being made in practical applications.
To locate SPE papers that address or demonstrate the use of
RMs, we searched the OnePetro database using the terms “risk
matrix” and “risk matrices.” This returned 527 papers. Then, we
removed 120 papers published before the year 2000, to make sure
our study is focused upon current practice. We next reviewed the
remaining 407 papers and selected those that promote the use of RMs as a "best practice" and actually demonstrate RMs in the paper, leaving 68 papers. We further eliminated papers that presented the same example. In total, we considered a set of 30
papers covering a variety of practice areas (e.g., HSE, hazard
analysis, and inspection). We believe that this sampling of papers
presents the current RM practice in the O&G industry. We did not
find any SPE papers documenting the known pitfalls of the use of
RMs. The 30 papers we consider in this paper are given in Appen-
dix A.
Known Deficiencies of RMs
Several deficiencies of RMs have been identified by other authors.
Risk-Acceptance Inconsistency. RMs are used to identify, rank,
and prioritize possible outcomes so that scarce resources can be
directed toward the most-beneficial areas. Thus, RMs must reli-
ably categorize the possible outcomes into green, yellow, and red
regions. Cox (2008) suggested we should conform to three axioms
and one rule when designing RMs to ensure that the EL in the
green region is consistently smaller than the EL in the red region.
Cox (2008) also clarifies that the main purpose of the yellow
region is to separate the green region and red region in the RMs,
not to categorize the outcomes. He argues that the RM is inconsis-
tent if the EL in the yellow region can be larger than in any of the
red cells or smaller than in any of the green cells. Nevertheless,
the practice in O&G is to use the yellow region to denote an out-
come with a medium risk. Every SPE paper we reviewed implements this practice and also violates at least one of the axioms or the rule proposed by Cox (2008), leading to inconsistencies in the resulting risk ranking.
Fig. 3 shows an example RM with many outcomes. This
example shows that there are two groups of outcomes. The first
group is the outcome with medium-high probability and medium-
high consequence (e.g., severe losses, well-control issues) and the
second group is the outcome with the low probability but very
high consequence (e.g., blowout). In Fig. 3, the first group of out-
comes is illustrated in the red cells whereas the second group is in
the yellow cell. The numbers shown in some of the cells represent
the probability, consequence, and EL, respectively, where EL is
calculated as probability multiplied by consequence. This exam-
ple shows the inconsistency between EL and color practice in
RMs where all outcomes in the red cells have a lower EL com-
pared with the outcome in the yellow cell. Assuming that we wish
to rank outcomes on the basis of expected loss, we would priori-
tize the outcome in the yellow cell compared with the outcomes
in the red cells, which is the opposite of the ranking provided by
the color regions in the RM. Clearly, the use of the RM would in
this case lead us to focus our risk-mitigation actions on the out-
come that does not have the highest EL. This type of structure is
evident in eight of the papers we reviewed.
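The inconsistency can be checked directly by computing the ELs of the plotted outcomes. The (probability, consequence) pairs below are those shown in Fig. 3, and the color assignments follow the text:

```python
# EL = probability x consequence (consequence in USD million), per Fig. 3.
red_cells = [(0.45, 1), (0.45, 3), (0.45, 15), (0.45, 25),
             (0.25, 3), (0.25, 15), (0.25, 25),
             (0.15, 15), (0.15, 25), (0.10, 25)]
yellow_cell = (0.05, 250)   # low probability, very high consequence

def el(p, c):
    """Expected loss of a single outcome."""
    return p * c

yellow_el = el(*yellow_cell)                 # 12.5
red_els = [el(p, c) for p, c in red_cells]   # largest is 0.45 x 25 = 11.25

# Every "unacceptable" red cell has a smaller EL than the "monitor" yellow cell.
print(yellow_el, max(red_els))
assert all(r < yellow_el for r in red_els)
```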
(Figure: a flow of five steps. Step 1, define risk criteria; Step 2, define risk events; Step 3a, probability assessment, and Step 3b, consequence estimation; Step 4, plot in risk matrix, the "risk profile"; Step 5, risk prioritization and mitigation plan.)

Fig. 2—Common workflow for analyzing risks by use of RMs.
Range Compression. Cox (2008) described range compression
in RMs as a flaw that “assigns identical ratings to quantitatively
very different risk.” Hubbard (2009) also focused extensively on
this problem.
Range compression is unavoidable when consequences and
probabilities are converted into scores. The distance between risks
in the RM using scores (mimicking expected-loss calculation)
does not reflect the actual distance between risks (specifically, the
difference in their expected loss).
In our case example shown in Fig. 1, blowout and well control
are considered to have the same risk (both are yellow). However,
this occurs only because of the ranges that were used and the arbi-
trary decision to have the “catastrophic” category include all con-
sequences greater than USD 20 million. Fig. 4 more accurately
represents these outcomes. A blowout could be many orders of
magnitude worse than a loss of well control. Yet, the RM does not
emphasize this in a way that we think is likely to lead to high-
quality risk-mitigation actions. To the contrary, the sense that we
get from Fig. 1 is that a blowout is not significantly different (if
any different) from a loss in well control—they are both “yellow”
risks. The use of the scoring mechanism embedded in RMs com-
presses the range of outcomes and, thus, miscommunicates the
relative magnitude of both consequences and probabilities. The
failure of the RM to convey this distinction seems to undermine
its commonly stated benefit of improved communication. This
example demonstrates the range compression inherent in RMs,
which necessarily affected all the surveyed SPE papers. The next
section will introduce the “lie factor” (LF) that we use to quantify
the degree of range compression.
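A short sketch makes the compression concrete, assuming the consequence bins of Fig. 1:

```python
# Range compression: the consequence scores of Fig. 1 assign the same
# "catastrophic" rating (6) to very different losses.
import bisect

CONS_EDGES = [100e3, 250e3, 1e6, 5e6, 20e6]   # upper bin edges, USD

def consequence_score(loss_usd):
    """Map a monetary loss to the 1..6 consequence score of Fig. 1."""
    return bisect.bisect_left(CONS_EDGES, loss_usd) + 1

# A USD 25-million loss and a USD 50-billion loss (roughly the scale of the
# Macondo losses, three orders of magnitude larger) get the identical score:
print(consequence_score(25e6), consequence_score(50e9))  # 6 6
```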
Centering Bias. Centering bias refers to the tendency of people
to avoid extreme values or statements when presented with a
choice. For example, if a score range is from 1 to 5, most people
will select a value from 2 to 4. Hubbard (2009) analyzed this in
the case of information-technology projects. He found that 75%
of the chosen scores were either 3 or 4. This further compacts the
scale of RMs, exacerbating range compression. Smith et al.
(2009) came to the same conclusions from investigating risk man-
agement in the airline industry.
Is this bias also affecting risk-management decisions in the
O&G industry? Unfortunately, there is no open-source O&G
database that can be used to address this question. However, six
of the reviewed SPE papers presented their data in sufficient
detail to investigate whether the centering bias seems to be occur-
ring. Each of the six papers uses an RM with more than 15 out-
comes. Fig. 5 shows the percentage of the outcomes that fell into
the middle consequence and probability scores. For example, paper SPE 142854 used a 5×5 RM; hence, the probability ratings ranged from 1 to 5. Paper SPE 142854 has 24 outcomes, out of which 18 have a probability rating of 2, 3, or 4 (which we will denote as "centered"), and the remaining six outcomes have a probability rating of 5. Hence, 75% of the probability scores were centered.
For the six papers combined, 83% of the probability scores
were centered, which confirms Hubbard (2009). However, only
52% of the consequence scores were centered, which is less than
that found in Hubbard (2009). A closer inspection shows that in
four out of the six papers, 90% of either probability or conse-
quence scores were centered.
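For a given paper, the centering check is a one-line calculation. The breakdown of the 18 centered ratings across the scores 2, 3, and 4 below is illustrative, since only the totals are reported:

```python
# Fraction of scores that are "centered", i.e., strictly inside the endpoints
# of the scale (2, 3, or 4 on a 1..5 scale).
def centered_fraction(scores, lo=1, hi=5):
    """Fraction of scores strictly between the scale endpoints lo and hi."""
    centered = [s for s in scores if lo < s < hi]
    return len(centered) / len(scores)

# Per the text: 18 of SPE 142854's 24 probability ratings were 2, 3, or 4,
# and the remaining six were 5. (The split among 2, 3, 4 is made up here.)
ratings = [2] * 6 + [3] * 6 + [4] * 6 + [5] * 6
print(centered_fraction(ratings))  # 0.75
```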
Each occupied cell is labeled (probability, consequence in USD million, EL in USD million):

Probability       P-Rating   P-Index      Occupied cells
> 40%             6          Likely       (45%, 1, 0.45)   (45%, 3, 1.35)   (45%, 15, 6.75)   (45%, 25, 11.25)
20% < p ≤ 40%     5          Occasional   (25%, 3, 0.75)   (25%, 15, 3.75)   (25%, 25, 6.25)
10% < p ≤ 20%     4          Seldom       (15%, 15, 2.25)   (15%, 25, 3.75)
5% < p ≤ 10%      3          Unlikely     (10%, 25, 2.5)
1% < p ≤ 5%       2          Remote       (5%, 250, 12.5)
≤ 1%              1          Rare

Consequence Rating   1            2              3              4           5            6
Consequence Index    Incidental   Minor          Moderate       Major       Severe       Catastrophic
Consequence Cost     ≤ USD 100K   USD 100–250K   USD 250K–1MM   USD 1–5MM   USD 5–20MM   > USD 20MM

Fig. 3—Risk acceptance inconsistency in RMs.
(Figure: each outcome is plotted against a consequence axis in USD millions, with ticks at 20, 40, 60, 80, and 100; Well Control and Severe Losses are labeled.)

Fig. 4—Plot of the probability and consequence values of the outcomes in the case example.
(Figure: a bar chart showing, for papers SPE 142824, SPE 146845, SPE 74080, OTC 18912, SPE 162500, and SPE 73897, the percentage of centered probability and consequence scores, with the average of centered probability scores, the average of centered consequence scores, and the 75% level documented in Hubbard (2009) marked.)

Fig. 5—Centering-bias evidence in SPE papers.
Category-Definition Bias. Budescu et al. (2009) concluded that
providing guidelines on probability values and phrases is not suf-
ficient to obtain quality probability assessments. For example,
when guidelines specified that “very likely” should indicate a
probability greater than 0.9, study participants still assigned prob-
abilities in the 0.43 to 0.99 range when they encountered the
phrase “very likely.” They argued that this creates the “illusion of
communication” rather than real communication. If a specific def-
inition of scores or categories is not effective in helping experts to
be consistent in their communication, then the use of only qualita-
tive definitions would likely result in even more confusion. Wind-
schitl and Weber (1999) showed that the interpretation of phrases
conveying a probability depends on context and personal prefer-
ences (e.g., perception of the consequence value). Although most
research on this topic has focused on probability-related words,
consequence-related words such as “severe,” “major,” or “catastrophic” would also seem likely to foster confusion and miscommunication.
We reviewed the scoring method used in each of the 30 SPE papers.
The papers were then classified into qualitative, semiqualitative,
and quantitative categories.
Most of the scores (97%) were qualita-
tive or semiqualitative. However, these papers included no discus-
sion indicating that the authors are aware of category-definition
bias or any suggestions for how it might be counteracted.
Category-definition bias is also clearly seen between papers.
For example, paper SPE 142854 considered “improbable” as
“virtually improbable and unrealistic.” In contrast, paper SPE
158114 defined “improbable” as “would require a rare combina-
tion of factors to cause an incident.” These definitions clearly
have different meanings, which will lead to inconsistent risk
assessments. This bias is also seen in the quantitative RMs. Paper
SPE 127254 categorized “frequent” as “more than 1 occurrence
per year,” but paper SPE 162500 categorized “frequent” as “more
than 1 occurrence in 10 years.” This clearly shows inconsistency
between members of the same industry. Table 3 summarizes the
variations in definitions within the same indices in some of the
SPE papers surveyed.
Given these gross inconsistencies, how can we accept the
claim that RMs improve communication? As we show here, RMs
that are actually being used in the industry are likely to foster mis-
communication and misunderstanding, rather than improve com-
munication. This miscommunication will result in misallocation
of resources and the acceptance of suboptimal levels of risk.
Identification of Previously Unrecognized Deficiencies
This section discusses three RM flaws that had not been previously
identified. We demonstrate that these flaws cannot be overcome
and that RMs will likely produce arbitrary recommendations.
Ranking is Arbitrary. Ranking Reversal. Lacking standards for
how to use scores in RMs, two common practices have evolved:
ascending scores or descending scores. The example in Fig. 1
uses ascending scores, in which a higher score indicates a higher
probability or more serious consequence. Using descending
scores, a lower score indicates a higher probability or more seri-
ous consequence. These practices are contrasted in Fig. 6.
A glance at Fig. 6 might give the impression that ascending or
descending scores would produce the same risk ranking of out-
comes. However, Table 4 shows for each ordering the resulting
risk scores and ranking of the outcomes shown in Fig. 6. With the
use of ascending scores, severe losses will be prioritized for risk
mitigation. However, with the use of the descending scores, blow-
out will be prioritized for risk mitigation.
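The reversal can be verified directly. The sketch below assumes, as in Fig. 6, that the descending score on a 1-to-6 scale is simply 7 minus the ascending score, and it uses the ascending scores from the case example:

```python
# Ranking reversal between the two scoring systems of Fig. 6.
# Ascending: 6 is the worst rating; descending: 1 is the worst rating.
asc = {"severe losses": (5, 4), "well control": (3, 5), "blowout": (2, 6)}

def rank(scores, descending=False):
    """Multiply the two scores and order outcomes from most to least risky."""
    if descending:
        scores = {k: (7 - p, 7 - c) for k, (p, c) in scores.items()}
        # with descending scales, a SMALLER product signals a bigger risk
        return sorted(scores, key=lambda k: scores[k][0] * scores[k][1])
    return sorted(scores, key=lambda k: -scores[k][0] * scores[k][1])

print(rank(asc))                   # ['severe losses', 'well control', 'blowout']
print(rank(asc, descending=True))  # ['blowout', 'severe losses', 'well control']
```

The same outcomes, scored the same way up to an arbitrary choice of scale direction, yield opposite priorities.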
The typical industry RM given in Pritchard et al. (2010) used
descending ordering. However, both ascending and descending
scoring systems have been cited in the SPE literature and there is
no scientific basis for either method. In the 30 SPE papers sur-
veyed, five use the descending scoring system, and the rest use
the ascending scoring system. This behavior demonstrates that
Table 3—Variations in definitions within the same indices in some of the SPE papers surveyed.

Paper        Index      Index Definition                                                                                         Quantitative Measure
SPE 146845   Frequent   Several times a year in one location                                                                     Occurrence > 1/year
SPE 127254   Frequent   Expected to occur several times during lifespan of a unit                                                Occurrence > 1/year
SPE 162500   Frequent   Happens several times per year in same location or operation                                             Occurrence > 0.1/year
SPE 123457   Frequent   Has occurred in the organization in the last 12 months
SPE 61149    Frequent   Possibility of repeated incidents
SPE 146845   Probable   Several times per year in a company                                                                      1/year > Occurrence > 0.1/year
SPE 127254   Probable   Expected to occur more than once during lifespan of a unit                                               1/year > Occurrence > 0.03/year
SPE 162500   Probable   Happens several times per year in specific group company                                                 0.1/year > Occurrence > 0.01/year
SPE 123457   Probable   Has occurred in the organization in the last 5 years or has occurred in the industry in the last 2 years
SPE 158115   Probable   Not certain, but additional factor(s) likely result in incident
SPE 61149    Probable   Possibility of isolated incident
Probability       Descending Rating   Ascending Rating   P-Index
> 40%             1                   6                  Likely
20% < p ≤ 40%     2                   5                  Occasional
10% < p ≤ 20%     3                   4                  Seldom
5% < p ≤ 10%      4                   3                  Unlikely
1% < p ≤ 5%       5                   2                  Remote
≤ 1%              6                   1                  Rare

Consequence Rating (Ascending)    1            2              3              4           5            6
Consequence Rating (Descending)   6            5              4              3           2            1
Consequence Index                 Incidental   Minor          Moderate       Major       Severe       Catastrophic
Consequence Cost                  ≤ USD 100K   USD 100–250K   USD 250K–1MM   USD 1–5MM   USD 5–20MM   > USD 20MM

(In the figure itself, Severe Losses and Well Control appear as labeled points in the matrix.)

Fig. 6—Two different scoring systems for an RM.
Qualitative refers to RMs in which none of the definitions of probability and consequence categories provide numerical values. Semiqualitative refers to RMs in which some of the definitions of probability and consequence categories provide numerical values. Quantitative refers to RMs in which definitions of all probability and consequence categories provide numerical values.
60 April 2014 SPE Economics & Management
RM rankings are arbitrary; whether something is ranked first or
last, for example, depends on whether or not one creates an
increasing or a decreasing scale. How can a methodology that
exhibits such a gross deficiency be considered an industry best
practice? Would such a method stand up to scrutiny in a court of
law? Imagine an engineer defending their risk-management plan
by noting it was developed by use of an RM, when the lawyer
points out that simply changing the scale would have resulted in a
different plan. What other engineering best practices produce dif-
ferent designs simply by changing the scale or the units?
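The reversal is easy to reproduce. The following sketch scores the three outcomes against the paper's Fig. 6 categories and Table 7 values under both conventions; the function and variable names are ours:

```python
# Rank reversal under ascending (1-6) vs. descending (6-1) scores.
# Category edges are from Fig. 6; outcome values are from Table 7.

PROB_EDGES = [0.01, 0.05, 0.10, 0.20, 0.40]  # upper edges of probability categories 1-5
CONS_EDGES = [0.1, 0.25, 1.0, 5.0, 20.0]     # upper edges of consequence categories 1-5, USD million

OUTCOMES = {  # name: (consequence in USD million, probability)
    "Severe Losses": (3.0, 0.40),
    "Well Control": (12.5, 0.10),
    "Blowout": (50.0, 0.05),
}

def category(value, edges):
    """1-based category index; values above the last edge fall in the top category."""
    for i, edge in enumerate(edges):
        if value <= edge:
            return i + 1
    return len(edges) + 1

def rank(scoring):
    """Risk score (probability score x consequence score) and resulting priority order."""
    scores = {}
    for name, (cons, prob) in OUTCOMES.items():
        pc = category(prob, PROB_EDGES)
        cc = category(cons, CONS_EDGES)
        if scoring == "ascending":      # scores 1..6, larger product = higher risk
            scores[name] = pc * cc
        else:                           # scores 6..1, smaller product = higher risk
            scores[name] = (7 - pc) * (7 - cc)
    priority = sorted(scores, key=scores.get, reverse=(scoring == "ascending"))
    return scores, priority

asc_scores, asc_priority = rank("ascending")     # severe losses ranked first
desc_scores, desc_priority = rank("descending")  # blowout ranked first
```

With ascending scores the products are 20, 15, and 12, putting severe losses first; with descending scores they are 6, 8, and 5, putting blowout first. Nothing about the risks themselves changed, only the labeling of the axes.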
Instability Because of Categorization. RMs categorize conse-
quence and probability values, but there are no well-established
rules for how to do the categorization. Morgan et al. (2000) rec-
ommended testing different categories because no single category
breakdown is suitable for every consequence variable and proba-
bility within a given situation.
Following this recommendation, we tried to find the best categories for the RM in Fig. 1 by examining the sensitivity of the risk ranking to changes in category definitions. To ease this analysis, we introduced a multiplier n that determines the range for each category. We retained the ranges of the first category for both consequence and probability. For the categories that are not at the endpoints of the axes, n determines the start value and end value of the range. For example, with n = 2, the second probability category in Fig. 1 has a value range from 0.01 to 0.02 (0.01 to 0.01 × n). For the category at the end of the axis, n affects only the start value of the range, which must not exceed unity (n = 3.15) for the probability axis and must not exceed USD 20 million (n = 3.6) for the consequence axis. Tables 5 and 6 show the probability and consequence ranges, respectively, for n = 2 and n = 3.
We vary the multiplier and observe the effect on risk ranking for both ascending and descending scores. While varying the multiplier for one axis, the ranges on the other axis are kept constant at their default values (Fig. 3). Because Table 1 gives the consequence values as ranges, we use the midpoint value within the range for each outcome, as shown in Table 7. Given a single consequence value for each outcome, the categorization instability analysis can be performed. Figs. 7 and 8 show how the risk ranking is affected by changes in n.
Figs. 7 and 8 indicate that except where consequence is in
ascending order, the risk prioritization is a function of n. This is
problematic because the resulting risk ranking is unstable in the
sense that a small change in the choice of ranges, which is again
Risk scores and rankings under the two scoring systems:

Ascending scores:
Outcome         Risk Score    Rank
Severe Losses   5 × 4 = 20    1
Well Control    3 × 5 = 15    2
Blowout         2 × 6 = 12    3

Descending scores:
Outcome         Risk Score    Rank
Severe Losses   2 × 3 = 6     2
Well Control    4 × 2 = 8     3
Blowout         5 × 1 = 5     1
Table 5—Probability ranges as a function of the multiplier n.

Score   Equation                        n = 2              n = 3
6       0.01 × n^4 < p ≤ 1              0.16 < p ≤ 1       0.81 < p ≤ 1
5       0.01 × n^3 < p ≤ 0.01 × n^4     0.08 < p ≤ 0.16    0.27 < p ≤ 0.81
4       0.01 × n^2 < p ≤ 0.01 × n^3     0.04 < p ≤ 0.08    0.09 < p ≤ 0.27
3       0.01 × n < p ≤ 0.01 × n^2       0.02 < p ≤ 0.04    0.03 < p ≤ 0.09
2       0.01 < p ≤ 0.01 × n             0.01 < p ≤ 0.02    0.01 < p ≤ 0.03
1       p ≤ 0.01                        p ≤ 0.01           p ≤ 0.01
Table 6—Consequence ranges (USD million) as a function of the multiplier n.

Score   Equation                       n = 2              n = 3
6       0.1 × n^4 < C                  1.6 < C            8.1 < C
5       0.1 × n^3 < C ≤ 0.1 × n^4      0.8 < C ≤ 1.6      2.7 < C ≤ 8.1
4       0.1 × n^2 < C ≤ 0.1 × n^3      0.4 < C ≤ 0.8      0.9 < C ≤ 2.7
3       0.1 × n < C ≤ 0.1 × n^2        0.2 < C ≤ 0.4      0.3 < C ≤ 0.9
2       0.1 < C ≤ 0.1 × n              0.1 < C ≤ 0.2      0.1 < C ≤ 0.3
1       C ≤ 0.1                        C ≤ 0.1            C ≤ 0.1
Table 7—Midpoint consequence and probability values for each outcome.

Outcome         Consequence (USD Million)   Probability
Severe Losses   3                           40%
Well Control    12.5                        10%
Blowout         50                          5%

For the practicality of the analysis, we assume that for the blowout consequence, the ratio of the range's high value to low value is the same as for Category 5 (high value = 4 × low value). Thus, the range is USD 20 to 80 million, and the middle value is USD 50 million. No matter which value is chosen to represent the high-end consequence, the instability remains and is equally severe.
arbitrary, can lead to a large change in risk prioritization. Thus, we
again see that the guidance provided by RMs is arbitrary, being
determined by arbitrary design choices that have no scientific basis.
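The instability can be reproduced with a short sketch. The probability-category edges grow geometrically as 0.01 × n^k, matching Table 5; the default consequence categories are assumed to be the Fig. 6 ranges (our assumption, as a stand-in for the paper's Fig. 3 defaults), and ascending 1-6 scores are used:

```python
# Sensitivity of the top-ranked risk to the probability-category multiplier n.
# Probability edges grow geometrically from 0.01 (as in Table 5); consequence
# categories are held at assumed defaults (the Fig. 6 ranges, USD million).

CONS_EDGES = [0.1, 0.25, 1.0, 5.0, 20.0]
OUTCOMES = {
    "Severe Losses": (3.0, 0.40),   # (consequence USD million, probability)
    "Well Control": (12.5, 0.10),
    "Blowout": (50.0, 0.05),
}

def category(value, edges):
    """1-based category index; values above the last edge fall in the top category."""
    for i, edge in enumerate(edges):
        if value <= edge:
            return i + 1
    return len(edges) + 1

def top_risk(n):
    """Highest-ranked outcome when the probability-category edges are 0.01 * n**k."""
    prob_edges = [0.01 * n**k for k in range(5)]
    def score(name):
        cons, prob = OUTCOMES[name]
        return category(prob, prob_edges) * category(cons, CONS_EDGES)
    return max(OUTCOMES, key=score)
```

With n = 2 the top-ranked risk is well control; moving n to 2.5 flips it to severe losses, with no change in the underlying assessments.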
For each SPE paper that used at least one quantitative scale, Table 8 shows the percentage of the domain covered by Categories 1 through 4, with Category 5 excluded because it was often unbounded. The left-hand table is for the frequency, and the right-hand table is for the consequence. For example, the probability categories for paper SPE 142854, in ascending order, cover 0.001, 0.1, 0.9, and 99% of the domain. The consequence categories for paper SPE 142854, in ascending order, cover 0.1, 0.9, 9, and 90% of the domain.
That categories cover such different fractions of the total range is clearly a significant distortion. In addition, the size of the categories varies widely across papers. For example, in the papers we surveyed, Category 3 on the likelihood axis spans 0.9 to 18% of the total range.
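These coverage numbers follow directly from the category edges. As an illustration, the edges below are our reconstruction from the percentages reported for SPE 142854; they are not taken from that paper directly:

```python
# Percent of an axis domain spanned by each category, given the category
# upper bounds.

def coverage(edges):
    """edges: upper bounds of categories 1..k; returns percent of domain per category."""
    bounds = [0.0] + list(edges)
    total = bounds[-1] - bounds[0]
    return [100 * (hi - lo) / total for lo, hi in zip(bounds, bounds[1:])]

# Probability edges consistent with the reported 0.001 / 0.1 / 0.9 / 99 percent split:
pct = coverage([1e-5, 1e-3, 1e-2, 1.0])   # approx [0.001, 0.099, 0.9, 99.0]
```

A geometric scale such as this necessarily assigns almost the whole axis to the top category, which is what drives the range compression the surveyed papers exhibit.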
Relative Distance Is Distorted. Lie Factor. According to Table
7, the consequence of a blowout is four times that of well control
(50/12.5). However, the ratio of their scores in the RM is only 1.2
(6/5). The difference in how risk is portrayed in the RM vs. the
expected values can be quantified by use of the LF.
The LF was coined by Tufte and Graves-Morris (1983) to
describe graphical representations of data that deviate from the
principle that “the representation of numbers, as physically meas-
ured on the surface of the graphic itself, should be directly propor-
tional to the quantities represented” (Tufte and Graves-Morris
1983). This maxim seems intuitive, but it is difficult to apply to
Fig. 7—Sensitivity of risk prioritization to probability categorization. (Figure omitted: two panels plot the risk ranking of Severe Losses, Well Control, and Blowout against the multiplier n, 2 ≤ n ≤ 3, for the descending 6-5-4-3-2-1 scale and the ascending 1-2-3-4-5-6 scale; the rank order changes at several values of n, e.g., n = 2.15, 2.20, 2.25, 2.50, and 2.55.)
Fig. 8—Sensitivity of risk prioritization to consequence categorization. (Figure omitted: two panels plot the risk ranking of the same three outcomes against the multiplier n, 2 ≤ n ≤ 3.6, for the two scales; the rank order changes at, e.g., n = 3.0, 3.2, and 3.4.)
data that follow an exponential relationship, for example. Such
cases often use log plots, in which the same transformation is
applied to all the data. However, RMs can distort the information
they convey at different rates within the same graphic.
Slightly modifying the Tufte and Graves-Morris (1983) definition, we define the LF as

LF = (|V_n − V_m| / V_m) / (|S_n − S_m| / S_m), with n > m,

where V and S denote, respectively, the underlying value (probability or consequence) and the score of a category. The LF is thus the relative change in value over the m and n categories divided by the relative change in score over the m and n categories. In calculating the LF, we use the midpoint of the value and probability ranges within each category.
From Fig. 1, the score of the consequence axis at m = 3 is S = 3 and at n = 4 is S = 4. By use of the midpoint value for each category, LF = (|3,000 − 625| / 625) / (|4 − 3| / 3) = 11.4. The interpretation of this is that the increase in the underlying consequence
values is 11.4 times larger than the increase in the score.
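The arithmetic is captured by a one-line function (a sketch; `lie_factor` is our name, and the midpoints are in USD thousand):

```python
# Lie factor: relative change in underlying value divided by relative
# change in score, between categories m and n (n > m).

def lie_factor(v_m, v_n, s_m, s_n):
    return (abs(v_n - v_m) / v_m) / (abs(s_n - s_m) / s_m)

# Fig. 1 consequence axis: category 3 (USD 250K-1MM) has midpoint 625 and
# score 3; category 4 (USD 1-5MM) has midpoint 3,000 and score 4.
lf = lie_factor(625, 3000, 3, 4)   # 11.4
```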
Nine of the 30 papers reviewed included enough quantitative
information for the LF to be calculated. We define the LF for an
RM as the average of the LFs for all categories. An alternative
definition might be the maximum LF for any category. Table 9
shows the result of our average LF calculation. All reviewed RMs
use infinity as the upper bound on the consequence axes. This
gives infinite LFs. However, in summarizing the LF for the
reviewed papers in Table 9, we have chosen to use the second
largest category as the upper limit for the consequences. This
obviously understates the actual LFs in the reviewed papers.
All nine papers have an LF greater than unity along at least
one axis. Paper SPE 142854, for example, has an LF of 96 on the
consequence axis and 5,935 on the probability axis.
Many proponents of RMs extol their visual appeal and result-
ing alignment and clarity in understanding and communication.
However, the commonly used scoring system distorts the scales
and removes the proportionality in the input data. How can it be
argued that a method that distorts the information underlying an
engineering decision in nonuniform and uncontrolled ways is an
industry best practice? The burden of proof is squarely on the
shoulders of those who would recommend the use of such meth-
ods to prove that these obvious inconsistencies do not impair deci-
sion making, much less improve it, as is often claimed.
A Consistent Approach to Risk Management
The motivation for writing this paper was to point out the gross
inconsistencies and arbitrariness embedded in RMs. Given these
problems, it seems clear to us that RMs should not be used for deci-
sions of any consequence. Our pointing out that RMs produce arbi-
trary rankings does not require us to provide another method in
their place, any more than we would be required to suggest new
medical treatments to argue against the once popular practice of
bloodletting. The arbitrariness of RMs is not conditional on whether
or not other alternatives exist. Nevertheless, the question is bound to
be raised, and thus this section provides a brief set of references to
what we consider to be a consistent approach to risk management.
Risk management is fundamentally about decision-making. The
objective of the risk-management process is to identify, assess,
rank, and inform management decisions to mitigate risks. Risks can
only be managed through our decisions, and the risk-management
objectives are best achieved with processes and tools that support
high-quality decision-making in complex and uncertain situations.
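As a minimal illustration (ours, and far short of the full decision analysis these fields offer), ranking the Table 7 outcomes by expected loss works directly with the underlying quantities and requires no categories, scores, or scale choices:

```python
# Expected loss = probability x consequence (USD million), Table 7 values.
OUTCOMES = {
    "Severe Losses": (3.0, 0.40),   # (consequence USD million, probability)
    "Well Control": (12.5, 0.10),
    "Blowout": (50.0, 0.05),
}

expected_loss = {name: cons * prob for name, (cons, prob) in OUTCOMES.items()}
ranking = sorted(expected_loss, key=expected_loss.get, reverse=True)
# Blowout (2.5) ranks ahead of Well Control (1.25) and Severe Losses (1.2),
# an ordering that differs from both RM scoring conventions.
```

Because the inputs are kept in their natural units, the ranking cannot be reversed by relabeling an axis or rescaling a category.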
For centuries people have speculated on how to improve deci-
sion making, and a formal approach to decision and risk analysis
can be traced through the works of Bayes and Price (1763), Lap-
lace (1902), Ramsey (1931), De Finetti (1931, 1937), von Neu-
mann and Morgenstern (1944), Bernoulli (1954), and Savage
(1954). Over the last several decades, important supporting fields
Table 8—Percentage of the domain covered by Categories 1 through 4.

Frequency:
Paper Number   Rating   Percentage of Range
SPE 127254     1        0.95%
SPE 127254     2        0.02%
SPE 127254     3        2.36%
SPE 127254     4        96.67%
SPE 142854     1        0.001%
SPE 142854     2        0.10%
SPE 142854     3        0.90%
SPE 142854     4        99.00%
SPE 98852      1        0.04%
SPE 98852      2        1.96%
SPE 98852      3        18.00%
SPE 98852      4        80.00%
SPE 162500     1        0.09%
SPE 162500     2        0.90%
SPE 162500     3        9.00%
SPE 162500     4        90.00%

Consequence:
Paper Number   Rating   Percentage of Range
SPE 142854     1        0.10%
SPE 142854     2        0.90%
SPE 142854     3        9.00%
SPE 142854     4        90.00%
SPE 98423      1        1.00%
SPE 98423      2        4.00%
SPE 98423      3        15.00%
SPE 98423      4        81.00%
Table 9—Average LF (over all categories) for each reviewed RM.

Paper Number   LF of Consequence   LF of Probability
SPE 142854     96                  5,935
SPE 86838      30
SPE 98852      745                 245
SPE 121094     5
SPE 74080      94
SPE 123861     28                  113
SPE 162500     85                  389
SPE 98423      16
IPTC 14946     1                   3
have been integrated to provide a discipline, decision analysis,
with the objective of informing and supporting decision making
in complex and uncertain environments (e.g., many of the
risk-management decisions we face in the O&G industry). Good
general references on decision analysis include Howard (2007)
and Clemen and Reilly (2013), whereas Bratvold and Begg
(2010) provide a recent O&G-oriented introduction.
There are also a number of excellent publications that apply the
fundamental concepts of decision analysis to the types of problems
to which RMs are commonly applied. A small, but relevant, sample includes Paté-Cornell and Fischbeck's (1994) work on performing a probabilistic risk analysis of failure of the exterior surface tiles on the US space shuttle orbiter; Paté-Cornell's (2002) use of probabilistic risk analysis in government safety decisions; Chapman and Ward's (2003) discussion of project risk management; and Hubbard's (2009) introduction of several alternatives to RMs.
These authors warn that the processes and tools they discuss, illus-
trate, and recommend are not perfect and should be used in accord-
ance with sound decision-analysis principles. However, unlike
RMs, the processes and tools drawn from decision analysis are con-
sistent, do not carry the inherent flaws of the RMs, and provide
clarity and transparency to the decision-making situation. Our best
chance for providing high-quality risk-management decisions is to
apply the well-developed and consistent set of processes and tools
embodied in decision science.
Discussion and Conclusions
As suggested by Hubbard (2009), for any risk-management method
used in the O&G industry, we should ask: “How do we know it
works?” If we cannot answer that question, then our first risk-man-
agement priority should be to find and adopt a risk-management
method that does work. RMs are among the most commonly used
tools for risk prioritization and management in the O&G industry.
The matrices are recommended by several influential standardiza-
tion bodies, and our literature search found more than 100 papers in
the OnePetro database that document the application of RMs in a
risk-management context. However, we are not aware of any pub-
lished empirical evidence showing that they actually help in man-
aging risk or that they improve decision outcomes.
In this paper, we have illustrated and discussed inherent flaws
in RMs and their potential impact on risk prioritization and miti-
gation. Inherent dangers such as risk-acceptance inconsistency,
range compression, centering bias, and category-definition bias
were introduced and discussed by Cox et al. (2005), Cox (2008),
Hubbard (2009), and Smith et al. (2009). We have also addressed
several previously undocumented RM flaws: ranking reversal,
instability resulting from categorization differences, and the LF.
These flaws cannot be corrected and are inherent to the design
and use of RMs.
The ranking produced by RMs was shown to be unduly influ-
enced by their design, which is ultimately arbitrary. No guidance
exists regarding these design parameters because there is very lit-
tle to say. A tool that produces arbitrary recommendations in an
area as important as risk management in O&G should not be con-
sidered an industry best practice.
There are undoubtedly O&G professionals who recognize and
understand the inherent inaccuracy of RMs and take steps to avoid
these dangers, to the extent that this is even possible. However,
we suspect that this does not apply to the majority of O&G profes-
sionals who develop or use RMs, on the basis of the literature
review and extensive data gathering conducted for this paper. Fur-
thermore, if the initial assessment of risk is not based on meaning-
ful measures, the risk-management decisions are likely to address
the wrong problems, resulting in a waste of money and time (at
best) and in severe HSE issues (at worst).
It may be true that using RMs to analyze and manage risks is
better than doing nothing [though even that may be debatable, as
pointed out by Cox (2008) and Hubbard (2009)]. Indeed, any
approach that generates some discussion of the risks in a particu-
lar activity will be helpful. The fact that these flaws have not been
raised as an issue before is evidence that RMs obscure rather than
enlighten communication. Instead of RMs, the O&G industry
should rely on risk- and decision-analytic procedures that rest on
more than 250 years of scientific development and understanding.
References
Alkendi, M.Y.M.S. 2006. ADNOC Environmental Impact Severity Matrix, an Innovative Impact Rating Matrix. Presented at the SPE International Health, Safety & Environment Conference, Abu Dhabi, 2–4 April. SPE-98852-MS.
Al-Mitin, A.W., Sardesai, V., Al-Harbi, B. et al. 2011. Risk Based Inspec-
tion (RBI) of Aboveground Storage Tanks to Improve Asset Integrity.
Presented at the International Petroleum Technology Conference,
Bangkok, Thailand, 15–17 November. IPTC-14434-MS.
API RP 581, Risk-Based Inspection Technology. 2008. Washington, DC: American Petroleum Institute.
Areeniyom, P. 2011. The Use of Risk-Based Inspection for Aging Pipe-
lines in Sirikit Oilfield. Presented at the International Petroleum Tech-
nology Conference, Bangkok, Thailand, 15–17 November. IPTC-
Bayes, T. and Price, R. 1763. An Essay Towards Solving a Problem in the
Doctrine of Chances. By the Late Rev. Mr. Bayes, F. R. S. Communi-
cated by Mr. Price, in a Letter to John Canton, A. M. F. R. S. Philosoph-
ical Transactions 53: 370–418.
Bensahraoui, M. and Macwan, N. 2012. Risk Management Register in
Projects & Operations. Presented at the Abu Dhabi International Petro-
leum Conference and Exhibition, Abu Dhabi, 11–14 November. SPE-
Berg, F.R. 2001. The Development and Use of Risk Acceptance Criteria
for the Construction Phases of the Karsto Development Project in Nor-
way. Presented at the SPE/EPA/DOE Exploration and Production
Environmental Conference, San Antonio, Texas, 26–28 February.
Bernoulli, D. 1954. Exposition of a New Theory on the Measurement of
Risk. Econometrica 22 (1): 23–36.
Bower-White, G. 2012. Demonstrating Adequate Management of Risks:
The Move from Quantitative to Qualitative Risk Assessments. Pre-
sented at the SPE Asia Pacific Oil and Gas Conference and Exhibition,
Perth, Australia, 22–24 October 2012. SPE-158114-MS.
Bratvold, R.B. and Begg, S.H. 2010. Making Good Decisions. Richardson,
Texas: Society of Petroleum Engineers.
Budescu, D.V., Broomell, S., and Por, H.H. 2009. Improving Communication of Uncertainty in the Reports of the Intergovernmental Panel on Climate Change. Psychological Science 20 (3): 299–308.
Campbell, N.W., Tate, D.R.D. 2006. Attacking Metropolitan Driving Haz-
ards with Field-Proven Practices. Presented at the SPE International
Health, Safety & Environment Conference, Abu Dhabi, 2–4 April.
Chapman, C. and Ward, S. 2003. Project Risk Management: Processes,
Techniques and Insights, 2nd edition. New York: Wiley.
Clare, J.B. and Armstrong, L.J. 2006. Comprehensive Risk-Evaluation
Approaches for International E&P Operations. SPE Proj Fac & Const
1(3): 1-6. SPE-98679-PA.
Clemen, R.T. and Reilly, T. 2013. Making Hard Decisions with Decision-
tools, 3rd edition. Cengage Learning.
Coakley, B., Baraka, C., and Shafi, M. 2003. Enhancing Rig Site Risk
Awareness. Presented at the SPE/IADC Middle East Drilling Technol-
ogy Conference and Exhibition, Abu Dhabi, 20–22 October. SPE-
Cox Jr., L.A. 2008. What’s Wrong with Risk Matrices? Risk Analysis 28
(2): 497–512.
Cox Jr., L.A., Babayev, D., and Huber, W. 2005. Some Limitations of Qualitative Risk Rating Systems. Risk Analysis 25 (3): 651–662.
Da Silva, E.N., Neto, L.M., and Amaral, S.P. 2010. LOPA as a PHA com-
plementary tool: a Case Study. Presented at the SPE International
Howard (1988) defined the profession of decision analysis as a result of his work to
merge decision theory and systems engineering.
Conference on Health, Safety and Environment in Oil and Gas Explo-
ration and Production, Rio de Janeiro, 12–14 April. SPE-127254-MS.
De Finetti, B. 1931. Probabilism. Erkenntnis 31 (2–3): 169-223. (Septem-
ber 1989).
De Finetti, B. 1937. Foresight: Its Logical Laws, Its Subjective Sources, trans.
H.E. Kyburg Jr., Vol. 7, 1–68. Paris: Presses Universitaires de France.
Dethlefs, J. and Chastain, B. 2012. Assessing Well-Integrity Risk: A Qual-
itative Model. SPE Drill & Compl 27 (2): 294–302. SPE-142854-PA.
Duguay, A., Baccino, B., and Essel, P. 2012. From 360 Deg Health
Safety Environment Initiatives on the Rig Site to Structured HSE
Strategy: A Field Case in Abu Al Bukhoosh Field. Presented at the
Abu Dhabi International Petroleum Conference and Exhibition, Abu
Dhabi, 11–14 November. SPE-161547-MS.
Howard, R.A. 1988. Decision Analysis: Practice and Promise. Management
Science 34 (6): 679–695.
Howard, R.A. 2007. The Foundations of Decision Analysis Revisited. In
Advances in Decision Analysis: From Foundations to Applications,
Chap. 3, 32–56. Cambridge University Press.
Hubbard, D.W. 2009. The Failure of Risk Management: Why It’s Broken
and How to Fix It. Hoboken, New Jersey: John Wiley & Sons, Inc.
ISO 31000:2009, Risk Management—Principles and Guidelines. 2009.
Washington DC: American National Standards Institute.
ISO/IEC 31010:2009, Risk Management—Risk Assessment Techniques.
2009. Washington DC: American National Standards Institute.
Jones, D.W. and Bruney, J.M. 2008. Meeting the Challenge of Technology
Advancement—Innovative Strategies for Health, Environment and
Safety Risk Management. Presented at the SPE International Confer-
ence on Health, Safety, and Environment in Oil and Gas Exploration
and Production, Nice, France, 15–17 April. SPE-111769-MS.
Kinsella, K.G., Kinn, S.J., Thomassen, O. et al. 2008. Development of a
Software Tool, EPRA, for Early Phase Risk Assessment. Presented at
the SPE International Conference on Health, Safety, and Environment
in Oil and Gas Exploration and Production, Nice, France, 15–17 April.
Laplace, P.S. 1902. A Philosophical Essay on Probabilities, first edition.
New York: John Wiley & Sons.
Lee, N.M. 2009. Safety Cultures—Pushing the Boundaries of Risk Assess-
ment. Presented at the Asia Pacific Health, Safety, Security and Environment Conference, Jakarta, 4–6 August. SPE-123457-MS.
Leistad, G.H. and Bradley, A. 2009. Is the Focus too Low on Issues That
Have a Potential to Lead to a Major Incident? Presented at Offshore
Europe, Aberdeen, 8–11 September. SPE-123861-MS.
McCulloch, B.R. 2002. A Practical Approach to SH&E Risk Assessments
within Exploration & Production Operations. Presented at the SPE
International Conference on Health, Safety and Environment in Oil
and Gas Exploration and Production, Kuala Lumpur, 20–22 March.
McDermott, M.S. 2007. Risk Assessment (Hazard Management) Process
is a Continual Process, Not a One Off. Presented at the SPE Asia Pa-
cific Health, Safety, and Security Environment Conference and Exhibition, Bangkok, Thailand, 10–12 September. SPE-108853-MS.
NORSOK Standard S-012, Health, Safety and Environment (HSE) in con-
struction-related activities. 2002. Rev. 2, August. Oslo, Norway: Nor-
wegian Technology Centre (NTS).
Paté-Cornell, M.-E. and Fischbeck, P.S. 1994. Risk Management for the Tiles of the Space Shuttle. Interfaces 24 (1): 64–86.
Paté-Cornell, M.-E. 2002. Risk and Uncertainty Analysis in Government Safety Decisions. Risk Analysis 22 (3): 633–646.
Petrone, A., Scataglini, L., and Cherubin, P. 2011. B.A.R.T (Baseline Risk
Assessment Tool): A Step Change in Traditional Risk Assessment
Techniques for Process Safety and Asset Integrity Management. Pre-
sented at the SPE Annual Technical Conference and Exhibition, Den-
ver, 30 October–2 November. SPE-146845-MS.
Piper, J.W. and Carlon, J.R. 2000. Application and Integration of Security
Risk Assessment Methodologies and Technologies into Health, Safety
and Environmental (SHE) Programs. Presented at the SPE Interna-
tional Conference on Health, Safety, and Environment in Oil and Gas
Exploration and Production, Stavanger, 26–28 June. SPE-61149-MS.
Poedjono, B., Chinh, P.V., Phillips, W.J., and Lombardo, G.J. 2009. Anti-Colli-
sion Risk Management for Real-World Well Placement. Presented at the
Asia Pacific Health, Safety, Security and Environment Conference, Jakarta,
4–6 August. SPE-121094-MS.
Poedjono, B., Conran, G., Akinniranye, G. et al. 2007. Minimizing the Risk
of Well Collisions in Land and Offshore Drilling. Presented at the SPE/
IADC Middle East Drilling and Technology Conference, Cairo, 22–24
October. SPE-108279-MS.
Pritchard, D., York, P.L., Beattie, S., and Hannegan, D. 2010. Drilling
Hazard Management : The Value of Risk Assessment. World Oil 231
(10): 43–52.
Ramsey, F.P. 1931. Truth and Probability. In The Foundations of Mathe-
matics and other Logical Essays, ed. R.B. Braithwaite, Chap. 7,
156–198. Routledge and Kegan Paul Ltd. (repr. Routledge, 2013).
Reynolds, J.T. 2000. Risk Based Inspection—Where Are We Today? Pre-
sented at CORROSION 2000, Orlando, Florida, 26–31 March. NACE-
Samad, S.A., Al Sawadi, O.S., Afzal, M., and Khan, N. 2010. Risk Register
and Risk Ranking of Non-Integral Wells. Presented at the Abu Dhabi
International Petroleum Exhibition and Conference, Abu Dhabi, 1–4
November. SPE-137630-MS.
Samad, S.A., Tarmoom, I.O., Binthabet, H.A. et al. 2007. A Comprehen-
sive Approach to Well Integrity Management. Presented at the SPE
Middle East Oil and Gas Show and Conference, Kingdom of Bahrain,
11–14 March. SPE-105319-MS.
Savage L.J. 1954. The Foundations of Statistics. New York: John Wiley &
Sons (repr. Dover Publications, 1972).
Smith, E.D., Siefert, W.T., and Drain, D. 2009. Risk matrix input data
biases. Systems Engineering 12 (4): 344–360.
Smith, N., BuTuwaibeh, O.I., Cruz, I.C., and Gahtani, M.S. 2002. Risk-
Based Assessment (RBA) of a Gas/Oil Separation Plant. Presented at
the SPE International Conference on Health, Safety and Environment
in Oil and Gas Exploration and Production, Kuala Lumpur, 20–22
March. SPE-73897-MS.
Theriau, R., Rispler, K., and Redpath, S. 2004. Controlling Hazards
through Risk Management - A Structured Approach. Presented at the
SPE International Conference on Health, Safety, and Environment in
Oil and Gas Exploration and Production, Calgary, 29–31 March. SPE-
Truchon, M., Rouhan, A., and Goyet, J. 2007. Risk Based Inspection
Approach for Topside Structural Components. Presented at the Off-
shore Technology Conference, Houston, 30 April–3 May. OTC-
Tufte, E.R. and Graves-Morris, P.R. 1983. The Visual Display of Quantita-
tive Information, Vol. 31. Chesire, Connecticut: Graphics Press.
Valeur, J.R. and Clowers, M. 2006. Structure and Functioning of the ISO
14001 and OHSAS 18001 Certified HSE Management System of the
Offshore Installation South Arne. Presented at the SPE International
Health, Safety & Environment Conference, Abu Dhabi, 2–4 April.
von Neumann, J. and Morgenstern, O. 1944. Theory of Games and Eco-
nomic Behavior. Princeton, New Jersey: Princeton University Press.
Windschitl, P.D. and Weber, E.U. 1999. The Interpretation of "Likely" Depends on the Context, but "70%" Is 70%—Right? The Influence of Associative Processes on Perceived Certainty. J Exp Psychol: Learn Mem Cogn 25 (6): 1514–1533.
Zainuddin, Z.M., Samad, A.H., Hasyim, I.B. et al. 2002. Conducting Pub-
lic Health Risk Assessment in a Remote Drilling Site in Indonesia: An
Experience. Presented at the SPE International Conference on Health,
Safety and Environment in Oil and Gas Exploration and Production,
Kuala Lumpur, 20–22 March 2002. SPE-74080-MS.
Paper Year Author(s)
Bias Centering Bias Scoring System
Corrosion 2000 2000 Reynolds, J.T. Yes Yes Not available Ascending
SPE 61149 2000 Piper and Carlon Yes Yes Not available Descending
SPE 66516 2001 Berg, F.R. Yes Yes Not available Ascending
SPE 73892 2002 McCulloch Yes Yes Not available
SPE 73897 2002 Smith et al. Yes Yes Yes Ascending
SPE 74080 2002 Zainuddin et al. Yes Yes Yes Descending
SPE 85299 2003 Coakley et al. Yes Yes Not available Ascending
SPE 86838 2004 Theriau et al. Yes Yes Not available Descending
SPE 98566 2006 Campbell and Tate Yes Yes Not available Ascending
SPE 98852 2006 Alkendi Yes Yes Not available Ascending
SPE 98679 2006 Clare and Armstrong Yes Yes Not available Ascending
SPE 98423 2006 Valeur and Clowers Yes Yes Not available Ascending
SPE 108279 2007 Poedjono et al. Yes Yes Not available Ascending
SPE 108853 2007 McDermott Yes Yes Not available Ascending
SPE 105319 2007 Samad et al. Yes Yes Not available Ascending
OTC 18912 2007 Truchon et al. Yes Yes Yes Descending
SPE 111549 2008 Kinsella et al. Yes Yes Not available Ascending
SPE 121094 2009 Poedjono et al. Yes Yes Not available Ascending
SPE 123457 2009 Lee Yes Yes Not available Ascending
SPE 123861 2009 Leistad and Bradley Yes No Not available Ascending
SPE 111769 2009 Jones and Bruney Yes Yes Not available Descending
SPE 137630 2010 Samad et al. Yes Yes Not available Ascending
SPE 127254 2010 Da Silva et al. Yes Yes Not available Ascending
IPTC 14434 2011 Al-Mitin et al. Yes Yes Not available Ascending
IPTC 14946 2011 Areeniyom Yes Yes Not available Ascending
SPE 146845 2011 Petrone et al. Yes Yes Yes Ascending
SPE 158114 2012 Bower-White Yes Yes Not available Ascending
SPE 162500 2012 Bensahraoui and Macwan Yes Yes Yes Ascending
SPE 142854 2012 Dethlefs and Chastain Yes Yes Yes Ascending
SPE 161547 2012 Duguay et al. Yes Yes Not available Ascending
Appendix—30 Selected SPE Papers and Their Flaws
Philip Thomas is a PhD candidate in petroleum investment
and decision analysis at the University of Stavanger and is
advised by R.B. Bratvold. He is interested in the applications of
decision analysis and real-options analysis in the O&G industry.
Thomas holds a master’s degree in petroleum engineering from the University of Stavanger and a bachelor’s degree in petroleum engineering from Bandung Institute of Technology, Indonesia.
Reidar B. Bratvold is a professor of petroleum investment and
decision analysis at the University of Stavanger and at the Nor-
wegian University of Science and Technology in Trondheim,
Norway. His research interests include decision analysis, valua-
tion of risky projects, portfolio analysis, real-option valuation,
and behavioral challenges in decision making. Before enter-
ing academia, Bratvold spent 15 years in the industry in various
technical and management roles. He is a coauthor of the SPE Primer Making Good Decisions. Bratvold is an associate editor for SPE Economics & Management and has twice served as an SPE Distinguished Lecturer. He is a fellow and board member in the Society of Decision Professionals and was made a member of the Norwegian Academy of Technological Sciences for his work in petroleum investment and decision analysis.
Bratvold holds a PhD degree in petroleum engineering and a
master’s degree in mathematics, both from Stanford Univer-
sity, and obtained business and management-science edu-
cation from INSEAD and Stanford University.
J. Eric Bickel is an assistant professor in both the Graduate Pro-
gram in Operations Research/Industrial Engineering (Depart-
ment of Mechanical Engineering) and the Department of
Petroleum and Geosystems Engineering at the University of
Texas at Austin. In addition, he is a fellow with the Center for
Petroleum Asset Risk Management. Bickel’s research interests
include the theory and practice of decision analysis and its
application in the O&G industry. Before returning to aca-
demia, he was a Senior Engagement Manager for Strategic
Decisions Group. Bickel holds a master’s degree and a PhD
degree from the Department of Engineering-Economic Sys-
tems at Stanford University.
66 April 2014 SPE Economics & Management