IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 36, NO. 2, MAY 1989
On the Uses of Expert Judgment on Complex Technical Problems

RALPH L. KEENEY AND DETLOF VON WINTERFELDT
Abstract-This paper places in perspective the role and uses of expert judgment in examining complex technical and engineering problems. Specifically, we indicate how expert judgments are usually used in analyzing technical problems, how to improve the use of expert judgments, and how to interpret expert judgments in analysis. The value of quantifying expert judgments to complement the expert's qualitative thinking and reasoning is stressed. The relationships between procedures to quantify judgments and the general principles of engineering are discussed.
THE USE OF EXPERT JUDGMENT ON TECHNICAL PROBLEMS IS UNAVOIDABLE AND DESIRABLE
JUDGMENT is extensively applied in searching for the solution to any significant technical problem. Indeed,
judgments are necessary in all phases of dealing with technical
problems. A judgment is initially required in determining that
a problem is even worthy of attention. Then judgment is
needed to understand the problem dimensions, to develop
alternatives, to decide what data to collect and what not to
collect, to choose what models to build, to interpret the results
of any data collection or any calculation, and to put all the
information together to analyze and solve the problem. It is
better that these judgments are made by experts rather than nonexperts, because experts have the knowledge and experience to make these judgments. Experts are sought to work on
complex problems precisely because of their expertise, not
because they are able to avoid the use of judgment.
EXPLICIT USE OF EXPERT JUDGMENT IS OFTEN VALUABLE
Since the use of expert judgment is unavoidable in examining technical problems, the main issue is whether it should be used implicitly or explicitly. Or, since expert judgment is always partially given implicitly, a more precise statement of the issue is, under what circumstances is it worth the effort to make certain expert judgments explicit? Although there is no clear guideline, there are numerous problems where explicating expert judgments serves as a valuable complement to, not a substitute for, the use of implicit expert judgments. Explicit judgments typically break an implicit thought process into smaller parts and apply logic to integrate these parts. Data or calculations may provide numerical estimates for some of the problem parts. In addition, the steps and the judgments used in an explicit thought process can and should be clearly and thoroughly documented to improve communication and facilitate peer review. To some extent, the use of explicit expert judgments can be thought of as a consistency check of the implicit thought process and vice versa.

Manuscript received August 3, 1988. The review of this paper was processed by Editor D. F. Kocaoglu. This work was partially supported by the National Science Foundation Grant SES-8520252. The authors are with the Systems Science Department, University of Southern California, Los Angeles, CA 90089-0021. IEEE Log Number 8927479.
Significantly more effort is required to make expert
judgments explicit than to use implicit expert judgment. It is
worth this effort when the problem is particularly important or
complex, when information is required from a range of
technical disciplines, or when communication and/or justifica-
tion of the experts’ thought processes or their implications are
important. When a technical problem is complex, it is
extremely difficult to informally process all of the information
in one’s head. This is one reason why engineers and scientists
build models to aid their thinking in complex situations.
If the knowledge of several disciplines is required on a specific technical problem, then no individual has the expertise to make overall implicit judgments. The problem must be decomposed so expertise can be utilized from the various disciplines. This knowledge can be integrated more reasonably if the expert judgments are explicit. Then a model can be constructed to integrate and appraise the technical parts.
Implicit judgments are more difficult than explicit expert
judgments to communicate precisely. Clear communication
requires that judgments are made explicit for review and
appraisal. Indeed, asking an expert to explain the reasons,
assumptions, and thought processes underlying a particular set
of conclusions means that the process of explication is well
under way.
QUANTIFYING EXPERT JUDGMENT HAS MANY ADVANTAGES
Expert judgment can be explicated either quantitatively or
qualitatively. Experts are often uncomfortable with quantita-
tive expressions of their judgments, because they worry that
numbers reflect more precision and knowledge than they
really have. In particular, when data are limited or missing and
when calculations and models are unsatisfactory or even
contradictory, many experts prefer verbal qualifications over
numerical expressions of knowledge because words seem to
reflect their own vagueness.
Quantification of expert judgments has, however, many
advantages over words. First, words are ambiguous and
imprecise. For instance, the interpretation of “a small chance
of a moderate to large earthquake in the near future" is very ambiguous compared to "a 10-percent chance of a Richter magnitude 6 or greater earthquake in the next 5 years."
Numerous researchers (see von Winterfeldt and Edwards [18]) have demonstrated that qualitative terms such as "small chance" have large ranges of interpretation; in this case from
around 1 to 40 percent, depending on whom you ask. On the other hand, a "10-percent chance" has an unambiguous meaning, so quantification certainly facilitates communication.
Quantification also requires hard thought about the exact
meaning of a judgment. The comfortable vagueness of words
often reflects a vagueness about the question being asked
rather than vagueness about the answer. Furthermore, numeri-
cal expressions of judgments both allow and force experts to
be precise about what they know as well as to acknowledge
what they do not know.
PROBABILITIES APPROPRIATELY EXPRESS AND QUANTIFY EXPERT JUDGMENTS
The purpose of quantifying expert judgments is to unambiguously record the expert's state of knowledge about something requiring his or her expertise. Probabilities provide a mathematical representation of an expert's state of knowledge, that is, of what one knows and does not know. This may be interpreted as offering several possible hypotheses of what may happen, together with one's judgment about the relative likelihood that each proves to be true. Each statement reflects the degree of belief in propositions about uncertain events. These propositions can be about uncertain phenomena (e.g., whether the probability of the recurrence of an earthquake on a fault segment increases with time since the last earthquake) or about parameters underlying a probabilistic process (e.g., the average time between major earthquakes on a fault).
The use of probabilities to express and process expert opinions has a long history dating back over two centuries to Bayes [1]. In this century, Ramsey [13], de Finetti [3], and Savage [14] laid the conceptual and philosophical foundation for quantifying expert judgments as probabilities. More recent discussion can be found in Kyburg and Smokler [7] and von Winterfeldt and Edwards [18].
Since probabilities are numerical expressions of expert judgments, their usefulness partially rests on the arguments for quantification made in the previous section. In addition, however, probabilities are useful because they provide access to the substantial apparatus of probability theory, which allows for consistency checks and rules for updating uncertain knowledge based on new information. Consistency checks can be simple, like "the probabilities of mutually exclusive and collectively exhaustive events must sum to one," or complicated, like "the conditional probability of an event A given event B is equal to the joint probability of the two events divided by the marginal probability of event B." Expert systems may substantially improve our ability to assess the consistency of a complicated set of expert judgments (see [11]). Bayes' theorem prescribes how probabilities should be revised to take into account new information (see [2]).
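To make these two uses of probability theory concrete, the sketch below (not from the original analysis; the hypothesis names and numbers are hypothetical) checks that an elicited distribution over mutually exclusive and collectively exhaustive hypotheses sums to one and then revises it with Bayes' theorem when new evidence arrives.

```python
# Illustrative sketch only; hypothetical judgments, not the authors' data.

def sums_to_one(probs, tol=1e-9):
    """Consistency check: probabilities of mutually exclusive and
    collectively exhaustive events must sum to one."""
    return abs(sum(probs.values()) - 1.0) <= tol

def bayes_update(prior, likelihood):
    """Revise a prior over hypotheses given the likelihood of the observed
    evidence under each hypothesis (Bayes' theorem)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    marginal = sum(joint.values())                  # probability of the evidence
    return {h: joint[h] / marginal for h in joint}

# Hypothetical expert judgment about earthquake recurrence on a fault segment.
prior = {"recurrence rate increases with time": 0.6,
         "recurrence rate is constant": 0.4}
assert sums_to_one(prior)

# Hypothetical likelihoods of newly observed data under each hypothesis.
likelihood = {"recurrence rate increases with time": 0.2,
              "recurrence rate is constant": 0.5}

posterior = bayes_update(prior, likelihood)
print(posterior)  # {'recurrence rate increases with time': 0.375, ...: 0.625}
```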
OBTAINING PROBABILITY JUDGMENTS FROM EXPERTS
The assessor wants to ensure that the state of knowledge of the expert is accurately reflected in the assessed probabilities. This is done through a long series of questions. Over the last 20 years there has been an accumulation of significant applied experience, a large number of experimental studies, and several formal investigations that provide guidance on the techniques of probability elicitation (for example, see [5], [8], [10], [15]-[18]). Probability elicitation requires experience, skill, art, and science.
The art of assessment is crucial to help the expert feel
comfortable and to adapt the questioning process to facilitate
the expression of knowledge in the manner corresponding to
the expert’s thought process. The assessor is in some sense
both designing and playing a “chess game” with the expert,
with the special property that it is a cooperative rather than
competitive game. The science of assessment comes directly
from the fundamental axioms of probability theory and their
implications. In addition, all assessments must be consistent
with relevant scientific laws (e.g., gravity, laws of thermody-
namics, fluid flow, and chemical reactions) and should
account for any available data.
As an illustration, a recent problem involved estimating the amount of hydrogen that would be produced in a nuclear reactor vessel during a specified accident (see [4]). This would, among other things, depend on the amount of zirconium available to be oxidized, the chemical reaction between steam and zirconium to produce hydrogen and zirconium oxide, the pressure and temperature of the steam, the circulation pattern of the steam in the reactor, and the melting temperature of the alloy that shields the zirconium from the steam. Scientific models and data are generally used to describe each of these aspects individually under well-controlled conditions (e.g., known temperature, pressure, steam flow, and exposed zirconium). However, it is the unique conditions of the specific nuclear accident that are not precisely known and the dynamic, as opposed to equilibrium, conditions that are of interest. Hence, expert judgment is necessary to integrate the data and model calculations with a broader knowledge of the dynamics and sources of uncertainty to provide an appropriate estimate of hydrogen production. The result should be consistent with all the scientific knowledge, and the assessment process should be guided by this knowledge.
Three general principles of engineering are also used in assessing expert judgments: 1) try a reasonable approach to explicate expert judgments, and if this fails, try a different reasonable alternative (e.g., trial and error without destructive testing); 2) use successively better approximations both to converge to expert judgments and to bound them from above and below; and 3) use independent approaches to obtain judgments that serve as consistency checks of assessed information. Elaboration on each of these three may be appropriate.
There are many reasonable approaches to express knowledge in terms of probabilities. Based on an understanding of the technical process, an expert may feel that a quantity of interest may be represented by a particular probability distribution (e.g., lognormal, Poisson). One expert may feel more comfortable providing the median and various fractiles of a cumulative probability distribution, whereas another expert may feel more comfortable ranking the relative likelihoods of various intervals of the quantity, which can then be normalized to yield probabilities.
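As a sketch of the second style of elicitation just mentioned (the quantity, intervals, and ratings are hypothetical and only illustrative), relative likelihood ratings given by an expert for intervals of a quantity can be normalized into a discrete probability distribution:

```python
# Hypothetical elicitation: relative likelihood ratings for intervals of a
# quantity of interest (e.g., kilograms of hydrogen produced).
relative_likelihoods = {
    "(0, 100]":   1.0,   # reference interval
    "(100, 200]": 3.0,   # judged three times as likely as the reference
    "(200, 300]": 4.0,
    "(300, 400]": 2.0,
}

total = sum(relative_likelihoods.values())
probabilities = {interval: rating / total
                 for interval, rating in relative_likelihoods.items()}

for interval, p in probabilities.items():
    print(f"P(quantity in {interval}) = {p:.2f}")
# Normalization guarantees the resulting probabilities sum to one, one of the
# simple consistency checks noted earlier.
```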
For quantities where the expert is using calculations to aid
thinking, the assessor may wish to decompose the assessment
into steps. For example, to estimate possible health effects due to air pollution, assessments might first estimate pollutant concentrations conditional on emission levels and then health effects conditional on pollutant concentrations (see [9]).
In fact, these two assessments may rely on different experts
since a meteorologist is needed to assess pollutant concentra-
tions given emissions and a physiologist is required to assess
health effects conditional on pollutant concentrations. The
latter assessment might further be decomposed into a physical
effect (e.g., parts per million of the pollutant in the blood)
given exposure to different concentration levels and health
effects given different physical effects.
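A minimal sketch of such a decomposition (the numbers and category labels are hypothetical; the analysis in [9] is far more detailed): the meteorologist's judgment of concentrations given emissions and the physiologist's judgment of health effects given concentrations are combined by the law of total probability.

```python
# Hypothetical decomposed assessment for one fixed emission scenario.
# Meteorologist: probability of each ambient pollutant concentration level.
p_concentration = {"low": 0.5, "medium": 0.3, "high": 0.2}

# Physiologist: probability of an adverse health effect at each level.
p_effect_given_conc = {"low": 0.01, "medium": 0.05, "high": 0.20}

# Law of total probability combines the two experts' judgments.
p_effect = sum(p_concentration[c] * p_effect_given_conc[c]
               for c in p_concentration)
print(f"P(adverse health effect | emission scenario) = {p_effect:.3f}")  # 0.060
```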
The use of successive approximations begins with easy questions to bound a quantity of interest. An example based on an analysis by Keeney and Lamont [6] concerned the probability that a landslide would occur at a particular site due to a magnitude 6 earthquake on a nearby fault. Although data were available from soil testing and analysis at the site and significant experience relating earthquakes to landslides under numerous conditions, there was no direct way to calculate the probability of a landslide. The assessor first asked the expert, "If a magnitude 6 earthquake occurs on the nearby fault, do you think it is at least 90 percent likely that a landslide would occur?" The response was, "Nowhere near that high." The assessor then asked, "Is there a one-half chance of a landslide?" The response was, "It is less than that." This bounds the probability of a landslide at 0.5. The next question was, "Is there at least a 5-percent likelihood of a landslide given a magnitude 6 earthquake?" The response "yes" bounded the probability of a landslide from below with 0.05. The 0.05 response seemed more difficult to make than the 50-50 response, so the assessor asked, "Is the probability less than 0.3?" The response was, "Yes, but you are getting there." "How about 15 percent, would it be that likely?" The expert said, "It's at least 15 percent; I think the likelihood of the landslide is about 20 percent." The assessor still proceeded with, "How does 25 percent sound?" The expert stated, "It could be that high, but I think the 20 percent is a better estimate given my current knowledge of the site conditions." This leads one to conclude that reasonable bounds on the probability of the landslide are 0.15 and 0.25, with 0.2 being a good estimate. The expert's reasoning for this judgment was then carefully documented.
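The dialog above can be viewed as successively narrowing an interval around the quantity of interest. A small bookkeeping sketch of that idea follows; the question sequence and numbers simply restate the landslide example, and the helper function is illustrative rather than a prescribed protocol.

```python
# Track lower and upper bounds on an elicited probability as the expert
# accepts successive bounding statements (numbers from the landslide example).

def tighten(bounds, value, is_upper):
    """Tighten the (lower, upper) interval after the expert accepts a bound."""
    lower, upper = bounds
    return (lower, min(upper, value)) if is_upper else (max(lower, value), upper)

bounds = (0.0, 1.0)
bounds = tighten(bounds, 0.90, is_upper=True)   # "nowhere near 90 percent"
bounds = tighten(bounds, 0.50, is_upper=True)   # "less than one-half"
bounds = tighten(bounds, 0.05, is_upper=False)  # "at least 5 percent"
bounds = tighten(bounds, 0.30, is_upper=True)   # "less than 0.3"
bounds = tighten(bounds, 0.15, is_upper=False)  # "at least 15 percent"
bounds = tighten(bounds, 0.25, is_upper=True)   # "could be 25, but 20 is better"

print(f"Elicited bounds: [{bounds[0]:.2f}, {bounds[1]:.2f}]")  # [0.15, 0.25]
best_estimate = 0.20  # expert's stated best estimate within these bounds
```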
A well designed assessment process has many consistency checks, based on a principle analogous to triangulation as used in surveying. In surveying, if you want the elevation of site B relative to site A, you first directly measure the elevation of B relative to A and then compare the elevation of both sites to the elevation of an intermediate site C. Then, for consistency, the difference between sites A and B should equal the sum of the differences between A and C and between C and B. Inconsistencies are resurveyed, often using additional intermediate points, until consistency is achieved. With probabilities, one directly assesses a probability distribution for the desired quantity and then uses decomposed assessment as another approach. Consistency is then checked and reassessments are done in the case of significant discrepancies. Consistency checks can also include examining the shapes of probability densities, the probabilities of different intervals on the quantity of interest, or rankings of the likelihoods of various events, and comparing these with implications of the assessments. If multiple lines of reasoning and judgments lead to the same result (i.e., probabilities), you feel more comfortable using the judgment as representing the current state of knowledge.
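A small sketch of this probabilistic analogue of triangulation (all numbers are hypothetical): the same probability is assessed directly and through a decomposition, and a discrepancy larger than an agreed tolerance triggers reassessment.

```python
# Hypothetical consistency check: direct versus decomposed assessment of the
# probability that a site fails during a specified earthquake.
direct_assessment = 0.18  # directly elicited probability

# Decomposed route: failure through two mutually exclusive modes.
p_mode = {"landslide": 0.12, "ground rupture": 0.05}
decomposed_assessment = sum(p_mode.values())  # 0.17

tolerance = 0.05
discrepancy = abs(direct_assessment - decomposed_assessment)
if discrepancy > tolerance:
    print("Inconsistent: revisit the judgments behind both assessments.")
else:
    print(f"Consistent within {tolerance}: discrepancy = {discrepancy:.2f}")
```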
DOCUMENTATION OF EXPERT JUDGMENTS
An extremely important element of any elicitation of expert
judgment is the accompanying documentation. It is desirable
to make the reasoning on which explicit expert judgments are
based as clear as possible. Any assumptions or data used, whether general or specific, should be listed along with the logic supporting their relevance. For example, if an expert uses data on small earthquakes to infer the relative frequency of the occurrence of large earthquakes, the reasoning should be stated. In short, a quality documentation of expert judgments should be done for the same reasons, should answer the same questions, and should lend credibility to the work exactly as does a quality documentation for any significant technical or scientific work.
There are two other advantages of explicating expert
judgments accompanied by a quality documentation. First, this
process enhances the thoroughness and ease with which peer
review can be conducted. And of course it should be clear that
peer review of expert judgments is as important as peer review
of other parts of complex technical analyses. Second, with
expert judgments made explicit and reasoning stated, it is
easier for both the experts themselves and appraisers to
identify both inadvertent and intentional biases in judgments.
This should have a positive influence on the quality of the
judgments produced.
USES AND MISUSES OF EXPERT JUDGMENTS
As is the case with all applied technical work, expert
judgments can be misinterpreted, misrepresented, and mis-
used. To reduce the likelihood of such incidents, it is
important to correctly interpret and use expert assessments.
Expert judgments are not equivalent to technical calculations based on universally accepted scientific laws or to the availability of extensive data on precisely the quantities of interest. Expert judgments should be made explicit for problems where neither of the above is available. Expert assessments in the form of probabilities represent a snapshot at a given time of the state of knowledge of the individual expert about a given item of interest. The probabilities afford the opportunity to express both what the expert knows and does not know. Indeed, by being explicit about expert judgments and documenting the reasoning for them, it is possible to design experiments that would best increase our knowledge and understanding about a complex problem.
The main misuses of explicit expert judgments stem from misrepresentation of or overreliance on them. Expert judgments often have significant uncertainties, and it is critical to include these when reporting expert judgments. For example, just reporting an average without a range or a probability distribution for a quantity of interest gives the illusion of too much precision and of objectivity. Expert judgments are sometimes inappropriately used to avoid gathering additional
management or scientific information. These judgments
should complement information that should be gathered, not
substitute for it. Sometimes decision makers with a predis-
posed desire to select a given alternative seek experts whose
views support or justify their position. This is clearly a misuse
of judgments. However, it is worth noting that with the
judgments made explicit, it is easier to identify weaknesses in
the reasoning behind a decision.
Since science and knowledge are constantly changing, it is natural that the state of knowledge of an individual changes, so his or her assessments will probably be different in the future than they are today. Also, any expert has constraints on the time available to study and assimilate everything about an item of interest. And a particular fact or data set may be overlooked during an assessment. Expert assessments are designed to be updated to account for such situations. Indeed, being explicit both reduces the likelihood of omitting important information from one's judgments and enhances the likelihood that "shortcomings" in reasoning are detected. The need to change expert assessments does not reflect failures of the experts, the assessments, or the assessment process. Rather, it is a natural and desired feature of dealing with the reality of science, knowledge, and complex problems.
As a result of expert assessments, someone or some organization may wish to "demonstrate that some assessments could not be correct." For example, suppose an organization felt the range for possible hydrogen production in a nuclear reactor during a specified accident was estimated "too high" by the experts. If this led to additional experimentation that clearly demonstrated their position, that would be a success for the assessments and the explicit assessment process. One intent is to motivate the advancement and improved communication of science.
ASSESSMENTS WITH GROUPS OF EXPERTS
Since different experts may have different information or different interpretations of information, they can naturally have different judgments. For new and complex problems, a diversity of opinions might be expected. If such differences exist, they would be identified in the expert assessments. Certainly for complex problems it is useful to know the range of expert interpretation that exists and the reasoning behind any differences. A large study of the risks of nuclear power plants recently used multiple experts on many issues to understand and document the range of expert opinion on this important problem (see [12]).
When the judgments of different experts conflict, the judgments of the different experts can be used in analogous analyses of the problem to appraise whether the implications for decision making are different. If so, perhaps the basis of these particular judgments should be subjected to additional study (e.g., experimentation) or analysis. The sources of information, logic, and interpretations of the experts should be appraised. It is often useful at this stage to have the experts interact and share knowledge before revising their judgments (see [8]).
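A hedged sketch of this appraisal (the experts, probabilities, losses, and decision rule are hypothetical): each expert's judgment is run through the same simple analysis, and disagreement in the resulting decision flags the issue for further study or for interaction among the experts.

```python
# Hypothetical: two experts' probabilities that a landslide occurs, and a rule
# that calls for mitigation when the expected loss exceeds the mitigation cost.
experts = {"expert A": 0.20, "expert B": 0.45}
loss_if_landslide = 10.0e6  # dollars, hypothetical
mitigation_cost = 3.0e6     # mitigate only if expected loss exceeds this

decisions = {}
for name, p in experts.items():
    expected_loss = p * loss_if_landslide
    decisions[name] = "mitigate" if expected_loss > mitigation_cost else "accept risk"

print(decisions)  # {'expert A': 'accept risk', 'expert B': 'mitigate'}
if len(set(decisions.values())) > 1:
    print("Implications differ: study the basis of these judgments further.")
```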
SUMMARY
Expert judgment will always be a key ingredient of technical
analysis. We know much about how to elicit and use it, but we
still have much to learn. More research is needed on
qualitative judgments such as the relevance of particular
variables to a model of some complex phenomenon. Proce-
dures to improve our ability to identify assumptions on which
judgments rely would be helpful. Experiments to learn how
to improve the quality of judgments elicited over time (e.g.,
interest rates one year from now) from individual experts are
also a priority. Additional practical experience in organizing
groups of experts to appropriately share knowledge and
improve the resulting quality of judgments would also be very
useful.
The value of expert assessments to the study of a complex problem should be appraised in terms of their usefulness for communication, learning, understanding, and decision making. To do this, one must understand the interpretation of expert assessments and their proper uses. Expert assessments are meant to be a complement to and motivation for scientific studies and analysis, not a substitute for either. With this orientation, the potential value added to an analysis of a complex problem by explicit expert assessment should be substantial.
REFERENCES
[1] T. Bayes, "Essay toward solving a problem in the doctrine of chances," Biometrika, vol. 45, pp. 293-315, 1958 (original 1763).
[2] J. R. Benjamin and C. A. Cornell, Probability, Statistics, and Decision for Civil Engineers. New York: McGraw-Hill, 1970.
[3] B. de Finetti, "La prévision: ses lois logiques, ses sources subjectives," Annales de l'Institut Henri Poincaré, vol. 7, pp. 1-68, 1937.
[4] R. John, R. L. Keeney, and D. von Winterfeldt, "Probabilistic estimates of complex technical phenomena: Estimating hydrogen production during severe nuclear power plant accidents," presented at the Nat. Meet. Operations Res. Soc. Amer., Vancouver, B.C., May 8-10, 1989.
[5] R. L. Keeney, Siting Energy Facilities. New York: Academic Press, 1980.
[6] R. L. Keeney and A. Lamont, "A probabilistic analysis of landslide potential," in Proc. 2nd U.S. Nat. Conf. Earthquake Eng., Stanford Univ. (Stanford, CA), Aug. 22-24, 1979.
[7] H. E. Kyburg, Jr., and H. E. Smokler, Eds., Studies in Subjective Probability. New York: Wiley, 1964.
[8] M. L. Merkhofer, "Quantifying judgmental uncertainty: Methodology, experiences, and insights," IEEE Trans. Syst., Man, Cybern., vol. SMC-17, pp. 741-752, 1987.
[9] M. G. Morgan, S. C. Morris, M. Henrion, D. A. L. Amaral, and W. R. Rish, "Technical uncertainty in quantitative policy analysis-A sulfur air pollution example," Risk Anal., vol. 4, pp. 201-216, 1984.
[10] A. Mosleh, V. M. Bier, and G. Apostolakis, "A critique of current practice for the use of expert opinions in probabilistic assessment," Reliab. Eng. Syst. Safety, vol. 20, pp. 63-85, 1988.
[11] J. L. Mumpower, L. D. Phillips, O. Renn, and V. R. R. Uppuluri, Eds., Expert Judgment and Expert Systems. Heidelberg, W. Germany: Springer, 1987.
[12] N. R. Ortiz, T. A. Wheeler, M. A. Meyer, and R. L. Keeney, "Use of expert judgment in NUREG-1150," presented at the Sixteenth Water Reactor Safety Infor. Meet., Washington, DC, Oct. 24-27, 1988.
[13] F. P. Ramsey, "Truth and probability," in The Foundations of Mathematics and Other Logical Essays, R. B. Braithwaite, Ed. New York: Harcourt, 1931.
[14] L. J. Savage, The Foundations of Statistics. New York: Wiley, 1954.
[15] C. S. Spetzler and C. A. Stael von Holstein, "Probability encoding in decision analysis," Management Sci., vol. 22, pp. 340-352, 1975.
[16] T. S. Wallsten and D. V. Budescu, "Encoding subjective probabilities: A psychological and psychometric review," Management Sci., vol. 29, pp. 151-173, 1983.
[17] R. L. Winkler, "The quantification of judgment: Some methodological suggestions," J. Amer. Stat. Ass., vol. 62, pp. 1105-1120, 1967.
[18] D. von Winterfeldt and W. Edwards, Decision Analysis and Behavioral Research. New York: Cambridge, 1986.