IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 36, NO. 2, MAY 1989
On the Uses of Expert Judgment on Complex Technical Problems

RALPH L. KEENEY AND DETLOF VON WINTERFELDT
Abstract-This paper places in perspective the role and uses of expert judgment in examining complex technical and engineering problems. Specifically, we indicate how expert judgments are usually used in analyzing technical problems, how to improve the use of expert judgments, and how to interpret expert judgments in analysis. The value of quantifying expert judgments to complement the expert’s qualitative thinking and reasoning is stressed. The relationships between procedures to quantify judgments and the general principles of engineering are discussed.
THE USE OF EXPERT JUDGMENT ON TECHNICAL PROBLEMS IS UNAVOIDABLE AND DESIRABLE
JUDGMENT is extensively applied in searching for the solution to any significant technical problem. Indeed,
judgments are necessary in all phases of dealing with technical
problems. A judgment is initially required in determining that
a problem is even worthy of attention. Then judgment is
needed to understand the problem dimensions, to develop
alternatives, to decide what data to collect and what not to
collect, to choose what models to build, to interpret the results
of any data collection or any calculation, and to put all the
information together to analyze and solve the problem. It is
better that these judgments are made by experts rather than nonexperts, because experts have the knowledge and experience to make these judgments. Experts are sought to work on
complex problems precisely because of their expertise, not
because they are able to avoid the use of judgment.
EXPLICIT USE OF EXPERT JUDGMENT IS OFTEN VALUABLE
Since the use of expert judgment is unavoidable in examining technical problems, the main issue is whether it should be used implicitly or explicitly. Or, since expert judgment is always partially given implicitly, a more precise statement of the issue is, under what circumstances is it worth the effort to
make certain expert judgments explicit? Although there is no
clear guideline, there are numerous problems where explicat-
ing expert judgments serves as a valuable complement to, not a
substitute for, the use of implicit expert judgments. Explicit
judgments typically break an implicit thought process into
smaller parts and apply logic to integrate these parts. Data or
calculations may provide numerical estimates for some of the
problem parts. In addition, the steps and the judgments used in
an explicit thought process can and should be clearly and
thoroughly documented to improve communication and facilitate peer review. To some extent, the use of explicit expert judgments can be thought of as a consistency check of the implicit thought process and vice versa.

Manuscript received August 3, 1988. The review of this paper was processed by Editor D. F. Kocaoglu. This work was partially supported by National Science Foundation Grant SES-8520252. The authors are with the Systems Science Department, University of Southern California, Los Angeles, CA 90089-0021. IEEE Log Number 8927479.
Significantly more effort is required to make expert
judgments explicit than to use implicit expert judgment. It is
worth this effort when the problem is particularly important or
complex, when information is required from a range of
technical disciplines, or when communication and/or justifica-
tion of the experts’ thought processes or their implications are
important. When a technical problem is complex, it is
extremely difficult to informally process all of the information
in one’s head. This is one reason why engineers and scientists
build models to aid their thinking in complex situations.
If the knowledge of several disciplines is required on a
specific technical problem, then no individual has the expertise
to make overall implicit judgments. The problem must be
decomposed so expertise can be utilized from the various disciplines. This knowledge can be integrated more reasonably if the expert judgments are explicit. Then a model can be constructed to integrate and appraise the technical parts.
Implicit judgments are more difficult than explicit expert
judgments to communicate precisely. Clear communication
requires that judgments are made explicit for review and
appraisal. Indeed, asking an expert to explain the reasons,
assumptions, and thought processes underlying a particular set
of conclusions means that the process of explication is well
under way.
QUANTIFYING EXPERT JUDGMENT HAS MANY ADVANTAGES
Expert judgment can be explicated either quantitatively or
qualitatively. Experts are often uncomfortable with quantita-
tive expressions of their judgments, because they worry that
numbers reflect more precision and knowledge than they
really have. In particular, when data are limited or missing and
when calculations and models are unsatisfactory or even
contradictory, many experts prefer verbal qualifications over
numerical expressions of knowledge because words seem to
reflect their own vagueness.
Quantification of expert judgments has, however, many
advantages over words. First, words are ambiguous and
imprecise. For instance, the interpretation of “a small chance
of a moderate to large earthquake in the near future” is very ambiguous compared to “a 10-percent chance of a Richter magnitude 6 or greater earthquake in the next 5 years.”
Numerous researchers (see von Winterfeldt and Edwards [18]) have demonstrated that qualitative terms such as “small chance” have large ranges of interpretation; in this case from
around 1 to 40 percent, depending on whom you ask. On the other hand, a “10-percent chance” has an unambiguous meaning, so quantification certainly facilitates communication.
Quantification also requires hard thought about the exact
meaning of a judgment. The comfortable vagueness of words
often reflects a vagueness about the question being asked
rather than vagueness about the answer. Furthermore, numeri-
cal expressions of judgments both allow and force experts to
be precise about what they know as well as to acknowledge
what they do not know.
PROBABILITIES APPROPRIATELY EXPRESS AND QUANTIFY EXPERT JUDGMENTS
The purpose of quantifying expert judgments is to unambiguously record the expert’s state of knowledge about something requiring his or her expertise. Probabilities provide a mathematical representation of an expert’s state of knowledge, of what one knows and does not know. This may be interpreted as offering several possible hypotheses of what may happen, together with one’s judgment about the relative likelihood that each proves to be true. Each statement reflects a degree of belief in propositions about uncertain events. These propositions can be about uncertain phenomena (e.g., whether the probability of the recurrence of an earthquake on a fault segment increases with time since the last earthquake) or about parameters underlying a probabilistic process (e.g., the average time between major earthquakes on a fault).
The use of probabilities to express and process expert opinions has a long history dating back over two centuries to Bayes [1]. In this century, Ramsey [13], de Finetti [3], and Savage [14] laid the conceptual and philosophical foundation for quantifying expert judgments as probabilities. More recent discussion can be found in Kyburg and Smokler [7] and von Winterfeldt and Edwards [18].
Since probabilities are numerical expressions of expert
judgments, their usefulness partially rests on the arguments for
quantification made in the previous section. In addition,
however, probabilities are useful, because they provide access
to the substantial apparatus of probability theory, which allows
for consistency checks and rules for updating uncertain
knowledge based on new information. Consistency checks can be simple, like “the probabilities of mutually exclusive and collectively exhaustive events must sum to one,” or complicated, like “the conditional probability of an event A given event B is equal to the joint probability of the two events divided by the marginal probability of event B.” Expert systems may substantially improve our ability to assess the consistency of a complicated set of expert judgments (see [11]). Bayes’ theorem prescribes how probabilities should be revised to take into account new information (see [2]).
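In symbols, with H a hypothesis and E new evidence, the two consistency checks quoted above and Bayes’ theorem read:

P(E1) + P(E2) + ... + P(En) = 1 for mutually exclusive, collectively exhaustive events E1, ..., En;

P(A | B) = P(A and B) / P(B);

P(H | E) = P(E | H) P(H) / P(E).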
OBTAINING PROBABILITY JUDGMENTS FROM EXPERTS
The assessor wants to ensure that the state of knowledge of the expert is accurately reflected in the assessed probabilities. This is done in a long series of questions. Over the last 20 years there has been an accumulation of significant applied experience, a large number of experimental studies, and several formal investigations that provide guidance in the techniques of probability elicitation (for example, see [5], [8], [10], [15]-[18]). Probability elicitation requires experience, skill, art, and science.
The art of assessment is crucial to help the expert feel
comfortable and to adapt the questioning process to facilitate
the expression of knowledge in the manner corresponding to
the expert’s thought process. The assessor is in some sense
both designing and playing a “chess game” with the expert,
with the special property that it is a cooperative rather than
competitive game. The science of assessment comes directly
from the fundamental axioms of probability theory and their
implications. In addition, all assessments must be consistent
with relevant scientific laws (e.g., gravity, laws of thermody-
namics, fluid flow, and chemical reactions) and should
account for any available data.
As an illustration, a recent problem involved estimating the amount of hydrogen that would be produced in a nuclear reactor vessel during a specified accident (see [4]). This would, among other things, depend on the amount of zirconium available to be oxidized, the chemical reaction between steam and zirconium to produce hydrogen and zirconium oxide, the pressure and temperature of the steam, the circulation pattern of the steam in the reactor, and the melting temperature of the alloy that shields the zirconium from the steam. Scientific models and data are generally used to describe each of these aspects individually under well-controlled conditions (e.g., known temperature, pressure, steam flow, and exposed zirconium). However, it is the unique conditions of the specific nuclear accident that are not precisely known and the dynamic, as opposed to equilibrium, conditions that are of interest. Hence, expert judgment is necessary to integrate the data and model calculations with a broader knowledge of the dynamics and sources of uncertainty to provide an appropriate estimate of hydrogen production. The result should be consistent with all the scientific knowledge, and the assessment process should be guided by this knowledge.
Three general principles of engineering are also used in assessing expert judgments: 1) try a reasonable approach to explicate expert judgments, and if this fails, try a different reasonable alternative (e.g., trial and error without destructive testing); 2) use successively better approximations both to converge to and to bound expert judgments from above and below; and 3) use independent approaches to obtain judgments to serve as consistency checks of assessed information. Elaboration on each of these three may be appropriate.
There are many reasonable approaches to express knowl-
edge in terms of probabilities. Based on an understanding of
the technical process, an expert may feel that a quantity of interest may be represented by a particular probability distribution (e.g., lognormal, Poisson). One expert may feel more comfortable providing the median and various fractiles of a cumulative probability distribution, whereas another expert may feel more comfortable ranking the relative likelihoods of various intervals of a quantity that can be normalized to yield probabilities.
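As a minimal illustration, the sketch below (all numerical judgments are hypothetical) converts an assessed median and 90th fractile into the two parameters of a lognormal distribution and then reads back implied fractiles as a check:

import math
from statistics import NormalDist

# Hypothetical assessed judgments for some uncertain quantity
assessed_median = 20.0  # expert's median
assessed_f90 = 45.0     # expert's 90th fractile

# For a lognormal, log(X) is normal: mu = ln(median),
# and sigma follows from the assessed 90th fractile.
mu = math.log(assessed_median)
z90 = NormalDist().inv_cdf(0.90)  # standard normal 90th fractile, about 1.2816
sigma = (math.log(assessed_f90) - mu) / z90

def fractile(p):
    """Fractile of the fitted lognormal at cumulative probability p."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

# Read back implied fractiles; the expert can confirm or revise them.
for p in (0.10, 0.50, 0.90):
    print(p, round(fractile(p), 1))

If the implied 10th fractile strikes the expert as clearly too low or too high, the lognormal assumption itself, and not just its parameters, should be revisited.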
For quantities where the expert is using calculations to aid
thinking, the assessor may wish to decompose the assessment
into steps. For instance, to estimate possible health effects due to air pollution, assessments might first estimate pollutant concentrations conditional on emission levels and then health effects conditional on pollutant concentrations (see [9]).
In fact, these two assessments may rely on different experts
since a meteorologist is needed to assess pollutant concentra-
tions given emissions and a physiologist is required to assess
health effects conditional on pollutant concentrations. The
latter assessment might further be decomposed into a physical
effect (e.g., parts per million of the pollutant in the blood)
given exposure to different concentration levels and health
effects given different physical effects.
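In symbols, with emission level e, pollutant concentration c, physical effect x, and health effect h (treating the levels as discrete for illustration, and assuming each stage depends only on the preceding one), the decomposed assessments chain together as

P(h | e) = sum over c and x of P(h | x) P(x | c) P(c | e),

so the meteorologist supplies P(c | e) while the physiologist supplies P(x | c) and P(h | x).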
The use of successive approximations begins with easy questions to bound a quantity of interest. An example based on an analysis by Keeney and Lamont [6] concerned the probability that a landslide would occur at a particular site due to a magnitude 6 earthquake on a nearby fault. Although data were available from soil testing and analysis at the site and significant experience relating earthquakes to landslides under numerous conditions, there was no direct way to calculate the
probability of a landslide. The assessor first asked the expert, “If a magnitude 6 earthquake occurs on the nearby fault, do you think it is at least 90 percent likely that a landslide would occur?” The response was, “Nowhere near that high.” The assessor then asked, “Is there a one-half chance of a landslide?” The response was, “It is less than that.” This bounds the probability of a landslide at 0.5. The next question was, “Is there at least a 5-percent likelihood of a landslide given a magnitude 6 earthquake?” The response “yes” bounded the probability of a landslide from below with 0.05. The 0.05 response seemed more difficult to make than the 50-50 response, so the assessor asked, “Is the probability less than 0.3?” The response was, “Yes, but you are getting there.” “How about 15 percent, would it be that likely?” The expert said, “It’s at least 15 percent; I think the likelihood of the landslide is about 20 percent.” The assessor still proceeded with, “How does 25 percent sound?” The expert stated, “It could be that high, but I think the 20 percent is a better estimate given my current knowledge of the site conditions.” This leads one to conclude that reasonable bounds on the probability of the landslide are 0.15 and 0.25, with 0.2 being a good estimate. The expert’s reasoning for this judgment was then carefully documented.
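The pattern of this dialogue amounts to successively narrowing an interval around the expert’s judgment. A minimal sketch of that bounding logic (the ask function standing in for the assessor’s questions, and the tolerance, are illustrative assumptions):

def elicit_bounds(ask, lower=0.0, upper=1.0, tolerance=0.1):
    """Narrow bounds on an expert's probability judgment.

    ask(p) poses the question "Is the probability at least p?"
    and returns the expert's answer as True or False.
    """
    while upper - lower > tolerance:
        midpoint = (lower + upper) / 2.0
        if ask(midpoint):
            lower = midpoint  # judgment is at least the midpoint
        else:
            upper = midpoint  # judgment is below the midpoint
    return lower, upper

# Illustration with a stand-in "expert" whose judgment is about 0.2
print(elicit_bounds(lambda p: p <= 0.2))  # brackets 0.2 within the tolerance

In practice the assessor varies the thresholds and the phrasing rather than bisecting mechanically, but the converging-bounds logic is the same.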
A well-designed assessment process has many consistency checks based on a principle analogous to triangulation used in surveying. In this case, if you want the elevation of site B relative to site A, you first directly measure the elevation of B relative to A and then compare the elevation of both sites to the elevation of an intermediate site C. Then, the difference between sites A and B should equal the sum of the differences between A and C and between C and B for consistency. Inconsistencies are resurveyed, often using additional intermediate points, until consistency is achieved. With probabilities, one directly assesses a probability distribution for the desired quantity and then uses decomposed assessment as another approach. Consistency is then checked and reassessments done in the case of significant discrepancies. Consistency checks can also include examining the shapes of probability densities, the probabilities of different intervals on the quantity of interest, or ranking the likelihood of various events and comparing these with implications of the assessments. If multiple lines of reasoning and judgments lead to the same result (i.e., probabilities), you feel more comfortable using the judgment as representing the current state of knowledge.
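A minimal numerical rendering of such a check (all probabilities hypothetical): a directly assessed probability is compared with the value implied by decomposing through an intermediate event, here labeled soil failure.

# Direct assessment of P(landslide | earthquake), as in the example above
direct = 0.20

# Decomposed assessment through a hypothetical intermediate event F = "soil failure"
p_f = 0.25                  # P(F | earthquake)
p_slide_given_f = 0.70      # P(landslide | F)
p_slide_given_not_f = 0.03  # P(landslide | not F)

# Law of total probability gives the implied direct value
implied = p_f * p_slide_given_f + (1.0 - p_f) * p_slide_given_not_f

# A significant discrepancy triggers reassessment, as with resurveying
tolerance = 0.05
if abs(direct - implied) > tolerance:
    print("Reassess: direct", direct, "vs implied", round(implied, 4))
else:
    print("Consistent within tolerance; implied value:", round(implied, 4))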
DOCUMENTATION OF EXPERT JUDGMENTS
An extremely important element of any elicitation of expert
judgment is the accompanying documentation. It is desirable
to make the reasoning on which explicit expert judgments are
based as clear as possible. Any assumptions or data used, whether general or specific, should be listed along with the logic supporting their relevance. For example, if an expert uses data on small earthquakes to infer the relative frequency of the occurrence of large earthquakes, the reasoning should be stated. In short, a quality documentation of expert judgments should be done for the same reasons, should answer the same questions, and should lend credibility to the work exactly as does a quality documentation of any significant technical or scientific work.
There are two other advantages of explicating expert
judgments accompanied by a quality documentation. First, this
process enhances the thoroughness and ease with which peer
review can be conducted. And of course it should be clear that
peer review of expert judgments is as important as peer review
of other parts of complex technical analyses. Second, with
expert judgments made explicit and reasoning stated, it is
easier for both the experts themselves and appraisers to
identify both inadvertent and intentional biases in judgments.
This should have a positive influence on the quality of the
judgments produced.
USES AND MISUSES OF EXPERT JUDGMENTS
As is the case with all applied technical work, expert
judgments can be misinterpreted, misrepresented, and mis-
used. To reduce the likelihood of such incidents, it is
important to correctly interpret and use expert assessments.
Expert judgments are not equivalent to technical calculations based on universally accepted scientific laws or to the availability of extensive data on precisely the quantities of interest. Expert judgments should be made explicit for problems where neither of the above is available. Expert assessments in the form of probabilities represent a snapshot at a given time of the state of knowledge of the individual expert about a given item of interest. The probabilities afford the opportunity to express both what the expert knows and does not know. Indeed, by being explicit about expert judgments and documenting the reasoning for them, it is possible to design experiments that would best increase our knowledge and understanding about a complex problem.
The main misuses of explicit expert judgments stem from
misrepresentation of or overreliance on them. Expert judg-
ments often have significant uncertainties, and it is critical to
include these when reporting expert judgments. For example,
just reporting an average without a range or a probability distribution for a quantity of interest gives the illusion of too much precision and of objectivity. Expert judgments are sometimes inappropriately used to avoid gathering additional
management or scientific information. These judgments
should complement information that should be gathered, not
substitute for it. Sometimes decision makers with a predis-
posed desire to select a given alternative seek experts whose
views support or justify their position. This is clearly a misuse
of judgments. However, it is worth noting that with the
judgments made explicit, it is easier to identify weaknesses in
the reasoning behind a decision.
Since science and knowledge are constantly changing, it is natural that the state of knowledge of an individual changes, so his or her assessments will probably be different in the future than they are today. Also, any expert has constraints on the time available to study and assimilate everything about an item of interest. And a particular fact or data set may be overlooked during an assessment. Expert assessments are designed to be updated to account for such situations. Indeed, being explicit both reduces the likelihood of omitting important information from one’s judgments and enhances the likelihood that “shortcomings” in reasoning are detected. The need to change expert assessments is not a failure of the experts, the assessments, or the assessment process. Rather, it is a natural and desired feature to deal with the reality of science, knowledge, and complex problems.
As a result of expert assessments, someone or some organization may wish to “demonstrate that some assessments could not be correct.” For example, suppose an organization felt the range for possible hydrogen production in a nuclear reactor during a specified accident was estimated “too high” by the experts. If this led to additional experimentation that clearly demonstrated their position, that would be a success for the assessments and the explicit assessment process. One intent is to motivate the advancement and improved communication of science.
ASSESSMENTS WITH GROUPS OF EXPERTS
Since different experts may have different information or
different interpretations of information, they can naturally
have different judgments. For new and complex problems, a
diversity of opinions might be expected. If such differences
exist, this would be identified in expert assessments. Certainly
for complex problems it is useful to know the range of expert
interpretation that exists, and the reasoning behind any
differences.
A large study of the risks of nuclear power plants recently used multiple experts on many issues to understand and document the range of expert opinion on this important problem (see [12]).
When the judgments of different experts conflict, the judgments of the different experts can be used in analogous analyses of the problem to appraise if the implications for decision making are different. If so, perhaps the basis of these particular judgments should be subjected to additional study (e.g., experimentation) or analysis. The sources of information, logic, and interpretations of the experts should be appraised. It is often useful at this stage to have the experts interact and share knowledge before revising their judgments (see [8]).
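One simple way to appraise whether conflicting judgments matter for decision making (the probabilities and costs below are hypothetical) is to rerun the same decision calculation under each expert’s judgment and compare the preferred alternatives:

# Hypothetical probabilities of a damaging event from three experts
expert_probabilities = {"expert_1": 0.15, "expert_2": 0.20, "expert_3": 0.35}

def expected_costs(p):
    """Expected cost of each alternative given event probability p."""
    return {
        "mitigate": 10.0,         # fixed cost of mitigation
        "accept_risk": 60.0 * p,  # expected damage cost if unmitigated
    }

preferred = {}
for expert, p in expert_probabilities.items():
    costs = expected_costs(p)
    preferred[expert] = min(costs, key=costs.get)
    print(expert, preferred[expert], costs)

# If the preferred alternative differs across experts, the underlying
# judgments warrant additional study before the decision is made.
print("Implications differ:", len(set(preferred.values())) > 1)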
SUMMARY
Expert judgment will always be a key ingredient of technical
analysis. We know much about how to elicit and use it, but we
still have much to learn. More research is needed on
qualitative judgments such as the relevance of particular
variables to a model of some complex phenomenon. Proce-
dures to improve our ability to identify assumptions on which
judgments rely would be helpful. Experiments to learn how
to improve the quality of judgments elicited over time (e.g.,
interest rates one year from now) from individual experts are
also a priority. Additional practical experience in organizing
groups of experts to appropriately share knowledge and
improve the resulting quality of judgments would also be very
useful.
The value of expert assessments to the study of a complex problem should be appraised in terms of their usefulness for communication, learning, understanding, and decision making. To do this, one must understand the interpretation of
expert assessments and their proper uses. Expert assessments
are meant to be a complement to and a motivation for scientific studies and analysis, not a substitute for either. With this
orientation, the potential value added to an analysis of a
complex problem by explicit expert assessment should be
substantial.
REFERENCES

[1] T. Bayes, “Essay toward solving a problem in the doctrine of chances,” Biometrika, vol. 45, pp. 293-315, 1958 (original 1763).
[2] J. R. Benjamin and C. A. Cornell, Probability, Statistics, and Decision for Civil Engineers. New York: McGraw-Hill, 1970.
[3] B. de Finetti, “La prévision: Ses lois logiques, ses sources subjectives,” Annales de l’Institut Henri Poincaré, vol. 7, pp. 1-68, 1937.
[4] R. John, R. L. Keeney, and D. von Winterfeldt, “Probabilistic estimates of complex technical phenomena: Estimating hydrogen production during severe nuclear power plant accidents,” presented at the Nat. Meet. Operations Res. Soc. Amer., Vancouver, B.C., May 8-10, 1989.
[5] R. L. Keeney, Siting Energy Facilities. New York: Academic Press, 1980.
[6] R. L. Keeney and A. Lamont, “A probabilistic analysis of landslide potential,” in Proc. 2nd U.S. Nat. Conf. Earthquake Eng., Stanford Univ. (Stanford, CA), Aug. 22-24, 1979.
[7] H. E. Kyburg, Jr., and H. E. Smokler, Eds., Studies in Subjective Probability. New York: Wiley, 1964.
[8] M. L. Merkhofer, “Quantifying judgmental uncertainty: Methodology, experiences, and insights,” IEEE Trans. Syst., Man, Cybern., vol. SMC-17, pp. 741-752, 1987.
[9] M. G. Morgan, S. C. Morris, M. Henrion, D. A. L. Amaral, and W. R. Rish, “Technical uncertainty in quantitative policy analysis: A sulfur air pollution example,” Risk Anal., vol. 4, pp. 201-216, 1984.
[10] A. Mosleh, V. M. Bier, and G. Apostolakis, “A critique of current practice for the use of expert opinions in probabilistic assessment,” Reliab. Eng. Syst. Safety, vol. 20, pp. 63-85, 1988.
[11] J. L. Mumpower, L. D. Phillips, O. Renn, and V. R. R. Uppuluri, Eds., Expert Judgment and Expert Systems. Heidelberg, W. Germany: Springer, 1987.
[12] N. R. Ortiz, T. A. Wheeler, M. A. Meyer, and R. L. Keeney, “Use of expert judgment in NUREG-1150,” presented at the Sixteenth Water Reactor Safety Inform. Meet., Washington, DC, Oct. 24-27, 1988.
[13] F. P. Ramsey, “Truth and probability,” in The Foundations of Mathematics and Other Logical Essays, R. B. Braithwaite, Ed. New York: Harcourt, 1931.
[14] L. J. Savage, The Foundations of Statistics. New York: Wiley, 1954.
[15] C. S. Spetzler and C. A. Staël von Holstein, “Probability encoding in decision analysis,” Management Sci., vol. 22, pp. 340-352, 1975.
[16] T. S. Wallsten and D. V. Budescu, “Encoding subjective probabilities: A psychological and psychometric review,” Management Sci., vol. 29, pp. 151-173, 1983.
[17] R. L. Winkler, “The quantification of judgment: Some methodological suggestions,” J. Amer. Stat. Ass., vol. 62, pp. 1105-1120, 1967.
[18] D. von Winterfeldt and W. Edwards, Decision Analysis and Behavioral Research. New York: Cambridge University Press, 1986.