Publisher: Institute for Operations Research and the Management Sciences (INFORMS)
INFORMS is located in Maryland, USA
Operations Research
Using Experts’ Noisy Quantile Judgments to Quantify Risks: Theory and Application to Agribusiness
To cite this article: Saurabh Bansal, Genaro J. Gutierrez, John R. Keiser (2017) Using Experts’ Noisy Quantile Judgments to Quantify Risks: Theory and Application to Agribusiness. Operations Research.
Published online in Articles in Advance 24 Jul 2017
Copyright © 2017, INFORMS
Articles in Advance, pp. 1–16. ISSN 0030-364X (print), ISSN 1526-5463 (online)
Using Experts’ Noisy Quantile Judgments to Quantify Risks:
Theory and Application to Agribusiness
Saurabh Bansal,a Genaro J. Gutierrez,b John R. Keiserc
aThe Pennsylvania State University, University Park, Pennsylvania 16802; bThe University of Texas at Austin, Austin, Texas 78712; cDow AgroSciences, Marshalltown, Iowa 50158
Contact: (SB); (GJG); (JRK)
Received: July 31, 2015
Revised: September 17, 2015; July 6, 2016;
November 2, 2016
Accepted: December 15, 2016
Published Online in Articles in Advance:
July 24, 2017
Subject Classifications: decision analysis: risk;
forecasting applications; industries:
Area of Review: OR Practice
Copyright: ©2017 INFORMS
Abstract. Motivated by a unique agribusiness setting, this paper develops an optimization-based approach to estimate the mean and standard deviation of probability distributions from noisy quantile judgments provided by experts. The approach estimates the mean and standard deviation as weighted linear combinations of quantile judgments, where the weights are explicit functions of the expert’s judgmental errors. The approach is analytically tractable and provides the flexibility to elicit any set of quantiles from an expert. It also establishes that using an expert’s quantile judgments to deduce the distribution parameters is equivalent to collecting data with a specific sample size, and it enables combining the expert’s judgments with those of other experts. It further shows analytically that the weights for the mean add up to one and the weights for the standard deviation add up to zero; these properties have been observed numerically in the literature over the last 30 years, but without a systematic explanation. The theory has been in use at Dow AgroSciences for two years for making an annual decision worth $800 million. The use of the approach has resulted in the following monetary benefits: (i) the firm’s annual production investment has been reduced by 6%–7% and (ii) profit has increased by 2%–3%. We discuss the implementation at the firm, and provide practical guidelines for using expert judgment for operational uncertainties in industrial settings.
This research was supported, in part, by grants from the Center for Supply Chain Research
at Penn State, and the Supply Chain Management Center of the McCombs School of Business at
The University of Texas at Austin.
Supplemental Material:
The online appendix is available at
Keywords: expert judgments; quantile judgments; estimating distributions; yield uncertainty
1. Introduction and Industry Motivation
1.1. Problem Context
Understanding and quantifying production-related
uncertainties is critical for decision making in busi-
nesses. The probability distributions for these uncer-
tainties are usually estimated using historical data
obtained during repetitive manufacturing. But these
data may not be available when firms frequently
launch new products in the market, e.g., at firms in the semiconductor and agribusiness industries. In the
absence of historical data, these firms turn to domain
experts for obtaining subjective probability distribu-
tions (e.g., Baker and Solak 2014).
Prior literature (e.g., O’Hagan and Oakley 2004) ad-
vises that in these situations, one should avoid ob-
taining direct estimates of the standard deviation from
domain experts as this quantity is not intuitive to esti-
mate. This literature recommends obtaining experts’
input in the form of judgments for specific discrete
points on distributions, for example, judgments for spe-
cific quantiles, but also cautions that these judgments
are subject to judgmental errors (Ravinder et al. 1988).
However, a systematic approach that uses these judg-
ments to deduce the mean and standard deviation of
probability distributions, while explicitly modeling and
accounting for experts’ judgmental errors, is not yet estab-
lished. In this paper, we accomplish this task. Specifi-
cally, we develop an approach to deduce the mean and
standard deviation using judgments provided by one
or multiple experts for distribution quantiles (or frac-
tiles). This approach is analytically tractable, provides
the flexibility of using judgments for any set of quantiles
that an expert is willing to provide, and is amenable to
combining an expert’s quantile judgments with those
of other experts. The approach also establishes a novel
equivalence between the quality of an expert’s judg-
ments and the size of an experimental sample that is
equally informative about the distribution.
This approach was developed to manage a dynamic
new product development situation at Dow Agro-
Sciences (DAS) for an annual decision worth $800 million. Analysis of the firm’s historical decisions shows that the use of the approach has resulted in the following monetary benefits: (i) the firm’s annual production investment has been reduced by 6%–7% and (ii) profit has increased by 2%–3%. We also discuss the implementation of the approach at the firm, and practical guidelines for seeking expert judgment for operational uncertainties in industrial situations.
The rest of this paper is organized as follows. Section 2 provides an overview of our approach and the contributions to the existing literature. Sections 3 and 4 discuss a model for deducing the mean and standard deviation from quantile judgments, and derive the solution and its structural properties. Section 5 discusses an equivalence of expertise with randomly collected data and results for combining judgments from multiple experts. Section 6 describes the implementation of the approach at DAS and the quantification of benefits from using the approach. Section 7 concludes with insights for practice.
2. Overview of Approach and Our
Contributions to the Existing Literature
2.1. Overview
We develop an optimization-based approach to esti-
mate the mean and standard deviation from quan-
tile judgments provided by an expert; specifically,
we obtain the minimum variance estimates of the
mean and standard deviation of yield distributions as
weighted averages of the quantile judgments provided
by the expert, subject to the constraint that the esti-
mates are unbiased. The model has two inputs. The
first input is an identification of the quantiles for which
the expert will provide judgments. For example, at
DAS, the expert chose to provide judgments for the
10th, 50th, and 75th quantiles because his software was
set to show these quantiles during data analysis, and he
was accustomed to thinking about these quantiles. The
second input is a quantification of the noise present
in the expert’s judgments for the quantiles. This quan-
tification is done separately by comparing the expert’s
judgments for the quantiles with the true values for
a number of distributions constructed using historical
data. We discuss this empirical estimation at DAS in
Section 6.
The solution to the optimization model assigns two
sets of weights to the quantile judgments (e.g., for
the 10th, 50th, and 75th quantiles). The first set of
weights is for estimating the mean as a weighted aver-
age of quantile judgments. The second set of weights
is used similarly to estimate the standard deviation.
The weights are specific to the noise quantified in the second input discussed above for the expert’s quantile judgments.
2.2. Contributions to Literature
A large body of literature considers situations in which
experts provide their assessments for events with
binary outcomes (e.g., Ayvaci et al. 2017). In contrast,
we focus on situations where continuous distributions
need to be specified over the outcomes, and expert
judgment is sought to estimate these probability distri-
butions. Two streams of literature are relevant to our
focus: (i) models of judgmental errors and (ii) practice-
driven literature on the use of expert judgments.
2.2.1. Model-Driven Theory of Judgments.
The first stream of related literature is on model-driven
theory of judgmental errors. The existing literature on
expert judgments acknowledges the potential severity
of judgmental errors and focuses on developing elici-
tation guidelines for reducing judgmental errors (e.g.,
Koehler et al. 2002). In contrast, articles on moment
estimation from quantile judgments have explored the
problem of deducing moments from the median and
two additional symmetric quantiles (typically, the 5th
and 95th) or four additional symmetric quantiles (typ-
ically, the 5th, 25th, 75th, and 95th), numerically with a
key assumption: no judgmental errors are present. Pear-
son and Tukey (1965) and Keefer and Bodily (1983)
follow this paradigm. But no prior articles consider
the problem where subjective quantile judgments from
multiple experts need to be combined to deduce the
mean and standard deviation or the case where an
expert provides judgments for an arbitrary set of quan-
tiles that are different from the standard ones men-
tioned above. We contribute to this literature by devel-
oping a tractable solution approach to this problem.
A salient feature of this approach is that one can use any set of quantile judgments that an expert can provide (beyond the 5th, 25th, 75th, and 95th discussed in the prior literature) to estimate the mean and standard deviation. This feature is useful for practice since
an expert may not be willing to provide quantile judg-
ments for specific symmetric quantiles. For example,
the expert at DAS was habituated to seeing the 10th,
50th, and 75th quantiles for historical data on his soft-
ware and was willing to estimate only these quantiles.
The approach also provides the following struc-
tural insights. First, regardless of the magnitude of an
expert’s judgmental errors and the quantiles elicited,
the variance-minimizing weights for the estimation
of the mean and standard deviation add up to 1
and 0, respectively. This structural property explains
the numerical findings in Pearson and Tukey (1965),
Lau et al. (1999), among others, who all assume that
judgmental errors do not exist. Second, our approach
establishes a new quantification of expertise: it speci-
fies the size of a random sample that would provide
estimates of mean and standard deviation with the
same precision as that of the estimates obtained using
the expert’s judgments for quantiles. This equivalence
enables an objective comparison of experts. Finally, in
our approach, the optimal weights provide point esti-
mates and variability in the estimates for the moments.
This quantification of variability of moment estimates
enables us to combine quantile judgments from multi-
ple experts in a rational and consistent manner.
Downloaded from by [] on 25 July 2017, at 06:08 . For personal use only, all rights reserved.
Bansal et al.: Using Experts’ Judgments to Quantify Risks
Operations Research, Articles in Advance, pp. 1–16, ©2017 INFORMS 3
Prior literature, e.g., O’Hagan (1998), Stevens and
O’Hagan (2002), discusses the role of expert judgments
in the absence of data for constructing prior distri-
butions; when data become available, posterior dis-
tributions for parameters are obtained using Bayesian
updating. The literature discusses two cases. When
conjugate priors are used, the posterior distributions
are obtained in closed form. When conjugate priors cannot be used, numerical approaches must be used
to obtain posterior distributions. We make two con-
tributions to this literature. First, we develop a novel
approach to obtain the prior distributions on the
parameters for the mean and standard deviations of
distributions using expert judgments for quantiles.
Second, we show that the joint prior distributions
are correlated and are not conjugate priors, and we
develop a copula-based approach to obtain the poste-
rior distributions.
2.2.2. Practice-Driven Tools and Insights. Our contri-
butions to practice are as follows. We provide a step-by-
step approach to quantifying an expert’s judgmental
errors and then discuss some practical issues observed
during this quantification at DAS. Specifically, we dis-
cuss a bootstrapping approach to separate judgmental
errors from sampling errors during the error quantifi-
cation process. Then, we show that the information
provided by the expert is equivalent to five to six years
of data collection at DAS using our approach. Such
quantification has not been reported in practice litera-
ture before. We also report that the expert at DAS was
reluctant to provide judgments for extreme quantiles
because of his inability to distinguish between random
variations and systematic reasons as causes of extreme
outcomes. This observation suggests that contextual
reasons and experts’ preferences can lead to elicitation
of quantiles that are different from the standard val-
ues (median and two or four symmetric quantiles); our
approach is especially useful in such situations.
3. Analytical Model
We consider a real-valued continuous random variable $X$, whose distribution is to be estimated. The probability density function (PDF) of $X$ is denoted as $\phi(x;\theta)$, where $\theta \equiv [\theta_1, \theta_2]^t$ are the parameters of the PDF, and $\mu_1$, $\mu_2$ denote the mean and standard deviation, respectively. Similar to Lindley (1987) and O’Hagan (2006), we assume that the distribution family is known from the application context, but the parameters are not known. This framework is especially relevant to a number of operations contexts in which the parametric family of probability distributions is known from the historical data available or from formal models. The cumulative distribution function (CDF) of $X$ is denoted as $\Phi$. A source of information such as an expert provides quantile judgments $\hat{x}_i$ corresponding to probability CDF values $p_i$ for $i = 1, 2, \ldots, m$. In vector notation, we denote the quantile judgments as $\hat{x} \equiv [\hat{x}_1, \ldots, \hat{x}_m]^t$ and the probability values as $p \equiv [p_1, \ldots, p_m]^t$.
We seek to develop an approach to deduce $\mu_1, \mu_2$ from the quantile judgments $\hat{x}$. From theoretical and application perspectives, it is desirable that the approach’s formulation provides a unique solution to the problem, preferably in closed form, and is amenable to sensitivity analysis. Prior literature in this domain (e.g., Keefer and Bodily 1983, Johnson 1998) also suggests that for ease of implementation, the approach should be consistent with moment matching and with other probability discretization practices in use, e.g., the program evaluation and review technique (PERT) for project management. Our approach accomplishes these objectives and additionally provides a quantification of the quality of the expert’s judgments as an equivalent sample size.
3.1. Preliminaries for Expert Judgments
We assume that the quantile judgments are obtained using an underlying process or mental model (we discuss the mental model used by the expert at DAS in Figure 1, Section 6.1), which is error prone but is used consistently for generating quantile judgments. This assumption means that the expert’s judgmental errors are stable during elicitation. We further assume an additive error structure that is used frequently in the literature (e.g., Ravinder et al. 1988): the quantile judgment $\hat{x}_i$ is composed of a true value $x_i$ and an additive error $e_i$:

$\hat{x}_i = x_i + e_i.$ (1)

In vector notation, the error model is $\hat{x} = x + e$. Consistent with this literature, we assume that the error $e_i$ has two parts: a systematic component or bias $\delta_i$ and a random component or noise $\epsilon_i$, such that $e_i = \delta_i + \epsilon_i$ and $E[\epsilon_i] = 0$. The bias $\delta_i$ captures the average deviation of the judgments for quantile $i$ from the true value. The noise $\epsilon_i$ captures the spread in the error due to random variations. In vector notation, the bias and residual variation are denoted as $\delta$ and $\epsilon$, respectively. The noise is quantified in the variance-covariance matrix $\Omega$. The diagonal elements of this matrix, $\omega_{ii} \equiv \mathrm{Var}(\epsilon_i)$, denote the variance in the unbiased judgment of quantile $i$. The off-diagonal elements are the covariances of unbiased judgments, $\omega_{ij} \equiv \mathrm{Cov}(\epsilon_i, \epsilon_j)$. We discuss the empirical estimation of $\delta$ and $\Omega$ separately in Section 6, and assume for now that these quantities are available.

From the biased judgments $\hat{x}$, the unbiased judgments $\hat{q}$ are obtained by removing the bias as $\hat{q} = \hat{x} - \delta$. Substituting this relationship into $\hat{x} = x + e$, we obtain

$\hat{q} = x + \epsilon.$ (2)

The matrix $\Omega$ for $\epsilon$ is used as an input in the optimization model discussed next.
3.2. Optimization Problem
We seek to obtain the estimates of the mean $\hat{\mu}_1$ and standard deviation $\hat{\mu}_2$ as pooled or weighted linear functions of the debiased quantile judgments, as $\hat{\mu}_k = w_k^t \hat{q}$; $k = 1, 2$, with the weights $w_1 \equiv [w_{11}, w_{12}, \ldots, w_{1m}]^t$ and $w_2 \equiv [w_{21}, w_{22}, \ldots, w_{2m}]^t$. Since the unbiased judgments $\hat{q}$ are subject to the noise $\epsilon$, the estimates $\hat{\mu}_k$ have variances $\mathrm{Var}[w_k^t \hat{q}]$. Smaller values of the variances of these estimates are desirable, as they imply that the estimates are more precise. To this end, it is desirable to select weights $w_k$; $k = 1, 2$ that lead to a small variance in the estimates $\hat{\mu}_k$. We first restate the variance of the estimates $\hat{\mu}_k$ in terms of the weights as

$\mathrm{Var}[w_k^t \hat{q}] = E[(w_k^t(x + \epsilon) - E[w_k^t(x + \epsilon)])^2] = w_k^t \Omega w_k.$

Prior literature (e.g., Bates and Granger 1969, Granger 1980) shows that only minimizing this variance is not informative, as it is minimized by setting $w_k = 0$, and the resultant weighted linear estimate is then always equal to $\hat{\mu}_k = 0$ for all judgments $\hat{q}$. This literature suggests adding constraints to make statistical estimates responsive to forecasts or judgments. Our focus will be on a specific class of such constraints. We seek variance-minimizing weights such that the obtained estimates $w_k^t \hat{q}$ are unbiased, i.e., $E[w_k^t \hat{q}] = \mu_k$, leading to the following optimization formulations for $k = 1, 2$:

$\min_{w_k} \ \mathrm{Var}[w_k^t \hat{q}] \quad \text{s.t.} \ E[w_k^t \hat{q}] = \mu_k.$ (5)

Problem (5) consists of finding the weights $w_k$ that lead to the minimum variance unbiased estimates of $\mu_k$. In the next section, we determine these weights for location-scale distributions using structural properties of these distributions. The focus on location-scale distributions is motivated by their widespread application in numerous operations management contexts (see Kelton and Law 2006 for a list of these applications) as well as their specific application at DAS, where the in-house statistics team has shown using existing data that yields are normally distributed. This analysis is in Section 6.2.
4. Solution: Weights for
Quantile Judgments
In Section 4.1 we specialize problem (5) for distributions of a location-scale family and obtain the optimal weights for quantile judgments; the weights’ structural properties are derived in Section 4.2.
4.1. Reformulation and Solution for Distributions
of a Location-Scale Family
We now assume that $X$ is a location-scale random variable with location and scale parameters $\theta_1 \in \mathbb{R}$ and $\theta_2 \in \mathbb{R}_{++}$, respectively, and transform the constraint $E[w_k^t \hat{q}] = \mu_k$ in formulation (5) using two properties of location-scale distributions. The first property enables us to rewrite the left-hand side (LHS) of this constraint as a function of $\theta_1, \theta_2$. If $X$ is a location-scale random variable with PDF $\phi(\cdot\,;\theta)$, then a specific value $x$ that corresponds to probability $p$ can be expressed as $x = \theta_1 + \theta_2 z$, where $z$ denotes the value of the standardized random variable with the standardized PDF $\phi(\cdot\,;[0,1]^t)$ for probability $p$ (Casella and Berger 2002, p. 116). We write this expression in vector form as

$x = Z\theta,$ (6)

where $Z$ is the $m \times 2$ matrix formed as $Z = [\mathbf{1}, z]$, $z$ is the column vector of standardized quantiles corresponding to the probabilities $p$, and $\mathbf{1}$ is a column vector of ones. Substituting (6) into the LHS of the error model in (2), $\hat{q} = x + \epsilon$, it follows that

$E[w_k^t \hat{q}] = E[w_k^t(x + \epsilon)] = w_k^t Z\theta.$ (7)

The second property, formalized in Lemma 1 below, enables us to rewrite the right-hand side (RHS) of the unbiasedness constraint $E[w_k^t \hat{q}] = \mu_k$ as a function of $\theta_1, \theta_2$.
Lemma 1 (Characterization of Location-Scale Moments). If $X$ is a location-scale random variable with parameters $\theta \equiv [\theta_1, \theta_2]^t$ with finite $j$th moments for $j = 1, 2, \ldots$, then
(a) the raw moments $E[X^j]$ are given by $E[X^j] = \sum_{i=0}^{j} \binom{j}{i} \kappa_i \theta_1^{j-i} \theta_2^{i}$;
(b) the central moments are given by $E[(X - \mu_1)^j] = \theta_2^j \sum_{i=0}^{j} \binom{j}{i} \kappa_i (-\kappa_1)^{j-i}$,
where the constants $\kappa_i$ are $\kappa_0 \equiv 1$ and $\kappa_j \equiv E[Z^j]$ for $j = 1, 2, \ldots$.

The proof is in Appendix A1. The values of $\kappa_j$ are documented in the literature for location-scale distributions (see, e.g., Johnson et al. 1994). For example, for a normal distribution, we have $(\kappa_0, \kappa_1, \kappa_2) = (1, 0, 1)$.
It follows from part (a) of Lemma 1 that $\mu_1 = E[X] = \theta_1 + \kappa_1\theta_2 = [1, \kappa_1]\theta$. It follows from part (b) of the lemma that the variance of $X$ is equal to $E[(X - \mu_1)^2] = \theta_2^2(\kappa_2 - \kappa_1^2)$, and therefore the standard deviation is equal to $\mu_2 = \theta_2\sqrt{\kappa_2 - \kappa_1^2} = [0, \sqrt{\kappa_2 - \kappa_1^2}\,]\theta$. We write both relationships in vector notation as

$\mu_k = a_k^t \theta; \quad k = 1, 2,$ (8)

with $a_1^t \equiv [1, \kappa_1]$ and $a_2^t \equiv [0, \sqrt{\kappa_2 - \kappa_1^2}\,]$.
Substituting (7) and (8) into the LHS and RHS of the constraint $E[w_k^t \hat{q}] = \mu_k$, respectively, we obtain the following condition on the weights $w_k$ for the estimate $w_k^t \hat{q}$ to be unbiased. This condition will help us solve the problem in a tractable form.
Proposition 1. If $X$ is a random variable with a location-scale distribution, the weighted linear estimator $w_k^t \hat{q}$ is unbiased for $\mu_k$ if and only if the weights $w_k$ satisfy $Z^t w_k = a_k$; $k = 1, 2$.

Proof. By definition, the estimate $w_k^t \hat{q}$ is unbiased if and only if $E[w_k^t \hat{q}] = \mu_k$. Substituting (7) and (8) into the LHS and RHS of the constraint, it follows that the estimate is unbiased if and only if $w_k^t Z\theta = a_k^t \theta$ for all values of $\theta_1$ and $\theta_2$. It follows that the estimator is unbiased if and only if $w_k^t Z = a_k^t$, i.e., $Z^t w_k = a_k$; $k = 1, 2$. □

The implication of the iff in Proposition 1 is that we can replace the constraint $E[w_k^t \hat{q}] = \mu_k$ in formulation (5) with the condition on the weights, $Z^t w_k = a_k$; $k = 1, 2$. After this substitution, we obtain the formulation for $k = 1, 2$:

$\min_{w_k} \ w_k^t \Omega w_k \quad \text{s.t.} \ Z^t w_k = a_k.$ (10)
The matrix $\Omega$ is a covariance matrix, and therefore it must be positive semidefinite. It follows that problem (10) is a convex quadratic problem, and its solution is obtained by solving a Lagrange formulation of the problem. The next result establishes this unique solution.

Theorem 1. The weights that solve problem (10) are given by $w_k^* = \Omega^{-1}Z(Z^t\Omega^{-1}Z)^{-1}a_k$; $k = 1, 2$.
The proof is in Appendix A2. The conspicuous feature of the optimal weights $w_k^*$ is that they are explicit functions of the expert’s precision encoded in $\Omega$. Therefore, a change in an expert’s precision in providing quantile judgments will modify $\Omega$, which, in turn, will change the optimal weights $w_k^*$ for the quantile judgments. Finally, we note that the variance of the estimates $\mathrm{Var}[\hat{\mu}_k]$ at the optimal weights is equal to $\mathrm{Var}[\hat{\mu}_k] = w_k^{*t}\Omega w_k^*$, which simplifies to $\mathrm{Var}[\hat{\mu}_k] = a_k^t(Z^t\Omega^{-1}Z)^{-1}a_k$, establishing a direct link between the variance in the estimates $\hat{\mu}_k$ and the expert-specific $\Omega$.
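To make Theorem 1 concrete, the following sketch computes the optimal weights $w_k^* = \Omega^{-1}Z(Z^t\Omega^{-1}Z)^{-1}a_k$ for a normal distribution elicited at the 10th, 50th, and 75th quantiles, the set used at DAS; the noise matrix $\Omega$ here is hypothetical:

```python
import numpy as np
from statistics import NormalDist

# Elicited probabilities and their standardized (normal) quantiles
p = [0.10, 0.50, 0.75]
z = np.array([NormalDist().inv_cdf(v) for v in p])
Z = np.column_stack([np.ones(len(p)), z])   # m x 2 matrix Z = [1, z]

# Hypothetical noise covariance of the expert's quantile judgments
Omega = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 1.0],
                  [0.5, 1.0, 5.0]])

# For the normal family, kappa1 = 0 and kappa2 = 1, so
# a1 = [1, 0] (mean) and a2 = [0, 1] (standard deviation)
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

Oinv = np.linalg.inv(Omega)
M = np.linalg.inv(Z.T @ Oinv @ Z)           # (Z^t Omega^-1 Z)^-1
w1 = Oinv @ Z @ M @ a1                      # weights for the mean
w2 = Oinv @ Z @ M @ a2                      # weights for the std. deviation

# Variance of each estimate at the optimal weights: a_k^t M a_k
var_mu1, var_mu2 = a1 @ M @ a1, a2 @ M @ a2
print(w1.sum(), w2.sum())                   # sums to 1 and 0, respectively
```

The weight sums illustrate the structural property formalized in Proposition 2 below, and `var_mu1`, `var_mu2` correspond to $a_k^t(Z^t\Omega^{-1}Z)^{-1}a_k$.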
4.2. Structural Properties and Generalization of
Results Available in Literature
The development thus far provides new generalizations of and insights into the existing literature. First, in our approach, the expert can provide judgments for any set of quantiles that he is comfortable estimating; i.e., he is no longer restricted to providing his judgments for the 5th, 25th, 50th, 75th, and 95th quantiles as specified in extant literature such as Lau et al. (1996). This flexibility is useful since we no longer need to convince an expert to provide judgments for these specific quantiles and instead can focus on understanding why the expert believes that he can provide better judgments for his chosen quantiles. We discuss one such example in Section 6.2. The second generalization of our approach is that it provides an analytical foundation for a numerical property observed consistently in the existing literature, namely that the weights add up to a constant, as follows.

Proposition 2. The optimal weights for quantiles add up to constants. Specifically, $\sum_{i=1}^{m} w_{1i}^* = 1$ and $\sum_{i=1}^{m} w_{2i}^* = 0$.
The proof is in Appendix A3. This result is true regardless of the numerical values of $\Omega$; therefore, it holds true even when the judgmental errors are arbitrarily small, e.g., when $\Omega = \lambda I$ with $\lambda \to 0$. A number of prior articles (Pearson and Tukey 1965, Perry and Greig 1975, Keefer and Bodily 1983, Johnson 1998) numerically discuss the limiting case when the errors are absent. They select specific numerical test cases of means and standard deviations of a distribution and obtain the 5th, 50th, and 95th quantiles or other specific symmetric quantiles for these cases. Then, they consider various sets of candidate weights. For each set of weights, they estimate the means of all test cases as weighted linear combinations of the quantile values. Finally, they identify the set of weights that results in the smallest squared deviations between the true and the estimated means over all cases. A similar analysis provides the weights to obtain the standard deviation. The weights recommended in this literature add up to 1 and 0 for the mean and standard deviation, respectively. Proposition 2 establishes that the additivity properties observed numerically in these articles are structural properties of probability distributions, and hold true for any magnitude of judgmental errors.
Third, these additivity properties are also shared by the weights assigned in the project management technique PERT to the estimates for the optimistic, most likely, and pessimistic scenarios. The weights for these estimates are (1/6, 4/6, 1/6), respectively, for the estimation of the mean, adding up to one, and (−1/6, 0, 1/6) for the estimation of the standard deviation, adding up to zero. Fourth, one can show using straightforward algebra that our approach automatically assigns a lower weight to a quantile judgment that has large noise. In situations where an expert provides a number of quantile judgments, this feature is useful in identifying which quantile judgments have large noise and are therefore not useful for the estimation of the moments; the weights for these quantile judgments will be negligible.
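The downweighting of noisy judgments can be seen numerically: inflating the noise variance of one quantile judgment shrinks its optimal weight for the mean. All values below are hypothetical, and the normal family is assumed, so $a_1 = [1, 0]$:

```python
import numpy as np
from statistics import NormalDist

def mean_weights(p, Omega):
    """Variance-minimizing weights for the mean: Omega^-1 Z (Z^t Omega^-1 Z)^-1 a1."""
    z = np.array([NormalDist().inv_cdf(v) for v in p])
    Z = np.column_stack([np.ones(len(p)), z])
    Oinv = np.linalg.inv(Omega)
    return Oinv @ Z @ np.linalg.inv(Z.T @ Oinv @ Z) @ np.array([1.0, 0.0])

p = [0.10, 0.50, 0.75]
w_base = mean_weights(p, np.diag([1.0, 1.0, 1.0]))
w_noisy = mean_weights(p, np.diag([1.0, 1.0, 25.0]))  # 75th quantile much noisier

# The noisy 75th-quantile judgment receives a much smaller weight,
# while both sets of weights still sum to one
print(w_base[2], w_noisy[2])
```

With the 25-fold noise inflation, the third weight drops by roughly an order of magnitude while the sum-to-one property is preserved.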
5. Data Equivalence, Multiple Experts, and
Other Relationships
In Section 5.1, we determine the size of a randomly drawn sample that is equivalent in terms of precision to the expert’s judgments. In Section 5.2, we discuss combining the judgments of one expert with the judgments from other experts. In Section 5.3, we discuss the consistency of the approach developed with least squares and moment matching.
5.1. Equivalence between Expertise and
Size of a Random Sample
Expert input is sought for estimating probability distributions when collecting data is costly. The expert’s quantile judgments, after using our approach, provide the point estimates $\hat{\mu}_k$ and the variances in these estimates, $\mathrm{Var}[\hat{\mu}_k]$. We can compare this variance with the variance of the mean and standard deviation obtained from a sample of random observations for $X$ if data collection is possible. Specifically, it is well known that a sample mean has a variance of $\sigma^2/N_1$, where $N_1$ is the sample size. In our approach, the variance in the estimate of the mean is equal to $\mathrm{Var}[\hat{\mu}_1] = a_1^t(Z^t\Omega^{-1}Z)^{-1}a_1$ (see Appendix A4 for proof), which can be simplified to $[1, \kappa_1](Z^t\Omega^{-1}Z)^{-1}[1, \kappa_1]^t$. By equating these two variances, we can determine the size of a randomly collected sample that would provide the same precision of the estimate of the mean as the expert does. We call this size an equivalent sample size for the mean. A similar analysis provides the equivalent sample size for the standard deviation. The next result provides expressions for these equivalent sample sizes.

Proposition 3. The precision of the estimates $\hat{\mu}_k$, $k = 1, 2$, obtained using an expert’s quantile judgments with judgmental error matrix $\Omega$ is comparable to the precision of estimates obtained from an iid sample of size $N_k$, where $N_1 = \mu_2^2/([1, \kappa_1](Z^t\Omega^{-1}Z)^{-1}[1, \kappa_1]^t)$, and $N_2$ is obtained analogously by equating the sampling variance of the standard deviation with $a_2^t(Z^t\Omega^{-1}Z)^{-1}a_2$.
The proof is in Appendix A5. This result has two profound implications. First, using this result, multiple experts can be compared objectively based on their judgmental errors quantified in Σ. More specifically, for two experts A and B with matrices Σ_A and Σ_B, the ratio of equivalent sample sizes for the mean is N_A^1/N_B^1 = ([1, κ_1](Z^T Σ_B^{-1} Z)^{-1}[1, κ_1]^T)/([1, κ_1](Z^T Σ_A^{-1} Z)^{-1}[1, κ_1]^T), with the analogous ratio N_A^2/N_B^2 for the standard deviation, and these ratios are independent of the true values of µ_1 and µ_2. For example, if N_A^k/N_B^k = 2, then the estimates of µ_k obtained from expert A are two times as reliable as the estimates obtained from expert B. This benefit from using expert A over expert B is equal to the benefit from doubling the sample size of experimental or field data for the purposes of estimating µ_k. Second, some recent literature (e.g., Akcay et al. 2011) quantifies the marginal benefit of improving estimates of probability distributions by collecting more data before making decisions under uncertainty. Proposition 3 provides a natural connection to these results by quantifying the economic benefits of improving the precision in the estimates µ̂_k.
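As an illustrative sketch of this comparison (not the paper's code; the covariance matrix Σ, the matrix Z, and the true standard deviation sigma_x below are hypothetical), the equivalent sample size for the mean can be obtained by equating σ²/N_1 with the variance of the expert-based estimate:

```python
import numpy as np

# Hypothetical judgmental-error covariance Sigma for three quantiles, and the
# matrix Z = [1, z] for the standardized 10th, 50th, and 75th normal quantiles.
Sigma = np.array([[80.0, 30.0, 35.0],
                  [30.0, 22.0, 30.0],
                  [35.0, 30.0, 68.0]])
Z = np.array([[1.0, -1.285],
              [1.0,  0.0],
              [1.0,  0.674]])

def var_mean_estimate(Sigma, Z, kappa1=0.0):
    """Variance of the expert-based estimate of the mean:
    [1, kappa1] (Z' Sigma^-1 Z)^-1 [1, kappa1]'  (kappa1 = 0 for the normal)."""
    M = np.linalg.inv(Z.T @ np.linalg.inv(Sigma) @ Z)
    a = np.array([1.0, kappa1])
    return float(a @ M @ a)

def equivalent_sample_size(Sigma, Z, sigma_x, kappa1=0.0):
    """N1 such that sigma_x**2 / N1 matches the expert-based variance, where
    sigma_x is the (assumed) true standard deviation of X."""
    return sigma_x ** 2 / var_mean_estimate(Sigma, Z, kappa1)

N1 = equivalent_sample_size(Sigma, Z, sigma_x=40.0)
```

Doubling an expert's error covariance halves the equivalent sample size, which is exactly the comparison behind the two-expert ratios above.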
5.2. Combining Estimates from Multiple Experts
The technical development extends to multiple experts j = 1, 2, …, n as follows. We first construct the combined matrix

Σ = [Σ_11 Σ_12 ⋯ Σ_1n
     Σ_12 Σ_22 ⋯ Σ_2n
       ⋮    ⋮        ⋮
     Σ_1n Σ_2n ⋯ Σ_nn],

where Σ_11 is the m×m matrix for residual errors of expert 1, the matrix Σ_12 is the m×m covariance matrix for the errors of experts 1 and 2, and so on. Then, the matrix Σ is used in Theorem 1 along with the matrix Z^T of size 2×mn, Z^T = [Z_1^T, Z_2^T, …, Z_n^T], where each Z_j = [1, z_j], z_j is the column vector of standardized quantiles corresponding to the probabilities p_j that expert j has chosen to provide judgments for, and 1 is a column vector of ones. The use of Theorem 1 provides mn weights; the first m weights for the first expert, the next m weights for the second expert, and so on.
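The block construction can be sketched as follows (an illustrative sketch, not the paper's implementation; it assumes the Theorem 1 weight formula w_k = Σ^{-1}Z(Z^T Σ^{-1}Z)^{-1}a_k and hypothetical per-expert covariance blocks):

```python
import numpy as np

def combined_sigma(blocks):
    """Assemble the mn x mn covariance matrix from an n x n grid of m x m
    blocks; blocks[i][j] holds Cov(errors of expert i, errors of expert j)."""
    return np.block(blocks)

def theorem1_weights(Sigma, Z, a):
    """Optimal weights of Theorem 1: w = Sigma^-1 Z (Z' Sigma^-1 Z)^-1 a."""
    Si = np.linalg.inv(Sigma)
    return Si @ Z @ np.linalg.inv(Z.T @ Si @ Z) @ a

# Two experts on the same three quantiles; expert 2 twice as noisy (r2 = 2),
# errors independent across experts, so the off-diagonal blocks are zero.
S0 = np.array([[80.0, 30.0, 35.0],
               [30.0, 22.0, 30.0],
               [35.0, 30.0, 68.0]])
O = np.zeros((3, 3))
Sigma = combined_sigma([[S0, O], [O, 2.0 * S0]])
Z0 = np.array([[1.0, -1.285], [1.0, 0.0], [1.0, 0.674]])
Z = np.vstack([Z0, Z0])                                   # stacked 6 x 2 design
w_mean = theorem1_weights(Sigma, Z, np.array([1.0, 0.0])) # mn = 6 weights
```

The first three weights belong to expert 1 and the last three to expert 2; with expert 2 half as precise, each of expert 1's weights is twice the corresponding weight of expert 2.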
We discuss a special case of interest here. Suppose n experts j = 1, 2, …, n provide judgments for the same set of quantiles, i.e., Z^T = [Z_0^T, …, Z_0^T], and the covariance matrix of each expert j is given by Σ_jj = r_j Σ_0 (i.e., the judgmental error structure of one expert is a scaled version of another's); further assume that the errors of any two experts are mutually independent, i.e., all elements of Σ_ij, i ≠ j, are equal to 0. The weights for the mn quantile judgments obtained using Theorem 1 are denoted as w^*_k, with elements w^*_{tk} for t = 1, 2, …, mn and k = 1, 2. The first m weights are for expert 1, the next m weights are for expert 2, and so on. We can write these weights as w^*_k = [w^1_k, …, w^n_k], where w^j_k is the vector of weights of expert j. We can also decompose the weights w^j_k as the product of a constant α_j for expert j and a common weight vector of m weights w^c_k that would be obtained if each expert was the only one available, i.e., w^*_k = [α_1 w^c_k, …, α_n w^c_k]. The values of α_j and the relationships between w^j_k and w^c_k are as follows.
Proposition 4. Consider experts j = 1, 2, …, n, whose covariance matrices are r_j Σ_0, and further assume that the judgmental errors across experts are mutually independent.
(i) If any expert j was the only expert available, the optimal weights for his unbiased judgments would be w^c_k, obtained by using Σ_0 in Theorem 1, independent of the value of r_j.
(ii) When the quantile judgments of the n experts are considered simultaneously, the weights for each expert j are obtained as w^j_k = α_j w^c_k with α_j = (1/r_j)/R, where R = Σ_j (1/r_j).
The proof is in Appendix A6. As an illustration, suppose that we have two experts with r_1 = 1 and r_2 = 2, i.e., expert 2 is half as precise as expert 1. Further, consider the case when

Σ_0 = [80 30 35
       30 22 30
       35 30 68]

for the estimation of the 10th, 50th, and the 75th quantiles. If the quantile judgments of only expert j are considered separately, the estimation weights are obtained as w^{cT}_1 = [−0.167, 1.484, −0.317] and w^{cT}_2 = [−0.576, 0.190, 0.386] for either expert j by using

Z^T = [ 1      1  1
       −1.285  0  0.674]

and Σ_0 in Theorem 1, as stated in part (i) of Proposition 4.
When both experts are available, the optimal weights for their quantile judgments are obtained by multiplying the independent weights w^c_k with the expert-specific marginal weight α_j as w^j_k = α_j w^c_k. It follows from part (ii) of the proposition that the expert-specific constants are α_1 = (1/r_1)/(3/2) = 2/3 and α_2 = (1/r_2)/(3/2) = 1/3. The weights for the mean, for example, are obtained as w^1_1 = (2/3) × (−0.167, 1.484, −0.317) = (−0.111, 0.989, −0.211) and w^2_1 = (1/3) × (−0.167, 1.484, −0.317) = (−0.055, 0.495, −0.105) for experts 1 and 2, respectively. The same weights are also obtained directly by first constructing the combined 6×6 matrix Σ and using it in Theorem 1 with the matrix

Z^T = [ 1      1  1      1      1  1
       −1.285  0  0.674 −1.285  0  0.674],

which gives w^*_1 = [w^1_1, w^2_1].
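The worked example can be verified numerically; a minimal sketch, again assuming the Theorem 1 weight formula (illustrative code, not the paper's):

```python
import numpy as np

S0 = np.array([[80.0, 30.0, 35.0],
               [30.0, 22.0, 30.0],
               [35.0, 30.0, 68.0]])
Z0 = np.array([[1.0, -1.285], [1.0, 0.0], [1.0, 0.674]])

def weights(Sigma, Z, a):
    # Theorem 1: w = Sigma^-1 Z (Z' Sigma^-1 Z)^-1 a
    Si = np.linalg.inv(Sigma)
    return Si @ Z @ np.linalg.inv(Z.T @ Si @ Z) @ a

# Part (i): single-expert ("common") weights for the mean, independent of r_j.
wc1 = weights(S0, Z0, np.array([1.0, 0.0]))

# Part (ii): with r1 = 1 and r2 = 2, R = 1/1 + 1/2 = 3/2, so the marginal
# weights are alpha1 = 2/3 and alpha2 = 1/3.
r = np.array([1.0, 2.0])
alpha = (1.0 / r) / np.sum(1.0 / r)
w1_expert1, w1_expert2 = alpha[0] * wc1, alpha[1] * wc1
```

The computed wc1 reproduces the (−0.167, 1.484, −0.317) vector, and scaling it by 2/3 and 1/3 reproduces the per-expert weights.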
5.3. Relationship with Classical Least Squares Regression and Moment Matching
We now discuss how our model and its solution are consistent with (i) the classical least-squares minimization-based regression framework and (ii) moment matching. In the classical regression framework, the variance-covariance matrix is Σ = K Σ_0, where K > 0 is a scalar, the diagonal elements of Σ_0 are equal to 1, and the off-diagonal elements are equal to 0, i.e., Σ_0 = I. In our context, this would be the noninformative case when the expert is equally good at estimating all quantiles and his judgmental errors are mutually independent. We showed in Theorem 1 that the optimal weights are equal to w^*_k = Σ^{-1} Z (Z^T Σ^{-1} Z)^{-1} a_k. Now, substituting Σ = KI, we obtain the weights as w^*_k = Z(Z^T Z)^{-1} a_k, or alternately, the estimate µ̂_k = a_k^T ((Z^T Z)^{-1} Z^T q̂), with the familiar kernel of the ordinary least squares in the big parentheses.
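This reduction to ordinary least squares can be checked numerically (an illustrative sketch; the judgment vector q is hypothetical):

```python
import numpy as np

Z = np.array([[1.0, -1.285], [1.0, 0.0], [1.0, 0.674]])
q = np.array([15.0, 70.0, 100.0])          # hypothetical debiased judgments

def weights(Sigma, Z, a):
    # Theorem 1: w = Sigma^-1 Z (Z' Sigma^-1 Z)^-1 a
    Si = np.linalg.inv(Sigma)
    return Si @ Z @ np.linalg.inv(Z.T @ Si @ Z) @ a

a1 = np.array([1.0, 0.0])                  # selects the mean (normal case)
w_K1 = weights(1.0 * np.eye(3), Z, a1)     # Sigma = K*I with K = 1
w_K7 = weights(7.0 * np.eye(3), Z, a1)     # ... and K = 7: same weights

# The OLS kernel (Z'Z)^-1 Z' applied to q gives the same estimate of the mean.
theta_ols = np.linalg.lstsq(Z, q, rcond=None)[0]
```

The weights are independent of K, and the weighted estimate coincides with the first OLS coefficient.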
In the moment matching framework, we would seek to minimize the squared deviations of the debiased quantile judgments q̂_i, obtained from the expert for probability p_i, from the quantiles implied by the unobserved values of the mean and standard deviation, i.e., we would seek to solve min_{µ_1, µ_2} {Σ_{i=1}^m (Φ^{-1}(p_i; µ_1, µ_2) − q̂_i)²}. This approach is codified in many commercial software packages (e.g., @RISK) and has been used in prior academic literature (e.g., Wallsten et al. 2013). For location-scale distributions, Φ^{-1}(p_i; µ_1, µ_2) = θ_1 + z_i θ_2. Using the properties that θ_1 = µ_1 − (κ_1/√(κ_2 − κ_1²)) µ_2 and θ_2 = (1/√(κ_2 − κ_1²)) µ_2, we can rewrite this problem as

min_{µ_1, µ_2} Σ_{i=1}^m (µ_1 + ((z_i − κ_1)/√(κ_2 − κ_1²)) µ_2 − q̂_i)².
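For the normal case (κ_1 = 0, κ_2 = 1), the moment-matching problem is an ordinary least-squares fit of µ_1 + z_i µ_2 to the judgments; a minimal sketch with hypothetical judgments:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

p = np.array([0.10, 0.50, 0.75])
q_hat = np.array([15.0, 70.0, 100.0])     # hypothetical debiased judgments

def sse(theta):
    """Moment-matching objective: squared deviations of the implied normal
    quantiles from the judgments."""
    mu, sd = theta
    if sd <= 0:
        return 1e12                       # keep the search in the valid region
    return float(np.sum((norm.ppf(p, loc=mu, scale=sd) - q_hat) ** 2))

mu_mm, sd_mm = minimize(sse, x0=[70.0, 40.0], method="Nelder-Mead").x

# Equivalent closed form: least squares on the linearized model q = mu + z*sd.
Z = np.column_stack([np.ones_like(p), norm.ppf(p)])
mu_ls, sd_ls = np.linalg.lstsq(Z, q_hat, rcond=None)[0]
```

Both routes give the same fitted mean and standard deviation, illustrating the linearity claimed in Proposition 5.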
The next result establishes the classical least squares and moment matching as special cases of our approach.

Proposition 5. Consider the original optimization problem: min_{w_k} w_k^T Σ w_k subject to E[w_k^T q̂] = µ_k.
(1) This problem reduces to the ordinary least squares solution when Σ = K Σ_0, where Σ_0 = I.
(2) Consider the moment matching problem in the form min_{µ_1, µ_2} {Σ_{i=1}^m (Φ^{-1}(p_i; µ_1, µ_2) − q̂_i)²}. Its solution is µ̂_a = w^{*T}_a q̂ for a ∈ {µ_1, µ_2}, and it is identical to the solution obtained from the original problem for Σ = K Σ_0, where Σ_0 = I.
The proof is in Appendix A7. The second part of the proposition implies that given quantile judgments and no information about the noise in the judgments, the best estimates of the mean and standard deviation (under quadratic penalty) for location-scale distributions are linear functions of the quantile judgments. These estimates coincide with the solution obtained in our approach for the noninformative case of Σ_0 = I. Our approach extends the moment matching model to account for the expert's judgmental errors as captured in Σ, i.e., min_{w_k} w_k^T Σ w_k subject to E[w_k^T q̂] = µ_k, for the case when information on the expert's judgmental errors is available, i.e., when Σ ≠ KI. Finally, we note that our approach is amenable to Bayesian updating using Markov chain Monte Carlo methods, and we omit the details for the sake of brevity. We next discuss the implementation of the approach developed at DAS.
6. Implementation Details and
Benefits at DAS
6.1. Industry Context: Estimating Production Yield
Distributions for Hybrid Seeds
DAS produces seeds for various crops such as corn and
soybean, and sells these seeds to farmers. Our focus is
on the production of hybrid seed corn. DAS decides
annually how many acres of land to use to produce
hybrid seed corn. The yield, or amount of hybrid seed
corn obtained per acre of land by DAS, is uncertain.
Under this yield uncertainty, producing hybrid seed
corn on a large plot of land may result in a surplus
with a large up-front production cost if the realized
yield is high; using a small plot of land may result
in costly shortages if the realized yield is low. Math-
ematical models that incorporate the yield distribu-
tion can determine the optimal area of land that DAS
should use, but the historical yield data are not avail-
able for obtaining a statistical distribution. The unique
industry-specific reasons for this lack of historical data
are discussed next. Before turning to these reasons, we note an important characteristic of our focus: it is on the production yield realized by DAS when it produces hybrid seeds, and this is the context in which the term yield will be used in the remainder of the paper.
6.1.1. Biological Context for Expert Judgments. DAS
has a pool of approximately 125 types of parent or
purebred seed corn; each type has a unique genetic
structure, and these purebred varieties are used to pro-
duce hybrid varieties of seed corn. To replenish the
stock of a specific parent seed, DAS plants this seed
in a field. Self-pollination among the plants produces seeds of the same type; this is why the parent seeds are called purebred seeds. This inbreeding is carried out regularly to maintain inventories of parent seeds, and
statistical distributions of the yields obtained during
this inbreeding process are available from historical
data. But these seeds are not sold to farmers; rather,
the seeds sold to farmers are hybrid seeds that are
obtained by cross-mating pairs of parent seeds. This
cross-mating occurs when two different parent seeds
are planted in the field. Corn plants have male and
female reproductive parts. Plants growing from the
Figure 1. (Color online) During Cross-Pollination, the Male Y Changes the Inbred Yield Distribution of X Shown on the Left
[Figure: left, the female parent (genetic makeup X) and its inbred seed, for which yield data are available and serve as the benchmark for hybrid yields; right, the male parent (genetic makeup Y) and the resulting hybrid seed, for which limited yield data are available and the expert's judgment assesses the possible impact of Y.]
Notes. The expert's mental model involves judgments about changes in the location and/or spread of the distribution due to Y. Possible distributions after crossbreeding are shown in dotted lines on the right.
parent seeds of one type, say X, are treated chemically and physically (in a process called detasseling) to make them act as female, and the plants growing from the parent seeds of the other type, say Y, are made to act as male. The cross-pollination between these parents
as male. The cross-pollination between these parents
produces the hybrid seeds. DAS offers more than 200
varieties of hybrid seed corn every year in the market
targeted to diverse soil and climate zones of the con-
tinental United States. Each variety is obtained from a
different set of parents.
Due to the rapid pace of innovation in this indus-
try, the average life of hybrid varieties is short. DAS
produces and sells most hybrid varieties only three or
four times before replacing them with new hybrids.
Therefore, sufficient historical yield data necessary to
obtain statistical distributions are not available for most
hybrid seeds. In the absence of these data, DAS relies
on a yield expert to estimate the yield distributions
for producing the seeds. Before describing the process
the expert uses, we note that a set of hybrid seeds has
also been produced and sold repetitively. The historical
yield data of these hybrids serve an important purpose
in our estimation approach.
6.1.2. Expert’s Mental Model. The yield expert at DAS
uses a mental model for estimating the yield distribu-
tion for the production of a hybrid seed without his-
torical data. This model is illustrated in Figure 1 for the hybrid seed obtained by crossing varieties X and Y. Female parent plants (type X) provide the body on
which the hybrid seed grows; the male parent plants
(type Y) provide the pollen to fertilize the female plant.
Since the female plant nurtures the seed, the available
statistical distribution for the inbreeding for type X
provides a statistical benchmark (left part of Figure 1)
for the hybrid seed. The male parent affects this distri-
bution during cross-pollination through its pollinating
power and other genetic characteristics, leading to various likely distributions as shown in dotted lines in the
right part of the figure. This may include a shift in the
median and/or changes in the spread of the distribu-
tion. The expert’s contextual knowledge for the biology
of both parents provides him with insights into how
the distribution might change during cross-pollination.
6.1.3. Practice at DAS Before New Approach. In the
past, the yield expert has adjusted the median of the
inbreeding female distribution higher or lower to pro-
vide an estimate of the median yield for the production
of hybrid seed. Thus the estimate of the median seed
production yield has been based on indirect data and
is judgmental in nature. This median yield was used
for production planning decisions as follows. Managers would first calculate the area needed as area = demand/median yield judgment and then increase it by 20% or 30% to account for high profit margins. In
our interactions, managers articulated the need for a
rigorous approach to estimate the spread in the uncer-
tain yields, which could then be used to determine
the number of acres for each hybrid using optimiza-
tion models. Furthermore, since the yield expert is
required to provide judgments for almost 200 hybrid
seeds within a span of two weeks every seed produc-
tion season, it was necessary to develop an approach
that could be implemented within this time window.
Our analytical approach accomplishes these tasks.
In Section 6.2, we discuss how our approach con-
tinues to use the expert’s judgments for the median
(that he has estimated in the past) to exploit the mental
model that he has developed and used over years as
well as two additional quantiles selected by him. DAS
makes the production planning decision once a year,
typically during January–February. Our approach was
first used in 2014, and has been in use since then. In
the first step of our implementation, we determined the bias vector δ, the matrix Σ of judgmental errors, and the matrix Z corresponding to the quantiles for
which the expert will provide judgments. This was
done using historical yield data for a set of hybrid
seeds that have been produced repetitively in the past.
Details of this step are in Section 6.2. From these quan-
tities, we obtained the optimal weights w^*_1 and w^*_2. Details of this determination are in Section 6.3. Finally,
we quantified the benefit from using our approach
using the data from the 2014 production planning deci-
sions. This analysis is presented in Section 6.4. In Sec-
tion 6.5, we discuss the integration of our approach
into DAS’s operational decision making.
6.2. Implementation: Data Collection at DAS and
Calibration of Judgmental Errors
The first task during the implementation was to col-
lect data from the firm to determine the appropriate
distribution to use to model yield uncertainty, and to
calibrate the expert. We first describe this data and the
Table 1. Tests to Accept/Reject Normality of Historical Yield Data for a Subset of Seeds

Test (H0: data are normal)     p value, seed 1   p value, seed 2   p value, seed 3
Kolmogorov-Smirnov test        >0.15             >0.15             >0.15
Anderson-Darling test          0.90              0.51              0.51
Lilliefors-van Soest test      >0.20             >0.20             >0.20
Cramer-von Mises test          0.92              0.56              0.59
Ryan-Joiner test               >0.10             >0.10             >0.10
statistical tests performed to determine the paramet-
ric family of yield distributions. Then, we describe the
process used for calibrating the expert at DAS.
We asked DAS to identify a set of hybrid seeds
that have been produced repetitively in the last few
years. Overall, DAS found L = 22 such hybrid seeds, indexed by l = 1, 2, …, 22, and provided us with the historical yield data for these seeds. Using these data
and other sources, we sought to determine the appro-
priate parametric family to model yield distribution.
First, we analyzed the available yield data and ran
a battery of tests, including the Kolmogorov-Smirnov
test, Anderson-Darling test, and Lilliefors-van Soest
test and found that they all failed to reject the hypoth-
esis that the data was normally distributed. Table 1
shows the results for three such seeds. These tests only
confirm that normality cannot be ruled out. We then
ran a second test to see if normality provided the best
fit with the data. In this test, we determined the para-
metric family with the best fit with the data using the
chi-square test and Anderson-Darling test. The can-
didate distributions were the normal distribution, the
gamma distribution, the uniform distribution, the log-
normal distribution, the beta distribution, the Gumbel
distribution, the exponential distribution, the Weibull
distribution, the logistic distribution, and the inverse
normal distribution. The normal distribution was the
best fit on the Anderson-Darling test for all hybrid
seeds. On the chi-square test, the normal distribution
provided the best fit for a majority of the seeds.
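Such a screening can be sketched in Python with scipy (illustrative only; the sample data are simulated, and scipy's Kolmogorov-Smirnov test with estimated parameters is a simplification of the Lilliefors correction used in formal practice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
yields = rng.normal(loc=70.0, scale=12.0, size=50)   # hypothetical yield sample

# Kolmogorov-Smirnov test against a normal with estimated parameters
# (a simplification; the Lilliefors test corrects the p value for estimation).
ks = stats.kstest(yields, "norm", args=(yields.mean(), yields.std(ddof=1)))

# Anderson-Darling test with normality as the null hypothesis.
ad = stats.anderson(yields, dist="norm")

# Compare the statistic with the 5% critical value (index 2 of the table).
normal_not_rejected = (ks.pvalue > 0.05) and (ad.statistic < ad.critical_values[2])
```

As in the paper's tests, failing to reject only means normality cannot be ruled out; the best-fit comparison across candidate families is a separate step.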
In addition to this statistical evidence for the hybrid seeds, DAS has extensive data for inbred seeds that support normality. Since the biological factors at play during plant growth are the same for hybrid seeds, the yields of the hybrid seed corn should also be approximately normal. Recently,
Comhaire and Papier (2015) also provided statistical
evidence for normality of yields during seed corn pro-
duction. After identifying the normal distribution to be
appropriate, we determined the quantiles that the yield
expert at DAS was comfortable estimating (to obtain Z
for the normal distribution), as well as determined his error structure (δ and Σ), as described next.
6.2.1. Step 1: Selection of Quantiles for Elicitation and Determination of Z. For each of the hybrid seeds l = 1, 2, …, L, we asked the expert to select three quantiles
to estimate, without looking at the historical yield data
of these hybrids. The selection of three quantiles (rather
than more than three) was motivated by existing liter-
ature that suggests that three quantiles perform almost
as well as five quantiles (Wallsten et al. 2013), as well
as the time constraints faced by the expert. The expert
is an agricultural scientist; he is well trained in statistics and has worked extensively with yield data. His
quantitative background and experience were helpful
as he clearly understood the probabilistic meaning and
implications of quantiles. The first quantile he selected
was the 50th quantile since he has estimated this quan-
tile regularly in the last few years. The extant literature
also has established that estimating this quantile has the
intuitive 50–50 high-low interpretation that managers
understand well (O’Hagan 2006).
We then asked the expert to provide us with his
quantile judgments for two other quantiles, one in each
tail of the yield distribution, that he was comfortable
estimating. The yield expert chose to provide his judg-
ments for the 10th and 75th quantiles for several rea-
sons. First, he has developed familiarity with these
quantiles in the last few years: his statistical software
(JMP) typically provides a limited number of quantile
values, including these two quantiles during data anal-
ysis, and he is accustomed to thinking about them. Sec-
ond, the expert suggested the use of these asymmetric
quantiles because if asked for symmetric quantiles, he
would intuitively “estimate one-tail quantile and calcu-
late the other symmetric quantile using the properties
of the normal distribution.” This will be equivalent to
estimating only one quantile instead of two.
Finally, the expert was not comfortable in provid-
ing judgments for quantiles that were further out in
the tails, such as the 1st and the 95th quantiles. This
reluctance was interesting and highlighted some subtle
disconnects between theory and practice. Some arti-
cles (e.g., Lau et al. 1998, Lau and Lau 1998) have suggested weights for extreme quantiles such as the 1st percentile, assuming no judgmental errors. However, the
expert found it difficult to estimate extreme quantiles.
Specifically, he was concerned that he might not be able to differentiate between random variations (that we seek to capture) and acts of nature such as tornados and floods (that we seek to exclude since the yield expert cannot predict these events) that lead to extreme yields.
We then determined the matrix Z for the 10th, 50th, and 75th quantiles. For the normal distribution, this matrix is calculated as

Z = [1 −1.28
     1  0
     1  0.67],

where the value of −1.28 is equal to the inverse of the standard normal distribution at the probability 0.1, and so on.
6.2.2. Step 2: Elicitation Sequence and Consistency Check. For each distribution l, we obtained the three quantile judgments x̂_il(p_i), i = 1, 2, 3; l = 1, 2, …, 22; p_i ∈ {0.1, 0.5, 0.75}, from the expert; the expert did not have
access to the historical yield data for these hybrids
during this estimation. We obtained the expert’s judg-
ments in two rounds. In Round 1, for each hybrid l,
the expert followed his usual procedure for studying
the yield distribution for the female parent, looking
at the properties of the male parent and providing
his judgment for the median (see Figure 1). We then
asked the expert to provide his quantile judgments for
the 10th and 75th quantiles, in that order. This cus-
tomized sequence is consistent with the extant liter-
ature that suggests first obtaining an assessment for
50–50 odds (Garthwaite and Dickey 1985), and then
focusing further on quantiles in the tails. In Round 2 of
estimation, to encourage a careful reconfirmation of the
judgments provided in Round 1, we used a feedback
mechanism. We used the information from two quan-
tile judgments to make deductions about the third one,
and then asked the expert to validate these deductions.
If the expert did not concur with the deductions, we
provided him an opportunity to fine-tune the quantile judgments.
As an example, consider a specific seed for which the
expert provided values of 15, 70, and 100 for the 10th,
50th, and 75th quantiles, respectively. The stated val-
ues of the 10th and 50th quantiles imply a mean yield
of 70 and standard deviation of 42.92 for normally dis-
tributed yields. These two values imply that there is
a 50% chance that the yield will be between 41 and
99 (the implied 25th and 75th quantile). We asked the
yield expert the following question: “Your estimate of
the 10th quantile implies that there is a 50% chance that
the yield will be between 41 and 99. If you think that
this range should be narrower, please consider increas-
ing the estimate of the 10th quantile. If you think the
range should be wider, please consider decreasing the
estimate of the 10th quantile.” We implemented this
feedback in an automated fashion so that the values
in the feedback question were generated automatically
using his quantile estimates. The expert could revisit
his input and the accompanying feedback question any
number of times before moving to the next feedback
question for the judgment for the 75th quantile (using
the deduced 35th and 85th quantile values obtained
from his judgments for the 50th and 75th quantiles).
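The automated feedback computation can be sketched as follows (an illustrative sketch, not DAS's tool; it reproduces the 41-99 range quoted above):

```python
from scipy.stats import norm

def feedback_range(q10, q50):
    """Mean and standard deviation implied by the 10th and 50th quantile
    judgments under normality, and the implied 25th-75th quantile range
    that is fed back to the expert."""
    mu = q50                                  # median = mean for the normal
    sd = (q50 - q10) / (-norm.ppf(0.10))      # -norm.ppf(0.10) = 1.2816
    return mu, sd, (norm.ppf(0.25, mu, sd), norm.ppf(0.75, mu, sd))

mu, sd, (lo, hi) = feedback_range(q10=15.0, q50=70.0)   # the example above
```

Here (lo, hi) rounds to (41, 99), matching the range presented to the expert; the analogous deductions for the 75th-quantile feedback use the 35th and 85th quantiles instead.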
After finishing this feedback, he moved to the next
seed. Throughout this process, we emphasized that the
objective of this fine tuning was to help him reflect on
his estimates carefully without leading him to any spe-
cific set of numbers. Analysis showed that after this feedback, the standard deviations of the judgmental errors for the tail quantiles were reduced by 33% in Round 2, confirming that the feedback was indeed helpful to the expert in improving the quality of his estimates.
6.2.3. Step 3: Separation of Sampling Errors Using Bootstrapping. After elicitation was complete, we quantified the judgmental errors by comparing the expert's stated values for the quantiles with the values obtained from the historical data. For our analysis in Section 2, we assumed that the true values of the quantiles x_i were available. However, since the number of data points for each seed at DAS was limited (the largest sample size was 53), the quantile values obtained from the data were subject to sampling variations that needed to be explicitly accounted for. Specifically, let x̃_i denote the value of quantile i for the empirical distribution. Then, for the true value x_i and the expert's estimate x̂_i, we have the following decomposition of errors:

Total Error = Judgmental Error + Sampling Error. (12)
The comparison of the expert’s assessment ˆ
the empirical value ˜
xihas two sources of errors: the
expert’s judgmental error and the sampling error. The
judgmental error is the difference between the quan-
tile judgment and the true quantile (ˆ
xixi). The sam-
pling error (xi˜
xi)captures the data variability that is
present because the empirical distribution is based on
a random sample of limited size from the population.
The expert did not see the historical data, therefore
both sources of errors can be considered to be mutually
Writing (11) in a vector form, we have ˆ
x). It follows that the total bias is equal to
xx)] +E[(x˜
where δtis the total bias, and δand δsare the expert’s
judgmental bias and the sampling bias, respectively.
The expert’s judgmental bias is computed as δδtδs.
Similarly, the variance in the estimates of quantiles,
assuming independence of the data-specific sampling
error and the expert-specific judgmental error, is
xx)] +Var[(x˜
We can write this equation in matrix notation as

Σ_t = Σ + Σ_s,

where Σ is the matrix of covariances of judgmental errors and needs to be estimated for use in our analytical development described earlier. This matrix is estimated as Σ̂ = Σ̂_t − Σ̂_s. The matrix Σ̂ must be checked for positive definiteness to be able to take an inverse to obtain the weights using Theorem 1. We next discuss the estimation of δ_t and Σ_t using DAS's data and the estimation of δ_s and Σ_s using bootstrapping. Note that with a large number of historical observations, Σ̂ ≈ Σ̂_t, δ̂ ≈ δ̂_t, and the bootstrapping approach is not required.

Table 2. Variance-Covariance Matrix and Biases After Bootstrap Adjustment

Σ̂_t = [113.41 50.09 46.83
        50.09 42.92 51.46
        46.82 51.46 93.37]

Σ̂_s = [34.42 20.71 13.50
       20.71 21.00 21.16
       13.49 21.16 25.20]

Σ̂ = Σ̂_t − Σ̂_s = [78.99 29.38 33.33
                  29.38 21.92 30.30
                  33.33 30.30 68.17]

δ̂_t = (9.43, 0.94, 2.48)    δ̂_s = (−1.05, 0.00, −0.55)    δ̂ = δ̂_t − δ̂_s = (10.48, 0.94, 3.03)
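The decomposition can be checked against the values reported in Table 2 (illustrative arithmetic only; the signs of the bias vectors are as reconstructed there):

```python
import numpy as np

Sigma_t = np.array([[113.41, 50.09, 46.83],
                    [ 50.09, 42.92, 51.46],
                    [ 46.82, 51.46, 93.37]])
Sigma_s = np.array([[ 34.42, 20.71, 13.50],
                    [ 20.71, 21.00, 21.16],
                    [ 13.49, 21.16, 25.20]])
delta_t = np.array([9.43, 0.94, 2.48])
delta_s = np.array([-1.05, 0.00, -0.55])

# Judgmental components: Sigma = Sigma_t - Sigma_s and delta = delta_t - delta_s.
Sigma = Sigma_t - Sigma_s
delta = delta_t - delta_s

# Sigma must be positive definite for the inverse in Theorem 1 to exist.
pos_def = bool(np.all(np.linalg.eigvalsh((Sigma + Sigma.T) / 2.0) > 0))
```

The subtraction reproduces the judgmental covariance matrix and bias vector, and the positive-definiteness check confirms that the weights are well defined.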
For DAS’s data, the total bias δtand matrix twere
determined using the expert’s assessments as follows.
In each of the two rounds of elicitation, the expert’s
quantile judgments ˆ
xil (pi);i1,2,3for hybrid lwere
compared to the quantiles of the empirical distribu-
tion, ˜
xil (pi). The differences provided the total errors
il ˆ
xil (pi) − ˜
xil (pi). The average error ˆ
eil /L
provided the total bias for each quantile. The vector
of biases ˆ
iconstituted ˆ
δt. We then obtained unbiased
errors as ˆ
il ˆ
il ˆ
i; using these, we estimated the
3×3variance-covariance matrix ˆ
t. As discussed ear-
lier, a comparison of ˆ
tfrom the first round without
feedback and the second round with feedback showed
that the feedback reduced the spread of the errors sig-
nificantly (by 30%). The covariance matrix ˆ
tand the
bias ˆ
δtobtained after the second round are shown in
Table 2.
The sampling bias δ_s and the variance-covariance matrix Σ_s were estimated by bootstrapping as follows. We had data y_1l, y_2l, …, y_{n_l l} for seed l and corresponding quantiles x̃_il estimated using these data. For each distribution l, we drew a sample indexed p of size n_l with replacement from the data y_1l, y_2l, …, y_{n_l l} and obtained the quantiles for this bootstrapping sample, x̃_ilp. We repeated the process for p = 1, 2, …, P times. Then, we obtained the differences e_ilp = (x̃_ilp − x̃_il), determined the average difference ē_il = Σ_p e_ilp / P, and calculated the unbiased differences ê_ilp = e_ilp − ē_il. From these 3×P unbiased differences, we obtained the covariance matrix Σ̂_sl for seed l. To ensure a stable variance-covariance matrix Σ̂_sl, we used a large value of P, P = 1,000,000. Finally, Σ_s was estimated as Σ̂_s = Σ_l Σ̂_sl / L, implying that each covariance matrix Σ̂_sl is equally likely to be present for each elicitation in the future. The sampling bias for quantile i was estimated as δ̂_si = Σ_l ē_il / L. The vector of these biases constituted δ̂_s. For DAS's data, the values of Σ̂_s and the bias vector δ̂_s are shown in Table 2. The estimated judgmental bias δ̂ was obtained as δ̂ = δ̂_t − δ̂_s,
and the estimated matrix of judgmental errors Σ̂ was obtained as Σ̂ = Σ̂_t − Σ̂_s; both are shown in Table 2.
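The per-seed bootstrap step can be sketched as follows (an illustrative sketch; the yield history is simulated and P is kept small for speed, whereas DAS used P = 1,000,000):

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(70.0, 12.0, size=40)     # hypothetical yield history for one seed
probs = [0.10, 0.50, 0.75]
q_emp = np.quantile(y, probs)           # empirical quantiles of the seed's data

P = 2000                                # kept small here; DAS used P = 1,000,000
diffs = np.empty((P, 3))
for b in range(P):
    resample = rng.choice(y, size=y.size, replace=True)
    diffs[b] = np.quantile(resample, probs) - q_emp

mean_diff = diffs.mean(axis=0)          # per-quantile sampling bias, this seed
Sigma_s_l = np.cov(diffs, rowvar=False) # sampling covariance matrix, this seed
```

Averaging mean_diff and Sigma_s_l over the L calibration seeds then yields the sampling bias vector and sampling covariance matrix of Table 2.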
6.3. Implementation: Determination of Weights
For the variance-covariance matrix Σ̂ in Table 2 and the matrix

Z^T = [ 1     1  1
       −1.28  0  0.67],

Theorem 1 provides the weights w^*_1 = [−0.18, 1.51, −0.33] for estimating the mean and w^*_2 = [−0.58, 0.20, 0.38] for estimating the standard deviation, to be used on the expert's judgments for the 10th, 50th, and 75th quantiles for yield distributions of hybrid seeds without historical data. Using these results, the following regime was used at DAS in 2014 for estimating the production yield distributions of each of more than 100
duction yield distributions of each of more than 100
hybrid varieties that did not have historical yield data.
First, the expert estimated the 10th, 50th, and 75th quantiles x̂ for the yield distribution of that hybrid seed. He
provided judgments for these quantiles using the same
mental model that he used during calibration, i.e., he
looked at the historical statistical distribution of the
production yield of the female parent on his computer,
considered the pollinating power and other biological
factors of the male parent, and then provided the quan-
tile judgments for the hybrid.
From this information, the debiased estimates were obtained as q̂ = x̂ − δ̂ by subtracting the biases δ̂_1 = 10.48, δ̂_2 = 0.94, and δ̂_3 = 3.03. Next, the mean and standard deviation were obtained by applying the weights above to the debiased estimates: µ̂_1 = w^{*T}_1 q̂ and µ̂_2 = w^{*T}_2 q̂.
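One full estimation for a hybrid can be sketched as follows (illustrative only; the judgment vector is hypothetical and the weight values are rounded):

```python
import numpy as np

x_hat = np.array([15.0, 70.0, 100.0])   # expert's 10th/50th/75th judgments
delta = np.array([10.48, 0.94, 3.03])   # estimated judgmental biases (Table 2)
w1 = np.array([-0.18, 1.51, -0.33])     # weights for the mean (rounded)
w2 = np.array([-0.58, 0.20, 0.38])      # weights for the standard deviation

q_hat = x_hat - delta                   # debiased quantile estimates
mu_hat = float(w1 @ q_hat)              # estimated mean yield
sd_hat = float(w2 @ q_hat)              # estimated yield standard deviation
```

The weight vectors satisfy the unbiasedness constraints: the mean weights sum to one and the standard-deviation weights sum to zero.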
Finally, these estimates were used in an optimization
framework that an in-house team was developing in
parallel to determine the optimal area of land to pro-
duce each hybrid. Since 2014, this approach has formed
the basis of decisions worth $800 million annually.
Equally important, since the approach leveraged the
expert’s experience and intuition, which he had been
using for a few years, the decision to implement the
approach at DAS was reached quickly.
6.4. Estimation of Monetary Benefits Using
Managerial Decisions
6.4.1. Status Quo Approach for Comparison. Before adopting our approach for estimating the mean and standard deviation of yield distributions, DAS used the following method to determine the area of land on which to grow each hybrid. The expert provided his estimate of the median x̂_2. The production manager used this point estimate to determine the area to use as Q_h = (D/x̂_2) f, where D was the demand for the seed, x̂_2 was the median value provided by the expert, and f was a risk adjustment factor of 1.2 or 1.3 based on a subjectively perceived high/low uncertainty in yield.
This framework provides a benchmark for quantifying
the benefit of using our approach.
6.4.2. Measures for Quantifying Benefits from Our Approach Over Status Quo. The benchmark status quo approach affected the firm's finances systematically in three ways. First, the profit margins of the seed did not influence the acreage decision at all even though they clearly should affect the decision. Second, only two values of the factor f could not capture the complete range of yield standard deviations that were present in the portfolio. After our approach was implemented to estimate the mean and standard deviation, the firm used them as inputs to a stochastic optimization problem for expected profit maximization, i.e., the firm determined Q^* = arg max_Q {−cQ + p E[h(Q, D)]}, where h is the revenue function, c is the per-acre cost, and p is the selling price per bag. This process
change was a direct consequence of the availability of
the standard deviation. One could then calculate the
optimal ratio f* = Q* ˆx2/D. At Dow, these ratios varied
from 1 to 1.4, suggesting that the use of only 1.2 or
1.3 was not optimal. The dollar capital investment in
a seed is equal to: Capital Investment = $4,500 × Area,
as the per acre cost of growing seed corn is approx-
imately $4,500 (the number is modified to preserve
confidentiality). A reduction in the area used for grow-
ing hybrids directly translates into a reduction in ini-
tial capital investment, with the savings being equal to
∑ 4,500 × (Qh − Q*), where the summation is over all
200 seeds. Over the complete portfolio, the cost savings
were significant, as we discuss shortly. This reduction
in the cost is the first measure for quantifying the ben-
efit of our approach.
Third, as we documented in earlier sections, the yield
expert's judgments for the median ˆx2 have judgmental
error. When using the status quo approach, this judg-
mental error leads to an error in the calculation of the
Unadjusted Area = demand/ˆx2. This error was further amplified by the use of the scaling factor f > 1 during the calculation of the adjusted area using Qh = (D/ˆx2)·f. For some hybrids, this error in the cal-
culation of the adjusted area can be very large and may result in a suboptimal decision with a substantial loss in profit. This loss in profit is the second
measure for quantifying the benefits of our approach.
The benefit on the two measures was quantified
using historical decisions made at DAS, as discussed next.
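The profit-maximization step that replaced the status quo rule can be sketched as a Monte Carlo grid search. Everything below is an illustration under stated assumptions: we take the revenue-driving quantity to be h(Q, D) = min(Y·Q, D), i.e., bags sold, with per-acre yield Y normally distributed; the price, cost, and demand figures are invented, not DAS data.

```python
import random

# Hedged sketch of Q* = argmax_Q { -c*Q + p*E[h(Q, D)] }, assuming (for
# illustration only) h(Q, D) = min(Y*Q, D) and Y ~ Normal(mu, sigma).
random.seed(7)
c, p, D = 4_500.0, 300.0, 60_000      # $/acre cost, $/bag price, bags demanded
mu, sigma = 50.0, 8.0                 # estimated yield mean / std (bags per acre)
yields = [max(0.0, random.gauss(mu, sigma)) for _ in range(5_000)]

def expected_profit(Q):
    """Monte Carlo estimate of -c*Q + p*E[min(Y*Q, D)]."""
    return -c * Q + p * sum(min(y * Q, D) for y in yields) / len(yields)

Q_star = max(range(1_000, 2_001, 10), key=expected_profit)  # grid search
f_star = Q_star * mu / D              # implied risk factor relative to D/median
print(Q_star, round(f_star, 2))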
6.4.3. Analysis for Quantifying the Benefits. Due to
confidentiality concerns, we do not provide here the
specific numerical values for all 200 seeds, and instead,
focus on the process used and the benefits observed.
Our approach was first used in 2014 to make the
production planning decision. For a number of seeds
involved in this decision, we documented the area used
for the annual crop plan in two ways: (a) status quo
approach and (b) using yield distributions estimated
using our approach. In approach (a), we determined the
area used as Qh = (D/ˆx2)·f at f = 1.2, 1.3 for the median estimates ˆx2 provided by the expert. In approach (b),
we estimated the mean and standard deviation of the
yield distribution from the expert’s quantile judgments
using our approach and then determined the optimal
area using a profit-maximization formulation Q* (discussed in Bansal and Nagarajan 2017), which needs
the specification of yield distributions to determine
the optimal area. The acreage decisions obtained from
our approach were implemented at DAS along with
a record of the decisions made using the status quo
approach that would have been made in the absence of
our approach.
The benefit of using our approach was estimated
using the sale data available at the end of the season.
Specifically, an in-house business analytics team com-
pared the cost of using the area that our approach
recommended with the cost of using the status quo
approach. These results showed that the annual pro-
duction investment decreased by 6%–7% using our
approach. Equally important, DAS did not see a drop
in the service levels of the seeds after the adoption of
this new approach for estimating yield distributions.
Subsequently, an analysis was performed on the
profit. For this analysis, the key item was that the
demand, yield, revenue, and profit for each hybrid had
been observed by the end of the year. For each hybrid,
these quantities provided the revenue that would have been earned if the area from the status quo approach had been used. From this revenue,
the cost was subtracted to obtain the profit. Compar-
ing the profit from this status quo approach with actual
profit suggested that our approach led to between 2%
and 3% improvement in profit. These documented ben-
efits have led to the continued use of our approach for
estimating yield distributions at the firm. We next dis-
cuss how this approach has been integrated into DAS’s
operations, but first, we discuss some nonmonetary
benefits accrued.
6.4.4. Nonmonetary Benefits. Several features of our
approach were perceived to be of managerial impor-
tance during the implementation. First, it provided
a unique quantification of the quality of the expert’s
judgments. This quantification was important for the
firm in understanding the benefit of identifying and
training experts in other seed businesses (soybean, cot-
ton, etc.) for which new varieties are being developed.
Specifically, at DAS, the yield distributions have a variance of µ2^2 ≈ 400 on average. For the estimates ˆµ1 shown in Table 2, the variance Var(ˆµ1) ≈ 18. Using Proposition 3, it follows that our approach extracts information from the expert's quantile judgments that is equivalent to the information provided by 400/Var(ˆµ1) = 400/18 ≈ 22
data points. We were told that this is equivalent to
approximately five to six years of test data at DAS. Sec-
ond, the approach provides a rational basis for estimating the variability in production yields, enabling the yield
expert to support his estimates for yield distributions
with scientific tools.
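The equivalent-sample-size logic above can be checked directly: an estimator of the mean with variance Var(ˆµ1) carries as much information as n iid observations when µ2^2/n = Var(ˆµ1), since the sample mean of n points has variance µ2^2/n. A one-line sketch with the rounded values reported in the text:

```python
# Equivalent sample size n = yield variance / variance of the mean estimate.
# Uses the rounded values reported in the text (mu2^2 ~ 400, Var(mu1_hat) ~ 18).

yield_variance = 400.0     # average yield variance mu2^2 at DAS (rounded)
var_mu1_hat = 18.0         # variance of the expert-based estimate of the mean
n_equivalent = yield_variance / var_mu1_hat
print(round(n_equivalent, 1))   # 22.2, i.e., roughly 22 data points
```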
6.5. Integration into Firm’s Operations
After the initial implementation in 2014, DAS recog-
nized the value of formal statistical modeling and
analysis for yield forecasting and production planning
decisions. The firm created a new business analytics
group, and two members of this group were tasked
with developing optimization protocols to inform
DAS’s operations. The team was composed of trained
statisticians with experience in biostatistics. This niche
skill set was considered necessary since the yield dis-
tributions and other properties of seeds are driven by
biology, and an understanding of plant biology as well
as statistics would enable the team to develop context-
informed models.
For the annual production planning decision, the
team implements the approach in the following man-
ner. The production planning decision is made every
year a few weeks before the advent of spring. In the
weeks preceding this decision, the team obtains a list
of hybrid seeds from the seed business manager that
are under consideration for being offered to the market.
The portfolio of hybrid seeds offered changes annu-
ally and this information is necessary for the team to
estimate yield distributions to support the production
planning decision. The team then sends this list to the
yield expert who is located at a different geographi-
cal location. Although this expert does travel back and forth to the team's location, DAS has emphasized
the development of computer-based tools that can be
accessed from anywhere. The yield expert obtains this
list and provides his judgments for yield distributions.
The team of statisticians processes these quantile judg-
ments using the process described earlier to deduce
means and standard deviations. A list of these val-
ues is then sent back to the business analytics team
that is responsible for making the production planning decision.
The business analytics team then uses an optimiza-
tion framework to determine the number of acres that
should be used to grow each hybrid seed. Yield distri-
butions constitute the major source of stochasticity in
this model. Under this uncertainty, the model seeks to
balance the trade-off between using a very large or a
very small area. The per acre tilling and land lease cost
is high and using a large area of land needs up-front
high investment and could lead to a surplus inventory
of hybrid seeds. The use of a small area of land requires
less up-front capital investment in the production, but
can lead to shortages. Estimating the yield distribu-
tions enables the firm to optimize this trade-off in a
mathematical fashion, in addition to providing quantitative decision support.
7. Discussion and Future Research
7.1. Summary of Approach
In changing environments, historical data do not exist
to provide probability distributions of various uncer-
tainties. In such environments, judgments are sought
from experts. But expert judgments are prone to judg-
mental errors. In this paper, we develop an analytical
approach for deducing the parameters of probability
distributions from a set of quantile judgments pro-
vided by an expert, while explicitly taking the expert’s
judgmental errors into account.
From a theory-building perspective, the optimiza-
tion approach proposed is consistent with moment
matching, has a unique analytically tractable solution,
and is amenable for comparative static analysis. The
approach also provides an analytical foundation for
results documented numerically in the prior literature.
From a practice perspective, a salient feature of the
approach is that an expert is no longer required to pro-
vide judgments for the median and specific symmetric
quantiles studied in the literature, but can provide his
judgments for any set of quantiles. The approach also
establishes a novel equivalence between an expert’s
quantile judgments and a sample size of randomly col-
lected data; this equivalence is useful for ranking and
comparing experts objectively. Finally, the modeling
framework explains a consistent numerical finding in
the prior literature that the weights for the mean and
the standard deviation add up to 1 and 0, respectively.
Equally important, it provides for a linear pooling
of quantile judgments from multiple experts, thereby
providing a practical toolkit for combining judgments
in practice.
From an implementation perspective, the approach
has several features that make it viable for an easy
adoption by firms. First, it uses judgments for any three
or more quantiles that an expert is comfortable pro-
viding. In a specific application at DAS, we used the
yield expert’s judgments for the 10th, 50th, and 75th
quantiles to deduce the mean and standard deviations
of a large number of yield uncertainties. The expert
chose to estimate these quantiles based on his experi-
ence with obtaining and using these quantiles in his
data analysis responsibilities. Second, the final out-
come of the approach is a set of weights that are used to
estimate means and standard deviations as weighted
linear functions of quantile judgments. The implemen-
tation of this procedure requires simple mathemati-
cal operations that can be performed in a spreadsheet
environment, and it has led to an expedited adoption
at DAS. Third, the weights are specific to the expert and
capture how good he is at providing estimates of vari-
ous quantiles. This explicit incorporation of an expert’s
judgmental errors is useful since we can then deter-
mine how the estimated parameters (and the decision
based on this estimated distribution) will vary as the
quality of the expert's judgments improves or deteriorates. More specifically, using Theorem 1, one
can analytically determine how the weights w change when the variance–covariance matrix Σ changes.
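The deployed spreadsheet-style calculation is simple enough to show in a few lines. The weights and quantile judgments below are hypothetical placeholders (a real expert's weights come from Theorem 1 and the calibration data), but they respect the sum-to-1 and sum-to-0 structure of the mean and standard-deviation weights.

```python
# Minimal sketch of the deployed calculation: mean and standard deviation as
# weighted sums of the (debiased) quantile judgments. Weights and judgments
# here are hypothetical, not the ones used at DAS.

w_mean = [0.25, 0.45, 0.30]    # weights for the mean (sum to 1)
w_sd   = [-0.55, -0.10, 0.65]  # weights for the standard deviation (sum to 0)
q_hat  = [41.0, 50.0, 56.0]    # judgments for the 10th, 50th, 75th quantiles

mu_hat    = sum(w * q for w, q in zip(w_mean, q_hat))
sigma_hat = sum(w * q for w, q in zip(w_sd, q_hat))
print(mu_hat, sigma_hat)
```

Because the computation is two dot products per seed, it runs equally well in a spreadsheet, which is what enabled the expedited adoption described above.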
7.2. Other Potential Approaches
In this section, we discuss three other potential ap-
proaches to obtain mean and standard deviation from
quantile judgments: parameter estimation through
entropy minimization, by minimizing sum of absolute
errors, and by nonparametric approaches.
In relative entropy methods, the entropy of the dis-
tribution obtained from iid randomly sampled data
relative to a benchmark distribution is computed to
evaluate the similarity of two distributions. In our prob-
lem, only three imperfect quantile judgments are avail-
able from the expert. Therefore the conventional theory
available for comparing distributions with iid randomly
sampled data using entropy-based measures is not
directly applicable. Motivated by the weighted linear
approach suggested by moment matching (in Propo-
sition 5), one possibility is to estimate moments from
quantile judgments as ˆµj = ∑i wji ˆqi, j = 1, 2, where the quantile judgments ˆqi correspond to probabilities pi. For the normal distribution, the cross-entropy or Kullback-Leibler (KL) distance of the estimates ˆµj from the true values µj is given as (Duchi 2007)

KL = log(ˆµ2/µ2) + (µ2^2 + (µ1 − ˆµ1)^2)/(2 ˆµ2^2) − 1/2.

For each debiased quantile judgment, ˆqi = qi + εi, where the term εi is the noise in the judgment, and therefore E[εi] = 0; then it follows that µ1 = E[∑i ˆqi w1i], and since E[∑i εi w1i] = 0, this implies that (i) ∑i w1i = 1 and (ii) ∑i zi w1i = 0. Similarly, since µ2 = E[∑i ˆqi w2i], it follows that (iii) ∑i w2i = 0 and (iv) ∑i zi w2i = 1.

Using the properties (i)–(iv), the KL distance can be expressed as

KL = log((µ2 + ∑i εi w2i)/µ2) + (µ2^2 + (∑i εi w1i)^2)/(2(µ2 + ∑i εi w2i)^2) − 1/2.  (15)
This KL distance is a random variable, which is a
function of the estimation errors εi; thus, a plausible
approach would be to select the weights wji that mini-
mize the expected value of the KL distance, E[KL]. The
limiting behavior of E[KL] provides a point of com-
parison between this approach and the one developed
earlier in this paper. As the expert becomes increasingly reliable, we have in the limit E[KL] → 0 as Var(εi) → 0, for any values of w1i and w2i that satisfy
conditions (i)–(iv). Since the value of KL is nonnega-
tive by construction, in the limit any such (w1i, w2i)
minimize E[KL]. Moreover, since in the optimization,
we can select 2m weights and we have only four con-
straints, anytime we elicit more than two quantiles, in
general, we may have an infinite number of optimal
weight combinations. The unique weights obtained by
our approach automatically satisfy conditions (i)–(iv),
hence they also optimize E[KL] in the limit.
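As a sanity check (ours, not from the paper), the closed-form normal–normal KL distance used in this section can be verified against direct numerical integration of p(x) log(p(x)/q(x)), where p is the true density and q the estimated one. The parameter values are arbitrary illustrations.

```python
import math

# Verify KL(N(mu1, mu2^2) || N(mu1_hat, mu2_hat^2)) closed form against
# numerical integration. All parameter values are illustrative.
mu1, mu2 = 50.0, 20.0            # true mean and standard deviation
mu1_hat, mu2_hat = 52.0, 23.0    # estimates contaminated by judgmental noise

def pdf(x, m, s):
    """Normal density with mean m and standard deviation s."""
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

closed = (math.log(mu2_hat / mu2)
          + (mu2**2 + (mu1 - mu1_hat)**2) / (2 * mu2_hat**2) - 0.5)

# Riemann-sum approximation of the KL integral over a wide range.
lo, hi, dx = mu1 - 12 * mu2, mu1 + 12 * mu2, 0.01
numeric = sum(pdf(lo + i * dx, mu1, mu2)
              * math.log(pdf(lo + i * dx, mu1, mu2)
                         / pdf(lo + i * dx, mu1_hat, mu2_hat)) * dx
              for i in range(int((hi - lo) / dx)))
print(closed, numeric)
```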
With respect to the general case of this approach
(minimizing (15)), we make three observations:
1. Notice from Equation (15) that E[KL] is a nontrivial function of the entire error covariance matrix Σ, and
obtaining the E[KL]-minimizing weights will require
numerical optimization.
2. The above definition of E[KL] requires knowledge
of µ2, which we do not have.
3. The uniqueness of the weights is not guaranteed.
Comparing this estimation approach with the one
proposed and implemented at Dow, we can appreci-
ate an important difference. Both approaches would
require us to estimate the covariance matrix Σ from the calibration data set. But the E[KL]-minimization
approach also requires knowledge of the parameter µ2,
which Dow did not have. The estimation approach
developed in Sections 3–5 does not require this knowledge. These challenges associated with the E[KL] mini-
mization approach will need to be addressed by future
research before the approach can be used in practice.
The problem of estimating distribution parameters
by minimizing the sum of absolute errors (instead of
the sum of squared errors) is stated as min_{wik} ∑j |∑i wik ˆqji − ˆµjk|, where ˆµjk is the mean (k = 1) and standard deviation (k = 2) of the calibration distribution j.
The optimal weights for this model are not obtainable
in closed form, rather this problem must be solved
numerically using a linear programming formulation,
and it does not always have a unique solution (Harter 1977,
Bassett and Koenker 1978, Chen et al. 2008). Further-
more, there is no direct relationship between the sum
of squared errors and sum of absolute errors for the
data. Due to these two issues, the equivalent sample
size for an expert, akin to the result in Proposition 3,
cannot be determined.
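The non-uniqueness of least-absolute-error solutions is easy to see in a toy case (our illustration, unrelated to the calibration data): when fitting a constant to four points, every value between the two middle observations attains the same minimal sum of absolute errors, whereas least squares has a unique minimizer at the mean.

```python
# Toy illustration of LAD non-uniqueness: fit a constant c to four points.
# Every c in [2, 4] minimizes the sum of absolute errors (value 6.0),
# while the sum of squared errors has the unique minimizer c = mean = 3.

data = [1.0, 2.0, 4.0, 5.0]

def sum_abs_err(c):
    return sum(abs(c - y) for y in data)

def sum_sq_err(c):
    return sum((c - y) ** 2 for y in data)

print(sum_abs_err(2.0), sum_abs_err(3.0), sum_abs_err(4.0))  # 6.0 6.0 6.0
print(sum_sq_err(2.0), sum_sq_err(3.0), sum_sq_err(4.0))     # 14.0 10.0 14.0
```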
Nonparametric methods explore various functional
forms to fit data, while minimizing the squared dis-
tances between the fitted and true values. The Spline
fitting approach fits one or more splines of various
degrees to the data. The recommended functional form
for the predictive model tends to be sensitive to the
data (Härdle et al. 2012) and, in our context, may
change with the inclusion/exclusion of even one prob-
ability distribution in the calibration set. Similarly,
the additive kernel model can be sensitive to tun-
ing parameters, which need to be selected subjectively
(Härdle et al. 2012). This sensitivity and subjectivity
in model recommendation implies, in our context, that
the nonparametric model for new seeds may have to be
modified for every season, which could be undesirable
when a firm seeks to develop a stable and transpar-
ent model for repetitive use. Finally, a direct least
squares analysis provides a strong basis for using the
linear functional form used in the paper. Proposition 5
shows that the conventional least squares formulation
to deduce mean and standard deviation from quantile
judgments for a location-scale distribution results in the
estimation of mean and standard deviation as linear
combinations of the quantile judgments. Our approach
exploits this result and develops it further in the form
of tractable and easy-to-use results discussed in vari-
ous propositions.
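The linear structure that Proposition 5 describes can be reconstructed with ordinary least squares (a sketch under our assumptions; the paper's weights additionally adjust for judgmental-error covariances). For a location-scale family, regressing quantile judgments on the standard scores z_i gives mean weights that sum to 1 and standard-deviation weights that sum to 0, matching the property noted earlier.

```python
from statistics import NormalDist

# OLS fit of q_i ~ mu + z_i * sigma for the 10th/50th/75th quantiles used at
# DAS. The rows of (X'X)^{-1} X' are the weights applied to the judgments.
probs = [0.10, 0.50, 0.75]
z = [NormalDist().inv_cdf(p) for p in probs]    # standard-normal scores

m, sz, szz = len(z), sum(z), sum(v * v for v in z)
det = m * szz - sz * sz                          # determinant of X'X
w_mean = [(szz - sz * v) / det for v in z]       # weights producing mu_hat
w_sd   = [(m * v - sz) / det for v in z]         # weights producing sigma_hat

print([round(w, 3) for w in w_mean])
print([round(w, 3) for w in w_sd])
```

Algebraically, ∑i w1i = 1, ∑i zi w1i = 0, ∑i w2i = 0, and ∑i zi w2i = 1 follow from (X'X)^{-1}X'X = I, which is exactly the set of conditions (i)–(iv) used in the E[KL] discussion above.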
7.3. Future Research
A scant but important stream of literature has quanti-
fied the benefit of a more reliable estimation of oper-
ational uncertainties. Akcay et al. (2011), in collabora-
tion with SmartOps Corporation, show that using the
demand information computed from 20 data points
over 10 data points for inventory decision making
reduces the operating cost by 10% (Tables 2—4, p. 307).
Our quantification provides a new addition to this lit-
erature, especially when the information for an uncer-
tainty is obtained from an expert. In the future, this
quantification should be sharpened using Monte Carlo
simulation studies for the seed industry as well as other
industries. Future research should also explore tighter
connections between the yield of hybrid seed produc-
tion and the genomes of both parents crossed. This
industry is making significant investments in genetic
research, and a large amount of genomic information
for some corn varieties is becoming available. Unfortu-
nately, currently, this task is daunting as corn has one
of the most complex plant genomes with some mapped
varieties showing sequences of more than two billion base pairs (Dolgin 2009); this is in stark contrast with the
sparsity of the yield data available. Finally, an important requirement for using the approach developed here in practice is that we calibrate the experts by compar-
ing their quantile judgments with the true values for
some distributions that are specific to the context, and
for which historical data is available at the firm. How-
ever, this data may not be available in all businesses.
Future research should explore whether it is pos-
sible to calibrate experts on almanac events, and then
use this information for estimating probability distri-
butions specific to the business.
Acknowledgments
The authors gratefully acknowledge the suggestions made by
three anonymous reviewers, associate editor, and area edi-
tor Andres Weintraub, which resulted in a much improved
paper. The authors thank Dow AgroSciences, especially
Sue Gentry and J. D. Williams, for their support in this
collaboration. The first version of the paper was developed
when the first author was visiting the Department of Supply
Chain and Operations at University of Minnesota. The Lab-
oratory for Economics, Management and Auctions (LEMA)
at Penn State provided laboratory settings to test the the-
ory developed in the paper before its field deployment. The
authors also thank Mike Blanco, Marilyn Blanco, Murali
Haran, and Dennis Lin at Penn State for their help during a laboratory study.
References
Akcay A, Biller B, Tayur S (2011) Improved inventory targets in the
presence of limited historical demand data. Manufacturing Ser-
vice Oper. Management 13(3):297–309.
Ayvaci MUS, Ahsen ME, Raghunathan S, Gharibi Z (2017) Timing
the use of breast cancer risk information in biopsy decision mak-
ing. Production Oper. Management. Forthcoming.
Baker E, Solak S (2014) Management of energy technology for sus-
tainability: How to fund energy technology research and devel-
opment. Production Oper. Management 23(3):348–365.
Bansal S, Nagarajan M (2017) Product portfolio management with
production flexibility in agribusiness. Oper. Res. 65(4):914–930.
Bassett G Jr, Koenker R (1978) Asymptotic theory of least absolute
error regression. J. Amer. Statist. Assoc. 73(363):618–622.
Bates JM, Granger CWJ (1969) The combination of forecasts. Oper.
Res. Quart. 20(4):451–468.
Casella G, Berger RL (2002) Statistical Inference, 2nd ed. (Duxbury
Press, Pacific Grove, CA).
Chen K, Ying Z, Zhang H, Zhao L (2008) Analysis of least absolute
deviation. Biometrika 95(1):107–122.
Comhaire P, Papier F (2015) Syngenta uses a cover optimizer to deter-
mine production volumes for its European seed supply chain.
Interfaces 45(6):501–513.
Dolgin E (2009) Maize genome mapped. Nature News 1098.
Duchi J (2007) Derivations for Linear Algebra and Optimization. Working
paper, University of California, Berkeley, Berkeley, CA.
Garthwaite PH, Dickey JM (1985) Double- and single-bisection meth-
ods for subjective probability assessment in a location-scale fam-
ily. J. Econometrics 29(1–2):149–163.
Granger CWJ (1980) Forecasting in Business and Economics (Academic Press, New York).
Härdle WK, Müller M, Sperlich S, Werwatz A (2012) Nonparametric
and Semiparametric Models (Springer, New York).
Harter HL (1977) Nonuniqueness of least absolute values regression.
Comm. Statist.-Theory and Methods 6(9):829–838.
Johnson D (1998) The robustness of mean and variance approxima-
tions in risk analysis. J. Oper. Res. Soc. 49(3):253–262.
Johnson NL, Kotz S, Balakrishnan N (1994) Continuous Univariate Dis-
tributions, Vol. 1, Wiley Series in Probability and Mathematical
Statistics: Applied Probability and Statistics (Wiley, New York).
Keefer DL, Bodily SE (1983) Three-point approximations for contin-
uous random variables. Management Sci. 29(5):595–609.
Kelton WD, Law AM (2006) Simulation Modeling and Analysis, 4th ed.
(McGraw Hill, New York).
Koehler DJ, Brenner L, Griffin D (2002) The calibration of expert
judgment: Heuristics and biases beyond the laboratory. Heuris-
tics and Biases: The Psychology of Intuitive Judgment (Cambridge
University Press, New York).
Lau HS, Lau AHL (1998) An improved PERT-type formula for stan-
dard deviation. IIE Trans. 30(3):273–275.
Lau HS, Lau AHL, Ho CJ (1998) Improved moment-estimation for-
mulas using more than three subjective fractiles. Management
Sci. 44(3):346–351.
Lau HS, Lau AHL, Kottas JF (1999) Using Tocher’s curve to con-
vert subjective quantile-estimates into a probability distribution
function. IIE Trans. 31(3):245–254.
Lau AHL, Lau HS, Zhang Y (1996) A simple and logical alternative
for making PERT time estimates. IIE Trans. 28(3):183–192.
Lindley DV (1987) Using expert advice on a skew judgmental distri-
bution. Oper. Res. 35(5):716–721.
O’Hagan A (1998) Eliciting expert beliefs in substantial practical
applications. J. Roy. Statist. Soc. Ser. D (The Statistician) 47(1):21–35.
O’Hagan A (2006) Uncertain Judgements: Eliciting Experts’ Probabilities,
Vol. 35 (John Wiley & Sons, Chichester, UK).
O’Hagan A, Oakley JE (2004) Probability is perfect, but we can’t elicit
it perfectly. Reliability Engrg. System Safety 85(1–3):239–248.
Pearson ES, Tukey JW (1965) Approximate means and standard devi-
ations based on distances between percentage points of fre-
quency curves. Biometrika 52(3–4):533–546.
Perry C, Greig ID (1975) Estimating the mean and variance of subjec-
tive distributions in PERT and decision analysis. Management Sci. 21(12):1477–1480.
Ravinder HV, Kleinmuntz DN, Dyer JS (1988) The reliability of sub-
jective probabilities obtained through decomposition. Manage-
ment Sci. 34(2):186–199.
Stevens JW, O’Hagan A (2002) Incorporation of genuine prior infor-
mation in cost-effectiveness analysis of clinical trial data. Inter-
nat. J. Tech. Assessment in Health Care 18(04):782–790.
Wallsten TS, Nataf C, Shlomi Y, Tomlinson T (2013) Forecasting
values of quantitative variables. Paper presented at SPUDM24,
Barcelona, Spain, August 20, 2013.
Saurabh Bansal is an assistant professor of supply chain
management and information systems, and a faculty member
of operations research at the Pennsylvania State University.
His research focuses on developing mathematical models,
algorithms, and protocols to estimate business risks and opti-
mize business operations under risks.
Genaro J. Gutierrez is an associate professor of informa-
tion risk and operations management at McCombs School of
Business, The University of Texas at Austin, where he teaches
operations management and supply chain analytics. He is
the Director of the Executive MBA Program that McCombs
School offers in Mexico City. His current research interests
include, in general, the incorporation of data analytics in
the supply chain management domain. Specific research
projects include: combination of statistical and judgmental
approaches for estimating demand, data-driven models to
optimize the supply chain for digital advertising, procure-
ment of traded commodities, and reliability models for fore-
casting and procurement of high cost spare parts. Recent
publications of Professor Gutierrez have appeared in Manage-
ment Science, Operations Research, IIE Transactions, and Euro-
pean Journal of Operations Research.
John R. Keiser is the global technical expert for corn seed
production research, and is responsible for providing guid-
ance and coordination between all corn production research
programs globally, as well as technical oversight for the NA
Production Research program. He earned a PhD in Crop Pro-
duction and Physiology from Iowa State University.
... For such a continuous quantity of interest, experts provide not only a point estimate (or best guess) but also a prediction interval (or quantile estimates) to express their degrees of confidence (Cooke 1991, Clemen andWinkler 2007). Quantile judgments are popular in many fields such as meteorology, risk assessment (Cooke 1991), medicine, finance (Jain et al. 2013), and operations management (Russo and Schoemaker 1992, Tong and Feiler 2016, Bansal et al. 2017). ...
... With exact knowledge about the theoretical distributions, it is feasible to disentangle sampling errors and judgmental biases in a post hoc exploratory analysis. (Even if the theoretical distributions are unavailable, we can still resort to techniques such as bootstrapping for such decomposition (Bansal et al. 2017). Having known the distributions makes the analysis simpler.) Figure 1 illustrates the sampling and judgmental processes of our experiments. ...
The present study aims to investigate the quality of quantile judgments on a quantity of interest that follows the lognormal distribution, which is skewed and bounded from below with a long right tail. We conduct controlled experiments in which subjects predict the losses from a future typhoon based on losses from past typhoons. Our experiments find underconfidence of the 50% prediction intervals, which is primarily driven by overestimation of the 75th percentiles. We further perform exploratory analyses to disentangle sampling errors and judgmental biases in the overall miscalibration. Finally, we show that the correlations of log-transformed judgments between subjects are smaller than is justified by the information overlapping structure. It leads to overconfident aggregate predictions using the Bayes rule if we treat the low correlations as an indicator for independent information.
Input model bias is the bias found in the output performance measures of a simulation model caused by estimating the input distributions/processes used to drive it. When the simulation response is a nonlinear function of its inputs, as is usually the case when simulating complex systems, input modelling bias is amongst the errors that arise. In this paper, we introduce a method that recalibrates the input parameters of parametric input models to reduce the bias in the simulation output. The proposed method is based on sequential quadratic programming with a closed form analytical solution at each step. An algorithm with guidance on how to practically implement the method is presented. The method is shown to be successful in reducing input modelling bias and the total mean squared error caused by input modelling error. Summary of Contribution: This paper furthers the understanding and treatment of input modelling error in computer simulation. We provide a novel method for reducing input model bias by recalibrating the input parameters used to drive a simulation model. A sequential quadratic programming approach with an explicit solution is provided to recalibrate the input parameters. The method is therefore computationally inexpensive. An algorithm outlining our proposed procedure is provided within the paper. An evaluation of the method shows the method successfully reduces input model bias and may also reduce the mean squared error caused by input modelling in the output of a simulation model.
Full-text available
Many times, expert judgment is used in a two-step procedure: (i) obtain judgments for calibration quantities for which true/empirical values are available to calibrate the expert's judgmental errors, and then (ii) obtain judgments for focal quantities (quantities of interest but without historical data), adjust them using the calibration information, and then use the adjusted judgments to assist decision making. In such situations, should a decision analyst share the calibration information with the expert before the expert provides judgments for focal quantities? We answer this question using laboratory experiments. We specifically investigate the role of task complexity, numeracy, and self-awareness on the use of calibration information by the expert. We find that: (a) Expert judgment tends to be of a worse quality as task complexity increases, (b) Calibration feedback does not improve managerial judgments in a less complex task, but it does reduce the bias and (especially) noise in a more complex task, (c) Numeracy does not impact the use of calibration information by experts regardless of task complexity, and (d) Individuals are able to directionally discern whether they are doing well in less complex tasks, but not in more complex tasks. As such these results suggest that when faced with complex tasks experts do benefit from receiving calibration information. However, this benefit comes at the expense of rendering this information inapplicable for the decision analyst to adjust the focal judgments provided by an expert. In contrast, experts do not benefit from receiving calibration information for simple tasks, allowing decision analysts to continue to use the calibration information to adjust managerial judgments. 2
Full-text available
In this chapter we first introduce the commercial seed business in the continental USA. Subsequently, we discuss the problem formulation and solution for a firm that offers a portfolio of hybrid seed corn in the market and seeks to optimize the acreage for each hybrid. We also discuss implementation of the solution developed at the firm. As such, this chapter highlights contextual factors that make the production planning problems in the agribusiness unique and in need for customized solutions.
Advances in machine learning methods and the availability of new data sources show promise for improving prediction of operational risk. Maritime transportation is the backbone of global supply chains and maritime accidents can lead to costly disruptions. We describe a case study performed for the United States Coast Guard (USCG) to develop a prototype risk prediction system to provide early alerts of elevated risk levels to vessel traffic managers and operators in the Lower Mississippi River, the second largest port of entry in the United States. Integrating incident and accident data from the USCG with environmental and traffic data sources, we tested existing machine learning algorithms in their predictive ability. We found poor accident prediction accuracy in cross-validation using the traditional measures of precision and sensitivity. In this specific operational context, however, such single-class accuracy metrics can be misleading. We define action precision and action sensitivity metrics that measure the accuracy of predictions in engendering the correct behavioral response (actions) among vessel operators, rather than getting the specific event classification correct. We use these operationally appropriate measures for maritime risk prediction to choose an algorithm for our prototype system. While the traditional metrics indicated that none of the algorithms would perform sufficiently well to use in the early warning system, the modified metrics show that the top performing algorithm will perform well in this operational context.
Agribusiness firms, with an eye toward increasing population and evolving weather patterns, are investing heavily in developing new varieties of staple crops that can provide higher yields and are robust to weather fluctuations. In this paper, we describe a multiyear effort at Dow AgroSciences (now Corteva) to manage its seed corn portfolio, which includes several hundred seeds and is valued at more than $1 billion. The effort had two mutually interacting parts: (1) developing a decision analytic theory to estimate the production yield distributions for new seed varieties from discrete quantile judgments provided by plant biology experts, and (2) developing an optimization protocol to determine Dow's annual production plan for the seed portfolio, with the flexibility of backup production in South America, under production yield uncertainty. The first part, owned by the research and development (R&D) function, provides yield probability distributions as inputs to the optimization protocol of the second part, which the production function owns. The results of the optimization problem, which include information about the attractiveness of specific future varieties, are returned to R&D. Both parts incorporate contextual details specific to this industry. In this paper, we show the optimality of linear policies for both problems. Additionally, the linear policies have many attractive structural properties that continue to hold for the more complex instances of the problems. A major strength of the theory we developed is that it is implementable in a transparent fashion, providing managers with a user-friendly real-time decision support tool. The implementation of the theory has led to significant monetary and managerial benefits at Dow.
Literature supports that buyers can influence suppliers to adopt sustainable practices. Much of this literature, however, is descriptive and takes only the perspective of a buyer trying to diffuse sustainability into its supply base. Alternatively, this paper augments the literature through the development of a prescriptive mathematical model that maximizes a supplier's expected revenue from making a bid to a large buyer, inclusive of a sustainability criterion. Specifically, the model uses decision theory to determine an optimal level of sustainability to integrate into a bid to balance economic versus sustainable trade‐offs based on perceived buyer preference. To illustrate the model's utility, it is then applied to a rich scenario, using data from an industrial supplier bidding for a contract to distribute a fuel additive in Brazil by balancing the trade‐off between decreased carbon emissions and increased distribution cost. A sensitivity analysis is then performed to illustrate how different forms of the buyer's objective function would influence the propagation of sustainability into optimal supplier bids. The research thereby adds to our understanding of sustainable sourcing by prescriptively modeling both buyer and supplier perspectives, when analyzing how buyer preferences can drive supplier behavior.
In this paper we consider the problem in which a firm offers a portfolio of products (agricultural seeds) to multiple customer segments comprising farmers, under aggressive fill rate constraints, where some, but not all, customers will accept a substitute for their preferred choice. This business situation is not adequately represented by traditional inventory management models, in which a firm initiates a substitution based on its own monetary considerations. By exploiting some recent results on polyhedral expectations, we develop a decomposition-based approach to determine optimal inventory levels for the firm's seed portfolio under aggressive fill rate targets. The approach provides an exact solution that is implementable in manager-friendly environments and permits what-if analysis for real-time decision support. Subsequently, we extend the technical development to establish: (i) a simple computable bound on the value of substitution, (ii) a procedure for determining implied penalty costs for substitutable seeds, and (iii) comparative statics results for the seed portfolio. We also discuss the implementation of the technical development at a Fortune 100 firm, which has resulted in significant monetary savings. Finally, we provide geography- and climate-specific managerial insights for managing seed substitution by end-users.
A common practice in research and development (R&D) program management is to create multi-functional teams that pursue new products. Typically, the teams pursue multiple projects and periodically need to select a subset of products to develop further, based on their assessments of the risk and potential returns of individual products. The individual forecasts by team members for the potential of projects usually differ, and there is often a need to aggregate these point judgments into measures of both the potential and the risk of each project. Furthermore, team members may bring complementary or substitutive perspectives to the teams, and some experts may be better at estimating project potential than others. The existing literature does not provide a systematic approach to aggregate multiple point forecasts into actionable signals while explicitly accounting for these expert-specific factors. In this paper we develop a new characterization of multiple point forecasts provided by experts, and use it in an optimization framework to deduce actionable signals, including the mean, standard deviation, or a combination of the two for the underlying probability distributions. This framework consists of three steps: (i) calibrate experts' point forecasts using historical data to determine which quantile they provide on average when asked for forecasts, (ii) quantify the precision in the experts' forecasts around their average quantile, and (iii) use this calibration information in an optimization framework to deduce the signals of interest. We also show that precision and accuracy in expert judgments are complementary in terms of their informativeness. Finally, we discuss the implementation of the development and the realized benefits at a large government project in the agribusiness domain.
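As a rough illustration of step (i) of the framework described above, the average quantile an expert provides can be estimated as the fraction of historical realizations that fell at or below the expert's forecasts. This is a hedged sketch, not code from the paper; the function name and the historical data are hypothetical.

```python
def calibrate_average_quantile(forecasts, realizations):
    """Step (i), sketched: estimate which quantile an expert's point
    forecasts correspond to on average, from paired historical
    forecast/realization data."""
    if len(forecasts) != len(realizations) or not forecasts:
        raise ValueError("need non-empty paired history")
    hits = sum(1 for f, r in zip(forecasts, realizations) if r <= f)
    return hits / len(forecasts)

# Made-up history: realizations fell at or below the forecasts 2 times
# out of 10, so this expert provides roughly the 0.2 quantile on average.
q = calibrate_average_quantile(
    [10, 12, 9, 11, 8, 10, 13, 9, 10, 11],
    [11, 13, 10, 12, 9, 8, 12, 11, 12, 13],
)
```

Steps (ii) and (iii) would then quantify the spread of the per-event quantile levels around this average and feed both numbers into the optimization; those steps depend on distributional assumptions the abstract does not specify.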
The acquisition of production flexibility is a well-documented strategy pursued by many firms to counteract certain operational constraints. However, these flexibilities can increase the complexity of a production system, and the difficulties in managing increased complexity may hinder exploiting the full benefit of flexibility. In this paper we consider one such flexibility paradox at an agribusiness firm for an annual $800 million production decision: the firm produces a number of products (hybrid seeds) using limited inventories of several raw materials (parent seeds) and a production process that is subject to random variations. To handle the raw material availability constraint and to partially mitigate the supply risk, the firm invests in a costly second production in South America that can be used in case the yield of the first production in North America is low. We solve this joint problem of raw material allocation and sequential production by reformulating it as a tractable simultaneous optimization problem. This tractable reformulation provides an exact solution in practical time for large assortments of products. We also establish that when profit margins are sufficiently high, sequential production costs less on average than single production. The solution developed is in use at the firm and has led to an estimated annual increase in profit of 2-3%.
The European seed business of Syngenta relies on its supply chain to deliver seed products to the different European markets, which generated more than 1.2 billion USD in revenues in 2013. The seed supply chain is, however, exposed to a high level of uncertainty – from the demand side as well as from the supply side. Determining optimal production volumes in a highly volatile environment, more than one year before the sales period, is not only a complex business decision but also one that strongly affects the company's profitability through lost sales and unsold supply. To better handle production volume planning, Syngenta has developed a planning tool that determines optimal production volumes by taking the different levels of uncertainty into account. We report on this tool, the impact it has achieved, its integration into the planning process at Syngenta, and its technical design. In 2013, its first year of application, the production optimization tool avoided approximately 1.5 million USD in supply discards and led Syngenta to revise the way it handles uncertainty in its supply chain planning.
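In its simplest single-product form, the lost-sales-versus-discards tradeoff described above is a newsvendor problem. The sketch below is a textbook illustration under an assumed normal demand, not Syngenta's actual tool; all numbers are hypothetical.

```python
from statistics import NormalDist

def newsvendor_volume(mu, sigma, underage_cost, overage_cost):
    """Classic newsvendor quantity: produce up to the demand quantile at
    the critical fractile cu / (cu + co), assuming demand ~ Normal(mu, sigma)."""
    fractile = underage_cost / (underage_cost + overage_cost)
    return NormalDist(mu, sigma).inv_cdf(fractile)

# Example: demand ~ N(1000, 200); margin lost per unit short cu = 5;
# discard cost per unit of unsold supply co = 1.
q = newsvendor_volume(1000.0, 200.0, 5.0, 1.0)  # roughly 1193 units
```

The high underage-to-overage cost ratio pushes the optimal volume well above mean demand, which is why unsold supply and discards are an expected (and priced-in) outcome in such plans.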
This paper presents a general framework based on copulas for modeling dependent multivariate uncertainties through the use of a decision tree. The proposed dependent decision tree model allows multiple dependent uncertainties with arbitrary marginal distributions to be represented in a decision tree with a sequence of conditional probability distributions. This general framework could be naturally applied in decision analysis and real options valuations, as well as in more general applications of dependent probability trees. While this approach to modeling dependencies can be based on several popular copula families as we illustrate, we focus on the use of the normal copula and present an efficient computational method for multivariate decision and risk analysis that can be standardized for convenient application.
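A minimal sketch of the normal-copula idea in Python (illustrative only; the paper's own computational method is more general): draw correlated standard normals, map them to uniforms with the normal CDF, then push the uniforms through each marginal's inverse CDF. Function names and marginals here are assumptions for the example.

```python
import random
from math import log, sqrt
from statistics import NormalDist

def normal_copula_pair(rho, inv_cdf1, inv_cdf2, rng):
    """One draw of a dependent pair (x1, x2) with arbitrary marginals,
    coupled through a bivariate normal copula with correlation rho."""
    nd = NormalDist()
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
    u1, u2 = nd.cdf(z1), nd.cdf(z2)    # correlated uniforms on (0, 1)
    return inv_cdf1(u1), inv_cdf2(u2)  # apply the marginal inverse CDFs

# Example: Exp(1) and Uniform(0, 10) marginals, copula correlation 0.8.
rng = random.Random(7)
exp_inv = lambda u: -log(1.0 - u)   # inverse CDF of Exp(1)
uni_inv = lambda u: 10.0 * u        # inverse CDF of U(0, 10)
pairs = [normal_copula_pair(0.8, exp_inv, uni_inv, rng) for _ in range(5000)]
```

Because the dependence lives entirely in the copula, the marginals are preserved exactly, which is what allows the conditional probabilities in a dependent decision tree to be built separately from the marginal assessments.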
One standard approach for estimating a subjective distribution is to elicit subjective quantiles from a human expert. However, most decision-making models require a random variable's moments and/or distribution function instead of its quantiles. In the literature little attention has been given to the problem of converting a given set of subjective quantiles into moments and/or a distribution function. We show that this conversion problem is far from trivial, and that the most commonly used conversion procedure often produces large errors. An alternative procedure using “Tocher's curve” is proposed, and its performance is evaluated with a wide variety of test distributions. The method is shown to be more accurate than a commonly used procedure.
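As a concrete (and deliberately simple) example of the conversion problem, if one assumes the underlying distribution is normal, the mean and standard deviation can be recovered from elicited quantiles by least squares on the probit scale. This sketch is not the Tocher's-curve procedure the paper proposes; it only illustrates the quantiles-to-moments step.

```python
from statistics import NormalDist

def fit_normal_from_quantiles(probs, quantiles):
    """Least-squares fit of (mu, sigma) to subjective quantiles, using the
    model q_p = mu + sigma * z_p with z_p = Phi^{-1}(p). Both estimates come
    out as linear combinations of the quantile judgments."""
    z = [NormalDist().inv_cdf(p) for p in probs]
    n = len(z)
    zbar, qbar = sum(z) / n, sum(quantiles) / n
    sigma = (sum((zi - zbar) * (qi - qbar) for zi, qi in zip(z, quantiles))
             / sum((zi - zbar) ** 2 for zi in z))
    mu = qbar - sigma * zbar
    return mu, sigma

# Example: an expert's 10th, 50th, and 90th percentile judgments.
mu, sigma = fit_normal_from_quantiles([0.10, 0.50, 0.90],
                                      [80.0, 100.0, 120.0])
# mu = 100.0; sigma ≈ 15.6 (the 0.9 z-score is about 1.2816)
```

The errors the paper documents arise precisely when the assumed parametric family does not match the expert's implicit distribution, which is what motivates the more flexible curve-fitting procedure.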
Available clinical evidence is inconclusive on whether radiologists should use the patient risk profile information when interpreting mammograms. On the one hand, risk profile information is informative and can improve radiologists' performance, but on the other hand, it may impair their judgment by introducing biases in mammography interpretation. Therefore, it is important to assess whether and when profile information use translates into improved outcomes. We model the use of profile information in mammography, using a decision theoretic approach and explore the value of profile information using three process design choices: mammography only, unbiased, and biased reading. We estimate the parameters of our model using clinical data and find that using profile information along with the mammography information can achieve a better performance than not using the profile information. However, the better performance is contingent on the weight assigned to the profile information as well as the extent of bias due to profile information. Translating our findings into clinical practice would require properly designed experiments aiming to quantify the effect of the timing and the use of profile information on performance while accounting for radiologist and patient characteristics. When conducting an experiment is not feasible, a uniform operational sequence for interpreting mammograms and related guidelines may be a useful starting point to improve the quality of mammography operations.