Pre-print of: Hemming, V., Burgman, M.A., Hanea, A.M., McBride, M.F. & Wintle, B.C.
(2018) A practical guide to structured expert elicitation using the IDEA protocol. Methods in
Ecology and Evolution, 9, 169-181.
Structured Expert Elicitation using the IDEA Protocol

V. Hemming (a), M.A. Burgman (b), A.M. Hanea (a), M.F. McBride (c), B.C. Wintle (a,d)

(a) Centre of Excellence for Biosecurity Risk Analysis, The University of Melbourne, Australia
(b) Centre for Environmental Policy, Imperial College, London, UK
(c) Harvard Forest, Harvard University, Petersham, Massachusetts, USA
(d) Centre for the Study of Existential Risk, University of Cambridge, David Attenborough Building, Pembroke Street, Cambridge, UK

Contact: Victoria Hemming, Centre of Excellence for Biosecurity Risk Analysis, University of Melbourne, Victoria, Australia 3010
Email: hemmingv@student.unimelb.edu.au
Phone: +61431175515
Word count 8070 (excludes tables, figures, title page and abstract).
Summary

1. Expert judgment informs a variety of important applications in conservation and natural resource management, including threatened species management, environmental impact assessment, and structured decision-making. However, expert judgments can be prone to contextual biases. Structured elicitation protocols mitigate these biases and improve the accuracy and transparency of the resulting judgments. Despite this, the elicitation of expert judgment within conservation and natural resource management remains largely informal. We suggest this may be attributed to financial and practical constraints that are not addressed by many existing structured elicitation protocols.

2. In this paper, we advocate that structured elicitation protocols must be adopted when expert judgments are used to inform science. To motivate wider adoption of structured elicitation protocols, we outline the IDEA protocol. The protocol improves the accuracy of expert judgments and includes several key steps that may be familiar to many conservation researchers, such as the four-step elicitation and a modified Delphi procedure ("Investigate", "Discuss", "Estimate" and "Aggregate"). It can also incorporate remote elicitation, making structured expert judgment accessible on a modest budget.

3. The IDEA protocol has recently been outlined in the scientific literature; however, a detailed description has been missing. This paper fills that important gap by clearly outlining each of the steps required to prepare for and undertake an elicitation.

4. Whilst this paper focuses on the need for the IDEA protocol within conservation and natural resource management, the protocol (and the advice contained in this paper) is applicable to a broad range of scientific domains, as evidenced by its application to biosecurity, engineering, and political forecasting. By clearly outlining the IDEA protocol, we hope that structured protocols will be more widely understood and adopted, resulting in improved judgments and increased transparency when expert judgment is required.

Keywords: Structured expert judgment, quantitative estimates, forecasting, IDEA protocol, expert elicitation, Delphi, four-step elicitation.
1 Introduction
Conservation and natural resource management often involve decisions for which data are absent or insufficient and consequences are potentially severe. In such circumstances the elicitation of expert judgment has become routine and informs a variety of important applications, including forecasting biosecurity risks (Wittmann et al. 2015), threatened species management (Adams-Hosking et al. 2015), priority threat management (Chadés et al. 2015; Firn et al. 2015), predictive models (Krueger et al. 2012), environmental impact assessment (Knol et al. 2010) and inputs into structured decision-making (Gregory & Keeney 2017). Expert judgment also underpins some of the most influential global environmental policies, including the IUCN Red List and IPCC Assessments (Mastrandrea et al. 2010; IUCN 2012).
Whilst expert judgment can be remarkably useful when data are absent or incomplete, experts make mistakes (Burgman 2004; Kuhnert, Martin & Griffiths 2010). This is often due to a range of contextual biases and heuristics such as anchoring, availability, and representativeness (Kahneman & Tversky 1973), groupthink (Janis 1971), overconfidence (Soll & Klayman 2004), and difficulties associated with communicating knowledge in numbers and probabilities (Gigerenzer & Edwards 2003). Inappropriate and ill-informed methods for elicitation can amplify these biases by relying on subjective and unreliable methods for selecting experts (Shanteau et al. 2002), asking poorly specified questions (Wallsten et al. 1986), ignoring protocols to counteract negative group interactions (Janis 1971), and applying subjective or biasing aggregation methods (Lorenz et al. 2011; Aspinall & Cooke 2013).
Structured elicitation protocols can improve the quality of expert judgments, and are especially important for informing critical decisions (Morgan & Henrion 1990; Cooke 1991; Keeney & von Winterfeldt 1991; O'Hagan et al. 2006; Mellers et al. 2014). These protocols treat each step of the elicitation as a process of formal data acquisition, and incorporate research from mathematics, psychology and decision theory to help reduce the influence of biases and to enhance the transparency, accuracy, and defensibility of the resulting judgments.
Structured protocols have been increasingly adopted in conservation and natural resource management. For example, Cooke's Classical Model (Cooke 1991) has been applied to case studies on the Great Lakes fisheries in North America (e.g. Rothlisberger et al. (2012) and Wittmann et al. (2015)) as well as sea-level rise and ice-sheet melt (Bamber & Aspinall 2013). However, reviews by Burgman (2004), Regan et al. (2005), Kuhnert, Martin and Griffiths (2010), and Krueger et al. (2012) highlight that informal methods for expert elicitation continue to prevail. Furthermore, few elicitations provide sufficient detail to enable review, critical appraisal and replication (Low Choy, O'Leary & Mengersen 2009; French 2012; Krueger et al. 2012).
These reviews have highlighted challenges which may present barriers to the implementation of existing structured protocols within conservation and natural resource management. These include difficulties experts face in expressing judgments in quantitative terms (Martin et al. 2012a), the cost and logistics associated with face-to-face elicitations of more than one or two experts (Knol et al. 2010; Kuhnert, Martin & Griffiths 2010), as well as challenges experienced by experts in translating their knowledge into quantiles or probability distributions (Garthwaite, Kadane & O'Hagan 2005; Low Choy, O'Leary & Mengersen 2009; Kuhnert, Martin & Griffiths 2010).
Whilst no structured protocol has yet been proposed to overcome such challenges, a range of individual and practical steps have been developed and tested. For example, Speirs-Bridge et al. (2010) provided a means of eliciting a best estimate and uncertainty in a language that respects the bounded rationality of experts in communicating their knowledge as probabilities and quantities. The approach also reduced judgment overconfidence. Burgman et al. (2011) found that reliable experts in conservation cannot be predicted a priori; instead, improved judgments result from a diverse group of individuals engaged in a structured, modified Delphi process. McBride et al. (2012) demonstrated that it is feasible to elicit judgments from conservation experts across the globe on a modest budget by using remote elicitation.
Encouragingly, the steps listed above are being readily adopted by conservation scientists to solve a range of problems. For example, the four-step elicitation has been incorporated by Metcalf and Wallace (2013), Ban, Pressey and Graham (2014), Chadés et al. (2015), Firn et al. (2015), and Adams-Hosking et al. (2015), whilst Delphi protocols have been utilised by Runge, Converse and Lyons (2011), Adams-Hosking et al. (2015), and Chadés et al. (2015). The incorporation of such steps highlights a willingness to adopt more rigorous approaches to expert elicitation, but also suggests that an alternative to existing protocols may be required to help overcome the constraints faced by practitioners in conservation and natural resource management.
Whilst an alternative approach is required, such an approach should not be a compromise. Rather, it should meet the requirements of more rigorous definitions of structured elicitation protocols. That is, it should treat the elicitation of expert judgments with the same regard as empirical data, by using repeatable, transparent methods and addressing scientific questions (not value judgments) in the form of probabilities and quantities (Aspinall 2010; French 2011; Aspinall & Cooke 2013; Morgan 2014). Importantly, it should account for each step of the elicitation, including the recruitment of experts, the framing of questions, and the elicitation and aggregation of their judgments, using procedures that have been tested and clearly demonstrated to improve judgments (e.g. Cooke (1991), and Mellers et al. (2014)). Finally, it should enable judgments to be subject to review and critical appraisal (French 2012).
In this paper, we suggest that the IDEA structured protocol provides a much-needed alternative approach to structured expert elicitation that meets these requirements and overcomes many of the constraints faced by conservation and natural resource management practitioners when eliciting expert judgments. The protocol is relatively simple to apply, and has been tested and shown to yield relatively reliable judgments. Importantly, the protocol incorporates a range of key steps that, as noted above, have already been adopted to some extent by the conservation community, such as the four-step elicitation and a modified Delphi procedure. Its applicability to remote elicitation makes it more cost-effective than methods that rely on face-to-face meetings.
Whilst this paper emphasises the suitability of the approach in conservation and natural resource management, it should be noted that the IDEA protocol is equally suited to a wide variety of scientific and technical domains. This is evidenced by its effective application in geo-political forecasting (Wintle et al. 2012; Hanea et al. 2016a; Hanea et al. 2016b) and engineering (van Gelder, Vodicka & Armstrong 2016).
Although the protocol has been introduced elsewhere (Burgman 2015; Hanea et al. 2016a) and closely parallels the approach used by Burgman et al. (2011) and Adams-Hosking et al. (2015), to date there has been no detailed description of the steps required to carry out the method. This work fills that important practical gap.
2 The IDEA protocol
The acronym IDEA stands for the key steps of the protocol: "Investigate", "Discuss", "Estimate", and "Aggregate" (Figure 1). A summary of the basic steps is as follows. A diverse group of experts is recruited to answer questions with probabilistic or quantitative responses. The experts are asked to first Investigate the questions and to clarify their meanings, and then to provide their private, individual best-guess point estimates and associated credible intervals (Speirs-Bridge et al. 2010; Wintle et al. 2012). The experts receive feedback on their estimates in relation to other experts. With the assistance of a facilitator, the experts are encouraged to Discuss the results, resolve different interpretations of the questions, cross-examine reasoning and evidence, and then provide a second and final private Estimate. Notably, the purpose of discussion in the IDEA protocol is not to reach consensus but to resolve linguistic ambiguity, promote critical thinking, and share evidence. This is based on evidence that incorporating a single discussion stage within a standard Delphi process generates improvements in response accuracy (Hanea et al. 2016b). The individual estimates are then combined using mathematical Aggregation.
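To make the round structure concrete, the toy sketch below caricatures the four steps in Python. It is purely illustrative and is not the authors' software: the "experts" are stand-in functions, and a real elicitation would use the question formats, feedback graphs and facilitated discussion described in the sections that follow.

```python
import statistics

def idea(question, experts):
    """Toy sketch of the IDEA round structure (illustrative only)."""
    # Investigate + Estimate: private Round 1 judgments.
    round1 = {name: judge(question, None) for name, judge in experts.items()}
    # Discuss: experts see all Round 1 estimates; the facilitated discussion
    # (resolving ambiguity, cross-examining evidence) is not modelled here.
    # Estimate: a second, final private judgment informed by the feedback.
    round2 = {name: judge(question, round1) for name, judge in experts.items()}
    # Aggregate: a mathematical combination (here, an arithmetic mean).
    return statistics.mean(round2.values())

# Stand-in experts: each maps (question, feedback) to a point estimate.
experts = {
    "A": lambda q, fb: 0.30,
    "B": lambda q, fb: 0.50 if fb is None else 0.40,  # revises after feedback
}
print(idea("Probability that the population declines by 2030?", experts))  # 0.35
```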
The IDEA protocol initially arose in response to Australian Government requests to support improved biosecurity decision-making. Over the past 10 years, individual steps of the protocol have been tested in public health, ecology and conservation (Speirs-Bridge et al. 2010; Burgman et al. 2011; McBride et al. 2012; Wintle et al. 2013). More recently, the protocol was refined and tested in its entirety as part of a forecasting tournament that commenced in 2011 as an initiative of the US Intelligence Advanced Research Projects Activity (IARPA) (Wintle et al. 2012; Hanea et al. 2016a; Hanea et al. 2016b). The results demonstrated the value of many steps of the IDEA protocol, including using diverse experts in deliberative groups, and giving experts the opportunity to examine one another's estimates and to reconcile the meanings of questions through discussion. It also verified that prior performance on questions of a similar kind can be used to identify the most valuable experts (Hanea et al. 2016b).
3 Preparing for the IDEA Protocol

Undertaking a structured elicitation requires substantial planning to ensure timelines are met and experts are appropriately engaged (EPA 2009). The IDEA protocol is no different in this regard; planning is the key to a successful elicitation. Key considerations and timelines are outlined below, and summarised in Figure 2.
3.1 Develop a timeline

The first step is to develop a timeline of tasks and a schedule of key dates for each step of the elicitation (Figure 2). Allow sufficient time for delays caused by human subject research approval (if necessary), late replies by experts, and delays in the analysis.

In our experience, preparation can take anywhere from two weeks to four months, depending on how well defined the questions and the purpose of the elicitation are. The subsequent elicitation of quantities from a single group of experts takes between two and six weeks, depending on whether face-to-face or remote elicitation is used, and assuming a maximum of 20-30 questions (Figure 2, and Supplementary Material A).
3.2 Form a project team

An expert elicitation team typically consists of a coordinator, a facilitator, an analyst and the problem owner (Table 1). If no conflicts of interest exist and time permits, these roles may be undertaken by one person or shared between many (Martin et al. 2012a).

The specific roles of each member are outlined in Table 1; however, it is important that all members have an understanding of the many ways in which biases and heuristics can affect the accuracy and calibration of expert judgments. Common problems include overconfidence (Soll & Klayman 2004; Speirs-Bridge et al. 2010), anchoring (Tversky & Kahneman 1975; Furnham & Boo 2011), failure to adequately consider counterfactual information (Nickerson 1998), linguistic ambiguity (Kent 1964; Wallsten et al. 1986), and groupthink (Janis 1971). Useful introductions to these biases and heuristics can be found in Cooke (1991), O'Hagan et al. (2006), Hastie and Dawes (2010), McBride et al. (2012) and Burgman (2015).

If the team is approaching expert elicitation for the first time, we also recommend engaging someone with experience in structured expert elicitation to review the questions for unintended bias.
4 Decide elicitation format

A key advantage of the IDEA protocol is that it is flexible enough to be conducted face-to-face, in workshops, or by remote elicitation. The most appropriate format will ultimately depend on time and budget, and the location and availability of experts.
The inception meeting should be undertaken either via a group teleconference or a workshop. In Round 1 and Round 2, experts must be free to answer the questions posed by the facilitator independently from others in the group. This can be achieved remotely or in a face-to-face environment. The discussion phase can take place by teleconference, email, webpage or a combination of platforms. Alternatively, experts can be invited to a workshop with the purpose of discussing Round 1 results.
If using remote elicitation for Round 1 and/or Round 2, the questions can be sent in a simple document that can be accessed both offline and online. We have used Excel spreadsheets, Word documents, and PDF forms (Supplementary Material A and B). If eliciting judgments face-to-face, the facilitator may schedule interviews with individual experts to elicit their judgments, or elicit individual judgments in a group session by asking experts to enter estimates on a device or on paper.
Discussion facilitated over email or a web forum can be inexpensive and enables all experts to take part regardless of their location and work commitments. However, the drawbacks may include substantial time investment by experts, lower engagement levels, and the possibility that some conversations may not be resolved, especially if experts are distracted by local commitments or are late to join the discussion process (McBride et al. 2012). Workshops usually result in better buy-in and acceptance of the outcomes than do exercises that are exclusively remote (Krueger et al. 2012; McBride et al. 2012). However, they can be expensive or logistically infeasible (Knol et al. 2010).
4.1 Develop clear questions

Like many other structured approaches (Cooke 1991; O'Hagan et al. 2006; Mellers et al. 2014), IDEA requires experts to estimate numerical quantities or probabilities. The objective is to obtain approximations of facts that can be cross-examined and used to inform decisions and models (Morgan 2014).
Achieving this requires the formulation of questions that are relatively free from linguistic ambiguity and from framing that may generate unwanted bias. This also means providing details such as the time, place, methods of investigation, the units of measurement, the level of precision required and other caveats (Table 2, and Supplementary Material A). Questions should also aim to elicit information in a format that most closely aligns with the domain knowledge and experiences of the experts (Aspinall & Cooke 2013).
The IDEA protocol incorporates two alternative question formats, depending on whether quantities or probabilities are being elicited (Table 2). The four-step format (Speirs-Bridge et al. 2010) is mostly used to elicit quantities (Table 2); however, it can also be used to elicit other types of data such as percentages and ratios. Wherever possible, questions should be framed in a frequency format, because this is less prone to linguistic ambiguity than asking experts for these estimates directly (refer to Gigerenzer and Edwards (2003), and Supplementary Material A). The four-step format involves asking for upper and lower plausible bounds, a best guess, and a 'degree of belief' (how sure are you). Taken together, an expert's responses to the four-step format are designed to be interpreted as a credible interval (i.e. the degree of belief that an event will occur, given all knowledge currently available). If an expert provides a certainty level of, say, 70% for 10 similar events, then the truth should lie within their credible intervals 7 out of 10 times.
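As a brief illustration of this interpretation, the sketch below (with hypothetical intervals and realised outcomes) computes the empirical "hit rate" of an expert's credible intervals; for a well-calibrated expert this should roughly match their stated confidence.

```python
# Hypothetical 70%-confidence intervals and later-realised outcomes.
intervals = [(2, 8), (10, 25), (0, 4), (30, 60), (5, 9)]
truths = [5, 27, 3, 42, 6]

# Count how often the truth fell inside the stated interval.
hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
print(f"hit rate: {hits}/{len(truths)} = {hits / len(truths):.0%}")  # 4/5 = 80%
```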
The three-step format (Wintle et al. 2012; Burgman 2015) is used in place of the four-step format when eliciting single-event probabilities (and thus differs from other three-step formats mentioned in the literature, e.g. Speirs-Bridge et al. (2010) and Soll and Klayman (2004)). It was developed to avoid difficulties associated with asking experts to specify second-order probabilities (i.e. their confidence in their degree of belief). It involves asking experts for their degree of belief that an event will occur by asking for their lowest probability, their highest probability and their best guess of the probability that the described event will occur (Table 2).
Usually, we suggest that experts using the four-step elicitation method constrain their confidence levels to between 50% and 100% (Speirs-Bridge et al. 2010) (Table 2). If an expert states they are less than 50% confident that their intervals contain the truth, it implies that they are more confident that the truth lies outside their intervals than within them, which experience suggests is rarely what the expert actually believes.
Both the wording and question order in the three- and four-step question formats are important (Table 2). The words 'plausible' and 'realistic' are intentionally used to discourage people from specifying uninformative limits (such as 0 and 1 for bounds on probability estimates). Asking for lowest and highest estimates first encourages consideration of counterfactuals and the evidence for relatively extreme values, and avoids anchoring on best estimates (Morgan & Henrion 1990).
It is important to highlight that the IDEA protocol centres on the three-step and four-step question formats because they have been shown to assist experts in constructing and converting their knowledge into quantitative form; however, the protocol does not automatically dictate what this information represents (e.g. the best guess could be interpreted as a mean, median, or mode if desired, but is not defined). Thus, the basic protocol described in this paper was not designed, on its own, to elicit a probability distribution, but rather a best estimate and uncertainty bounds.
That said, the responses elicited using the IDEA question format can be used to assist in the construction of a probability distribution if desired, for example by taking the best guess and interval bounds as moments of a distribution and fitting via least squares (McBride, Fidler & Burgman 2012; Chadés et al. 2015). However, the testing and refinement required to develop a standardised protocol for fitting distributions has not yet been done.
If taking this approach, careful consideration must be given to the underlying assumptions involved in converting the information to a distribution, for example, what the best guess represents and how to extrapolate uncertainty bounds, as well as to the choice of distribution(s) (e.g. O'Hagan et al. (2006) and references therein). We suggest that all assumptions are documented and made clear to experts before and throughout the elicitation.
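For illustration, a minimal sketch of one such least-squares fit is given below. It matches the elicited values to distribution quantiles rather than moments, and it rests on loudly-flagged assumptions: the best guess is treated as the median, the standardised 80% bounds as the 10th and 90th percentiles, and a lognormal distribution is chosen; all values are hypothetical.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical standardised 80% interval for a positive quantity,
# interpreted as (10th percentile, median, 90th percentile) -- an
# assumption, since IDEA does not define the best guess as a median.
q_levels = np.array([0.10, 0.50, 0.90])
q_values = np.array([120.0, 300.0, 650.0])

def residuals(params):
    # Difference between the candidate lognormal's quantiles and the
    # elicited values; least squares drives this towards zero.
    mu, sigma = params
    return stats.lognorm.ppf(q_levels, s=sigma, scale=np.exp(mu)) - q_values

fit = optimize.least_squares(residuals, x0=[np.log(300.0), 0.5],
                             bounds=([-np.inf, 1e-6], [np.inf, np.inf]))
mu, sigma = fit.x
print(f"fitted lognormal: mu = {mu:.2f}, sigma = {sigma:.2f}")
```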
Alternatively, experts may be asked to provide their estimates as fixed quantiles (e.g. as in Cooke and Goossens (2000)); however, this may be challenging for many experts, particularly if remote elicitation is utilised. For this reason, additional iteration between the project team and experts may be necessary to ensure their beliefs are being adequately represented. Additional research and testing of methods to elicit such information remotely, in a language more familiar to experts, is needed.
The number of questions will depend on their difficulty, the time available and the motivations of the experts. To avoid expert fatigue, we recommend asking no more than 15-20 questions in a single day of face-to-face elicitation (Speirs-Bridge et al. 2010). It is possible to ask more questions when elicitations are run remotely (over the web or by email), though at the risk of reducing participation levels if experts are not sufficiently motivated. If there are many questions, we recommend dividing them and eliciting judgments over a number of weeks or months. If time is limited, we suggest recruiting more experts and dividing the questions between multiple groups.
The inclusion of previous data, current trends, and useful links can help to clarify questions and inform responses. However, experts may anchor on background information, even on irrelevant quantities that the background information provides (see McBride, Fidler and Burgman (2012)). This may result in a biased group estimate. We recommend providing background information for the discussion following Round 1, so that experts initially use their own reasoning and source data independently.
Practice questions can be developed and sent to help familiarise experts with the question style and the overall process. Practice questions should have a resolution, but needn't be taken from the domain of the elicitation; they could focus on events for which there will be a short-term outcome, such as weather forecasts, traffic incidents or stock prices. Alternatively, the project team may have access to data that are not available to the experts, which can be used to develop practice questions.
We recommend that at least two subject-matter experts are engaged to review draft questions. Ideally these experts should be sourced from outside the expert group, although this may not be possible for highly specialised topics. The reviewers should consider whether the questions are within the domain of expertise of the participants, are free from linguistic ambiguity or biases, and can be completed in the time-frame available. The problem owner should also review the questions to ensure they provide the required data.
4.2 Ethics clearance and useful project documents

Depending on the nature of the elicitation, human subjects research approval (ethics clearance) may be mandated by your institution or funding source. Many journals insist on ethics clearance. Ethics clearance can take some time to organise, and should be sought as soon as the project details are specified. Approval should be obtained before experts are recruited. Some important elements outlined below should be considered regardless of whether ethics clearance is required (refer to Supplementary Material A for examples).
The coordinator should consider how they will protect data and the anonymity of experts. We recommend de-identifying expert judgments and personal information by using codenames for experts (they can nominate their own), encrypting folders, and establishing maximum periods for which data will be stored. If the information provided in the questions includes data that are not publicly available, the coordinator may need to obtain permission from the owners to use them.
A project statement written in plain language should be developed. It should be brief, but list key information, including that participation is voluntary, whether payment will be involved, how much time will be asked of the experts and over what period, how the data will be used, how the anonymity of judgments will be protected, how the data will be stored and who will have access, how experts can enquire or complain about the process, and that they are free to withdraw at any time.
A consent form provides an opportunity for experts to acknowledge in writing that they have read and understood the purpose of the study, and that they are willing to take part. In some applications, it may be important to seek permission from experts to publish their names and credentials as supporting information. If the project team plans to retain the judgments elicited from experts as their intellectual property, they should make this clear in the consent form.
An instruction manual should be developed to help guide the experts through the elicitation process and to reiterate key dates for the elicitation (Supplementary Material A).
4.3 Selecting and engaging experts

As with other structured protocols (Cooke 1991; Keeney & von Winterfeldt 1991; O'Hagan et al. 2006; Mellers et al. 2014), IDEA promotes the use of multiple experts. This is based on empirical evidence that, whilst criteria such as age, experience, publications, memberships and peer recommendation can be useful for sourcing potential experts, they are actually very poor guides for determining a priori someone's ability to provide good judgments in elicitation settings (Shanteau et al. 2002; Burgman et al. 2011; Tetlock & Gardner 2015), and may result in the unnecessary exclusion of knowledgeable individuals (Shanteau et al. 2002; French 2011). The best guide to expert performance is a person's previous performance on closely related tasks, which is rarely available a priori. The inability to identify the best expert means that groups of multiple experts almost always perform as well as, or better than, the best-regarded expert(s) (Hora 2004; Surowiecki 2004; Burgman et al. 2011; Mellers et al. 2014).
Because it is usually not possible to predict who has the requisite knowledge to answer a set of questions accurately, the main criterion when selecting experts is whether the person can understand the questions being asked. We recommend establishing relevant knowledge criteria and creating a list of potential participants, including their specialisation or skills, and contact details.
Be especially vigorous in pursuing people who add to the group's diversity. Diversity should be reflected in variation in age, gender, cultural background, life experience, education and specialisation; these are proxies for cognitive diversity (Page 2008). In high-profile or contentious cases, a transparent and balanced approach to selecting experts will also be important for circumventing claims of bias (Keeney & von Winterfeldt 1991). We recommend aiming for around 10-20 participants, based on practicality and experience, and on empirical evidence suggesting that only minor improvements in a group's performance are gained by having more than 6-12 participants (Hogarth 1978; Armstrong 2001; Hora 2004).
When recruiting experts, send a short introductory email inviting them to participate in an introductory teleconference or workshop. We do this at least three weeks before the proposed teleconference. Try to have the email originate from someone known to the experts. If this is not possible, include details of how they came to be recommended and why they should be involved. Personalised communication will also help ensure that experts see and respond to email invitations. Mail-merge software is helpful for personalising emails by linking text fields such as names to email addresses; a few lines of code achieve the same result, as sketched below.
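The following minimal sketch fills an invitation template from a contact list. The file name, column layout and wording are ours and purely illustrative.

```python
import csv

template = (
    "Dear {name},\n\n"
    "{recommender} suggested you as an expert on {topic}. "
    "We would like to invite you to an introductory teleconference...\n"
)

# Hypothetical contact log with columns: name,email,recommender,topic
with open("contacts.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(f"To: {row['email']}\n{template.format(**row)}")
```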
Important information, such as the consent form, project statement and timeline, should be included as attachments to the introductory email. Provide contact details and offer experts the opportunity to discuss the project prior to the teleconference. Keep a contact log to track responses and follow up on late replies. Follow up introductory emails with a telephone call if experts don't respond within three to four days. Send reminders ahead of due dates.
5 Undertaking an elicitation

The following outline assumes that judgments will be elicited from experts remotely using the IDEA protocol; however, the same basic approach can be adopted for face-to-face workshops. Although the method specifies that the questions will be sent to experts and answered remotely, we recommend an inception meeting in the form of a teleconference (Section 5.1). If practical, we also recommend a workshop between Round 1 and Round 2 (Section 5.5), although this is less important than the inception meeting.
5.1 Inception meeting

An introductory meeting is vital for establishing a rapport with the experts and explaining the motivations for, and expectations of, the elicitation. It provides an opportunity for the coordinator to explain the context, and for the facilitator and analyst to explain how the various steps and constraints contribute to relatively high-quality judgments.
Start the meeting by thanking experts for their time and introducing the motivation behind the project and the objectives of the elicitation. Many participants will be sceptical of expert elicitation, or of the involvement of others. Acknowledge this scepticism, and explain the unavoidable nature of expert judgment: the data we need aren't available. Experts are often uncomfortable with stating their judgments in quantitative form. It is important to acknowledge this, whilst emphasising that the primary motivation for using the IDEA protocol is that it applies the same level of scrutiny and neutrality to expert judgment as is afforded to the collection of empirical data. In addition, the elicitation of numbers helps to overcome linguistic uncertainty.
Explain that participants must not speak to other participants about the elicitation prior to the discussion phase between Rounds 1 and 2. However, they can and should speak to anyone else they choose, and use all relevant information sources.
If time permits, run through the list of intended questions to further ensure that the wording and requirements are clear. Reiterate the timelines and procedures for Round 1, and allow sufficient time for experts to ask questions. An example transcript is provided in Supplementary Material A.
5.2 Round 1 estimates

Round 1 commences with an email to the experts containing the questions to be answered, or a link to the questions (if using a web-based platform), together with instructions on how to complete them (Supplementary Material A).
When undertaking remote elicitation, allow about two weeks for experts to complete the exercise, plus sufficient time for late responses (another one to two weeks). Send a reminder to experts before the close of the elicitation.
5.3 Analysis

Prior to the discussion phase, feedback on the results of Round 1 will need to be prepared for the experts. This step involves cleaning and standardising the data, aggregating judgments (Table 3), and providing clear and unambiguous graphs of the Round 1 estimates (Figure 3).
Cleaning the data

Elicited responses will almost always require some level of cleaning. Common mistakes include numbers entered in the wrong boxes (e.g. the lowest estimate entered as the best guess, Table 3), blank estimates, wrong units (e.g. estimates in tonnes instead of kilograms, or as proportions (out of 1) rather than percentages (out of 100)), and illogical or out-of-range numbers. Clarify with experts whether apparent errors represent mistakes.
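A few automated checks catch most of these mistakes before they reach the feedback graphs. The sketch below flags four-step responses that are blank, out of logical order, or outside plausible bounds; the field names are ours, not a prescribed format.

```python
def check_response(r, lo_bound=None, hi_bound=None):
    """Flag common data-entry mistakes in a four-step response
    r = {'lower': ..., 'best': ..., 'upper': ..., 'conf': ...}."""
    issues = []
    if any(r.get(k) is None for k in ("lower", "best", "upper", "conf")):
        issues.append("blank estimate")
    else:
        if not r["lower"] <= r["best"] <= r["upper"]:
            issues.append("illogical order (expect lower <= best <= upper)")
        if not 50 <= r["conf"] <= 100:
            issues.append("confidence outside 50-100%")
        if lo_bound is not None and r["lower"] < lo_bound:
            issues.append("below plausible bounds")
        if hi_bound is not None and r["upper"] > hi_bound:
            issues.append("above plausible bounds")
    return issues  # non-empty lists go back to the expert for clarification

# Like participant "Exp" in Table 3: lower and upper entered in reverse order.
print(check_response({"lower": 30, "best": 5, "upper": 50, "conf": 80}))
```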
Standardise intervals (four-step only)

In the four-step question format, experts specify credible intervals. The analyst should standardise these intervals, typically to 90% or 80% credible intervals, so that experts view the uncertainties of all experts across questions on a consistent scale (Table 3). We use linear extrapolation (Bedford & Cooke 2001; Adams-Hosking et al. 2015), in which:

Lower standardised interval: B - ((B - L) * (S/C))

Upper standardised interval: B + ((U - B) * (S/C))

where B = best guess, L = lowest estimate, U = upper estimate, S = the level of credible interval to be standardised to, and C = the level of confidence given by the participant. In cases where the adjusted intervals fall outside of reasonable bounds (such as [0,1] for probabilities), we truncate them at those bounds.
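In code, the linear extrapolation above is a one-liner each way. A minimal sketch (our function name, not published software):

```python
def standardise(lower, best, upper, conf, target=80.0):
    """Rescale an expert's credible interval to a common level
    (e.g. 80%) by linear extrapolation, as described above."""
    stretch = target / conf  # S / C
    return (best - (best - lower) * stretch,
            best + (upper - best) * stretch)

# An expert gave a 50% interval; standardise to 80%, then truncate to [0, 1].
lo, hi = standardise(0.2, 0.3, 0.5, conf=50)
lo, hi = max(lo, 0.0), min(hi, 1.0)
print(lo, hi)  # 0.14 0.62
```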
Participants often ask why they should specify confidence levels for their intervals when their credible intervals are subsequently standardised (e.g. to 80%). While counter-intuitive, Speirs-Bridge et al. (2010) found that overconfidence was reduced if experts were obliged to specify their own level of confidence and the credible intervals were subsequently standardised.
As the main purpose of the adjusted intervals at this stage is to allow for comparison during the discussion phase, linear extrapolation provides an easy-to-implement and explainable approach that minimises the need to make additional distributional assumptions. Our experience is that alternative approaches (e.g. using the elicited responses to fit a distribution such as the beta, betaPERT or lognormal) make little difference to the visual representations that result, or to the discussions that follow. Thus, we use linear extrapolation for simplicity. Experts are encouraged to change their estimates in Round 2 if the extrapolation does not represent their true belief.
Calculate a group aggregate estimate

Combined estimates are calculated following standardisation of the experts' intervals. Most applications of the IDEA protocol make use of quantile aggregation, in which the arithmetic mean of the experts' estimates is calculated for the lower, best, and upper estimates for each question (Table 3). Quantile aggregation using the arithmetic mean avoids the need to fit a distribution, and was found by Lichtendahl Jr, Grushka-Cockayne and Winkler (2013) to perform as well as more complex methods, though there is ongoing debate in the literature over how well this result holds more widely. In particular, recent large cross-validation studies by Eggstaff, Mazzuchi and Sarkani (2014) and Colson and Cooke (2017), carried out on a massive dataset of expert-elicited estimates (a total of 73 independent expert judgment studies from the TU Delft database (Colson & Cooke 2017)), found that the quantile aggregation method performs poorly when compared with aggregating fitted distributions. These new results suggest that further investigation of aggregation methods for the IDEA protocol is warranted. Until such time, however, we advocate quantile aggregation as a fast, straightforward approach that is well understood by participants and requires no distributional assumptions.

Equally weighted group aggregations can be sensitive to extreme outliers in small groups. Rather than excluding outliers, they should be integrated into discussion, to determine whether there are good reasons for providing them. We believe outliers should only be trimmed if they are clearly and uncontroversially incorrect (for example, outside of possible bounds).
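In practice, quantile aggregation is a column-wise mean over the standardised estimates. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# One row per expert: standardised (lower, best, upper) for one question.
estimates = np.array([
    [0.10, 0.30, 0.60],
    [0.20, 0.40, 0.70],
    [0.00, 0.20, 0.50],
])
# Equally weighted quantile aggregation: mean of each column.
group_lower, group_best, group_upper = estimates.mean(axis=0)
print(group_lower, group_best, group_upper)  # 0.10 0.30 0.60
```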
5.4 Create and share graphical output

Create graphs for each question to display the estimates of each participant (labelled with their codename) and the group aggregate (Figure 3). If displaying judgments from a four-step elicitation, remind experts that their displayed uncertainty bounds may vary from their original estimates due to standardisation, and that if the adjusted interval doesn't accurately reflect their beliefs, they can and should adjust their uncertainty bounds in Round 2.

Compile the graphs, tables, and comments for each question, along with any additional information submitted by experts, and return these to the experts.
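A feedback graph in the style of Figure 3 can be produced with standard plotting libraries. A minimal matplotlib sketch with hypothetical values (the codenames follow the style of Table 3):

```python
import matplotlib.pyplot as plt

# Hypothetical standardised 80% intervals (lower, best, upper) per codename.
feedback = {
    "Pfish": (0.05, 0.20, 0.45),
    "BHN": (0.10, 0.30, 0.50),
    "LouLou": (0.30, 0.60, 0.90),
    "Mean": (0.15, 0.37, 0.62),  # group aggregate shown alongside experts
}

fig, ax = plt.subplots()
for i, (name, (lo, best, hi)) in enumerate(feedback.items()):
    ax.plot([lo, hi], [i, i], "k--")  # standardised 80% interval
    ax.plot(best, i, "ko")            # best guess
ax.set_yticks(range(len(feedback)))
ax.set_yticklabels(list(feedback))
ax.set_xlabel("Average density of CoTS (80% CI)")
ax.set_ylabel("Code names")
plt.show()
```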
5.5 Discussion phase

The discussion phase commences once experts have received feedback on the results of Round 1. The facilitator guides and stimulates discussion, but does not dominate it. The facilitator should choose contrasting results and ask questions that explore sources of variation, for example, "What could occur that would lead the estimates to be high (or low)?". Where necessary, the facilitator should clarify meaning or better define terms. A set of useful questions for facilitating discussion is provided in Supplementary Material A.
5.6 Round 2 estimates

Following the discussion, experts make a second, anonymous and independent estimate for each question. Experts who dropped out following Round 1 should be excluded from the final aggregation. Analyse the results using the methods described in Sections 5.3 and 5.4 above.
5.7 Post-elicitation: documentation and reporting

Following completion of the elicitation, the experts' final assessments and the group aggregate judgment should be circulated to the group for final review and 'sign-off'. All steps taken and results collected during the elicitation should be documented to provide a transparent record of the process and results. In presenting outputs, aggregated point estimates and uncertainty intervals should be communicated along with the individual expert Round 2 estimates (Figure 4), to convey the full level of inter-expert uncertainty (Morgan 2014). This concludes the elicitation process.
6 Discussion

This paper provides advice on the IDEA protocol, a structured elicitation method that is well suited to the time and resource constraints of many conservation problems. The three-step and four-step question formats (Speirs-Bridge et al. 2010) derive numerical estimates in a language accessible to most experts, and summarise inter- and intra-expert uncertainty, whilst the option for remote elicitation accommodates modest budgets.
The protocol is simple to understand and apply, and if needed could be undertaken entirely by hand (though most often in Excel), making it an attractive option for those who have limited resources, or limited time or willingness to familiarise themselves with new techniques and software. Importantly, the protocol has been shown to yield relatively reliable judgments in domains as diverse as conservation (Burgman et al. 2011) and geo-political forecasting (Wintle et al. 2012; Hanea et al. 2016a; Hanea et al. 2016b).
Whilst we advocate that structured protocols can improve judgments, their use does not guarantee accurate estimates. Unfortunately, in most cases the need for expert judgment arises where empirical data cannot be obtained, there is no way of evaluating judgments for their accuracy (or calibration), and decisions need to be made urgently (Martin et al. 2012b; McBride, Fidler & Burgman 2012; Martin et al. 2017). In such cases, the best means of assessing whether the experts have relevant and useful knowledge, can adapt their knowledge, and can communicate their knowledge accurately is to test them on carefully crafted test questions (Cooke 1991). The development of such questions, and the incorporation of improved methods for aggregation into the IDEA protocol, is the subject of current research by the authors. An additional avenue of research is required to understand the best way to elicit probability distributions in a way that respects the bounded rationality of experts, especially under remote elicitation conditions. Whilst all such approaches could improve the IDEA protocol, we advocate that they would benefit from additional testing prior to formal incorporation into the protocol.
7 Conclusion

The reliability of expert judgment will always be sensitive to which experts participate and how questions are asked. However, structured protocols such as the IDEA protocol improve the quality of these judgments by taking advantage of the wisdom of the crowd and mitigating a range of the most pervasive and potent sources of bias. This guide explains the rationale behind the IDEA protocol and decomposes the process into manageable steps. It also highlights areas of future research. Regardless of whether the IDEA protocol is adopted, we strongly advocate that structured protocols for expert elicitation must be adopted within conservation, natural resource management and other scientific domains.
8 Acknowledgements

The authors would like to thank Prof. Robert B. O'Hara, Chris Grieves, Dr. Iadine Chades, Dr. Tara Martin, and one anonymous reviewer for their comments, which substantially improved the manuscript. We thank those who enabled the IDEA protocol to be refined and tested over the past 10 years. VH receives funding from the Australian Government Research Training Program Scholarship. VH, AH and BW are funded by the Centre of Excellence for Biosecurity Risk Analysis and the University of Melbourne. MB is supported by the Centre for Environmental Policy, Imperial College London.
9 Authors' Contribution Statement

VH led the development and writing of the manuscript, based on her experience implementing the IDEA protocol. MB, AH, MM and BW provided additional review and advice based on their own experiences with the IDEA protocol and structured expert elicitation. All authors contributed critically to the drafts and gave their final approval for publication.
10 Data Accessibility

This manuscript does not use data.
11 Figures and tables

Figure 1: The IDEA protocol (Burgman 2015).
Figure 2: The steps and time taken to prepare for and implement the IDEA protocol. Time can be reduced if the questions and the objectives of the elicitation are already well defined, and through the use of workshops (although more expensive). The time shown assumes experts are volunteers and are asked up to 30 technical quantities.
Table 1: The project team and their roles and responsibilities.
Table 2: The three-step and four-step elicitation formats used by the IDEA protocol. More examples are provided in Supplementary Material A.
Table 3: An example of data cleaning, standardisation and aggregation for a single four-step elicitation question. The raw data provided by experts are shown on the left and the standardised data on the right. The group was asked to estimate the number of Crown-of-Thorns starfish on Rib Reef, on the Great Barrier Reef (Table 2). Note that participant "Exp" entered data in an illogical order (lower estimate higher than upper); this is fixed in the standardised data. Credible intervals have been standardised to 80%, which changes the upper and lower estimates but not the best guess. Quantile aggregation is then used to derive a group aggregate for the lower, upper and best guesses.
Figure 3: Graphical feedback provided in Round 1 (from the data provided in Table 3). The circles represent the best guesses of the experts; the dashed lines show their standardised 80% uncertainty bounds. All experts except LouLou believe the density will be equal to or lower than 0.50. The arithmetic mean is 10.16. [Plot: code names (Pfish, BHN, LouLou, 6117, Exp, Reefs, Mean) on the vertical axis against average density of CoTS (80% CI) on the horizontal axis.]
34
Pre-print of: Hemming, V., Burgman, M.A., Hanea, A.M., McBride, M.F. & Wintle, B.C.
(2018) A practical guide to structured expert elicitation using the IDEA protocol. Methods in
Ecology and Evolution, 9, 169-181.
Figure 4: Graphical feedback provided in Round 2. The dashed lines show the Round 1 estimates of each of the experts. The bold lines show their corresponding Round 2 estimates. Only three experts revised their estimates (BHN, LouLou and 6117). The mean of the Round 2 estimates changed substantially from Round 1 (Figure 3), to 0.32 Crown of Thorns Starfish (CoTS). The realised truth, 0.14 CoTS, is shown by the vertical line.
[Figure 4 image: same layout as Figure 3, with Round 1 and Round 2 intervals for each code name plotted against the Average Density of CoTS (80% CI).]
References
Adams-Hosking, C., McBride, M.F., Baxter, G., Burgman, M., Villiers, D., Kavanagh, R., Lawler, I., Lunney, D., Melzer, A. & Menkhorst, P. (2015) Use of expert knowledge to elicit population trends for the koala (Phascolarctos cinereus). Diversity and Distributions.
Armstrong, J.S. (2001) Combining forecasts. Principles of forecasting, pp. 417-439. Springer.
Aspinall, W.P. (2010) A route to more tractable expert advice. Nature, 463, 294-295.
Aspinall, W.P. & Cooke, R.M. (2013) Quantifying scientific uncertainty from expert judgement elicitation. Risk and Uncertainty Assessment for Natural Hazards (eds J. Rougier, S. Sparks & L. Hill), pp. 64-99. Cambridge University Press, Cambridge, United Kingdom.
Bamber, J.L. & Aspinall, W. (2013) An expert judgement assessment of future sea level rise from the ice sheets. Nature Climate Change, 3, 424-427.
Ban, S.S., Pressey, R.L. & Graham, N.A. (2014) Assessing interactions of multiple stressors when data are limited: A Bayesian belief network applied to coral reefs. Global Environmental Change, 27, 64-72.
Bedford, T. & Cooke, R.M. (2001) Mathematical tools for probabilistic risk analysis. Cambridge University Press.
Burgman, M.A. (2004) Expert frailties in conservation risk assessment and listing decisions. Threatened species legislation: is it just an Act? (eds P. Hutchings, D. Lunney & C. Dickman), pp. 20-29. Royal Zoological Society, Mosman, NSW, Australia.
Burgman, M.A. (2015) Trusting Judgements: How to get the best out of experts. Cambridge University Press, Cambridge, United Kingdom.
Burgman, M.A., McBride, M., Ashton, R., Speirs-Bridge, A., Flander, L., Wintle, B., Fidler, F., Rumpff, L. & Twardy, C. (2011) Expert status and performance. PLoS One, 6, 1-7.
Chadés, I., Nicol, S., van Leeuwen, S., Walters, B., Firn, J., Reeson, A., Martin, T.G. & Carwardine, J. (2015) Benefits of integrating complementarity into priority threat management. Conservation Biology, 29, 525-536.
Colson, A.R. & Cooke, R.M. (2017) Cross validation for the classical model of structured expert judgment. Reliability Engineering & System Safety, 163, 109-120.
Cooke, R. & Goossens, L. (2000) Procedures guide for structural expert judgement in accident consequence modelling. Radiation Protection Dosimetry, 90, 303-309.
Cooke, R.M. (1991) Experts in uncertainty: Opinion and subjective probability in science. Oxford University Press, New York.
Eggstaff, J.W., Mazzuchi, T.A. & Sarkani, S. (2014) The effect of the number of seed variables on the performance of Cooke's classical model. Reliability Engineering & System Safety, 121, 72-82.
EPA (2009) Expert elicitation white paper review. (ed. U.S.E.P.A. Science Council). Washington, DC.
Firn, J., Martin, T.G., Chadès, I., Walters, B., Hayes, J., Nicol, S. & Carwardine, J. (2015) Priority threat management of non-native plants to maintain ecosystem integrity across heterogeneous landscapes. Journal of Applied Ecology, 52, 1135-1144.
French, S. (2011) Aggregating expert judgement. Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas, 105, 181-206.
French, S. (2012) Expert judgment, meta-analysis, and participatory risk analysis. Decision Analysis, 9, 119-127.
Furnham, A. & Boo, H.C. (2011) A literature review of the anchoring effect. The Journal of Socio-Economics, 40, 35-42.
Garthwaite, P.H., Kadane, J.B. & O'Hagan, A. (2005) Statistical methods for eliciting probability distributions. Journal of the American Statistical Association, 100, 680-701.
Gigerenzer, G. & Edwards, A. (2003) Simple tools for understanding risks: from innumeracy to insight. BMJ: British Medical Journal, 741-744.
Gregory, R. & Keeney, R.L. (2017) A practical approach to address uncertainty in stakeholder deliberations. Risk Analysis.
Hanea, A., McBride, M., Burgman, M., Wintle, B., Fidler, F., Flander, L., Manning, B. & Mascaro, S. (2016a) Investigate Discuss Estimate Aggregate for structured expert judgement. International Journal of Forecasting, 33, 267-269.
Hanea, A.M., McBride, M.F., Burgman, M.A. & Wintle, B.C. (2016b) Classical meets modern in the IDEA protocol for structured expert judgement. Journal of Risk Research, 1-17.
Hastie, R. & Dawes, R.M. (2010) Rational choice in an uncertain world: The psychology of judgment and decision making. Sage, California, United States of America.
Hogarth, R.M. (1978) A note on aggregating opinions. Organizational Behavior and Human Performance, 21, 40-46.
Hora, S.C. (2004) Probability judgments for continuous quantities: Linear combinations and calibration. Management Science, 50, 597-604.
IUCN (2012) IUCN Red List Categories and Criteria: Version 3.1. pp. iv + 32pp. IUCN, Gland, Switzerland and Cambridge, UK.
Janis, I.L. (1971) Groupthink. Psychology Today, 5, 43-46.
Kahneman, D. & Tversky, A. (1973) On the psychology of prediction. Psychological Review, 80, 237.
Keeney, R.L. & von Winterfeldt, D. (1991) Eliciting probabilities from experts in complex technical problems. IEEE Transactions on Engineering Management, 38, 191-201.
Kent, S. (1964) Words of estimative probability. Studies in Intelligence.
Knol, A.B., Slottje, P., van der Sluijs, J.P. & Lebret, E. (2010) The use of expert elicitation in environmental health impact assessment: a seven step procedure. Environmental Health, 9, 1.
Krueger, T., Page, T., Hubacek, K., Smith, L. & Hiscock, K. (2012) The role of expert opinion in environmental modelling. Environmental Modelling & Software, 36, 4-18.
Kuhnert, P.M., Martin, T.G. & Griffiths, S.P. (2010) A guide to eliciting and using expert knowledge in Bayesian ecological models. Ecology Letters, 13, 900-914.
Lichtendahl Jr, K.C., Grushka-Cockayne, Y. & Winkler, R.L. (2013) Is it better to average probabilities or quantiles? Management Science, 59, 1594-1611.
Lorenz, J., Rauhut, H., Schweitzer, F. & Helbing, D. (2011) How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108, 9020-9025.
Low Choy, S., O'Leary, R. & Mengersen, K. (2009) Elicitation by design in ecology: using expert opinion to inform priors for Bayesian statistical models. Ecology, 90, 265-277.
Martin, T.G., Burgman, M.A., Fidler, F., Kuhnert, P.M., Low-Choy, S., McBride, M. & Mengersen, K. (2012a) Eliciting expert knowledge in conservation science. Conservation Biology, 26, 29-38.
Martin, T.G., Camaclang, A.E., Possingham, H.P., Maguire, L.A. & Chadès, I. (2017) Timing of protection of critical habitat matters. Conservation Letters, 10, 308-316.
Martin, T.G., Nally, S., Burbidge, A.A., Arnall, S., Garnett, S.T., Hayward, M.W., Lumsden, L.F., Menkhorst, P., McDonald-Madden, E. & Possingham, H.P. (2012b) Acting fast helps avoid extinction. Conservation Letters, 5, 274-280.
Mastrandrea, M.D., Field, C.B., Stocker, T.F., Edenhofer, O., Ebi, K.L., Frame, D.J., Held, H., Kriegler, E., Mach, K.J., Matschoss, P.R., Plattner, G.K., Yohe, G.W. & Zwiers, F.W. (2010) Guidance note for lead authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. Jasper Ridge, CA, USA.
McBride, M.F., Fidler, F. & Burgman, M.A. (2012) Evaluating the accuracy and calibration of expert predictions under uncertainty: predicting the outcomes of ecological research. Diversity and Distributions, 18, 782-794.
McBride, M.F., Garnett, S.T., Szabo, J.K., Burbidge, A.H., Butchart, S.H., Christidis, L., Dutson, G., Ford, H.A., Loyn, R.H. & Watson, D.M. (2012) Structured elicitation of expert judgments for threatened species assessment: a case study on a continental scale using email. Methods in Ecology and Evolution, 3, 906-920.
Mellers, B., Ungar, L., Baron, J., Ramos, J., Gurcay, B., Fincher, K., Scott, S.E., Moore, D., Atanasov, P. & Swift, S.A. (2014) Psychological strategies for winning a geopolitical forecasting tournament. Psychological Science, 25, 1106-1115.
Metcalf, S.J. & Wallace, K.J. (2013) Ranking biodiversity risk factors using expert groups - Treating linguistic uncertainty and documenting epistemic uncertainty. Biological Conservation, 162, 1-8.
Morgan, M.G. (2014) Use (and abuse) of expert elicitation in support of decision making for public policy. Proceedings of the National Academy of Sciences, 111, 7176-7184.
Morgan, M.G. & Henrion, M. (1990) Uncertainty: A guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge University Press, New York, New York, USA.
Nickerson, R.S. (1998) Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175.
O'Hagan, A., Buck, C.E., Daneshkhah, A., Eiser, J.R., Garthwaite, P.H., Jenkinson, D.J., Oakley, J.E. & Rakow, T. (2006) Uncertain judgements: eliciting experts' probabilities. John Wiley & Sons, West Sussex, United Kingdom.
Page, S.E. (2008) The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton University Press.
Regan, T.J., Burgman, M.A., McCarthy, M.A., Master, L.L., Keith, D.A., Mace, G.M. & Andelman, S.J. (2005) The consistency of extinction risk classification protocols. Conservation Biology, 19, 1969-1977.
Rothlisberger, J.D., Finnoff, D.C., Cooke, R.M. & Lodge, D.M. (2012) Ship-borne nonindigenous species diminish Great Lakes ecosystem services. Ecosystems, 15, 1-15.
Runge, M.C., Converse, S.J. & Lyons, J.E. (2011) Which uncertainty? Using expert elicitation and expected value of information to design an adaptive program. Biological Conservation, 144, 1214-1223.
Shanteau, J., Weiss, D.J., Thomas, R.P. & Pounds, J.C. (2002) Performance-based assessment of expertise: How to decide if someone is an expert or not. European Journal of Operational Research, 136, 253-263.
Soll, J.B. & Klayman, J. (2004) Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 299.
Speirs-Bridge, A., Fidler, F., McBride, M., Flander, L., Cumming, G. & Burgman, M. (2010) Reducing overconfidence in the interval judgments of experts. Risk Analysis, 30, 512-523.
Surowiecki, J. (2004) The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. Little, Brown, London, United Kingdom.
Tetlock, P. & Gardner, D. (2015) Superforecasting: The art and science of prediction. Random House, New York.
Tversky, A. & Kahneman, D. (1975) Judgment under uncertainty: Heuristics and biases. Utility, probability, and human decision making: Selected proceedings of an interdisciplinary research conference, Rome, 3-6 September, 1973 (eds D. Wendt & C. Vlek), pp. 141-162. Springer Netherlands, Dordrecht.
van Gelder, T., Vodicka, R. & Armstrong, N. (2016) Augmenting expert elicitation with structured visual deliberation. Asia & the Pacific Policy Studies, 3, 378-388.
Wallsten, T.S., Budescu, D.V., Rapoport, A., Zwick, R. & Forsyth, B. (1986) Measuring the vague meanings of probability terms. Journal of Experimental Psychology: General, 115, 348.
Wintle, B., Mascaro, S., Fidler, F., McBride, M., Burgman, M., Flander, L., Saw, G., Twardy, C., Lyon, A. & Manning, B. (2012) The intelligence game: Assessing Delphi groups and structured question formats.
Wintle, B.C., Fidler, F., Vesk, P.A. & Moore, J.L. (2013) Improving visual estimation through active feedback. Methods in Ecology and Evolution, 4, 53-62.
Wittmann, M.E., Cooke, R.M., Rothlisberger, J.D., Rutherford, E.S., Zhang, H., Mason, D.M. & Lodge, D.M. (2015) Use of structured expert judgment to forecast invasions by bighead and silver carp in Lake Erie. Conservation Biology, 29, 187-197.