COMMENT

Keep it complex

When knowledge is uncertain, experts should avoid pressures to simplify their advice. Render decision-makers accountable for decisions, says Andy Stirling.
Worldwide and across many fields,
there lurks a hidden assumption
about how scientific expertise
can best serve society. Expert advice is often
thought most useful to policy when it is pre-
sented as a single ‘definitive’ interpretation.
Even when experts acknowledge uncer-
tainty, they tend to do so in ways that reduce
unknowns to measurable ‘risk’. In this way,
policy-makers are encouraged to pursue (and
claim) ‘science-based’ decisions. It is also not
uncommon for senior scientists to assert that
there is no alternative to some scientifically
contestable policy. After years researching
— and participating in — science advisory
processes, I have come to the conclusion that
this practice is misguided.
An overly narrow focus on risk is an inad-
equate response to incomplete knowledge. It
leaves science advice vulnerable to the social
dynamics of groups — and to manipulation
by political pressures seeking legitimacy,
justification and blame management. When
the intrinsically plural, conditional nature
of knowledge is recognized, I believe that
science advice can become more rigorous,
robust and democratically accountable.
A rigorous definition of uncertainty can be
traced back to the twentieth-century econo-
mist Frank Knight [1]. For Knight, “a measur-
able uncertainty, or ‘risk’ proper ... is so far
different from an unmeasurable one that it
is not in effect an uncertainty at all”. This is
not just a matter of words, or even methods.
The stakes are potentially much higher. A
preoccupation with assessing risk means
that policy-makers are denied exposure to
dissenting interpretations and the possibility
of downright surprise.
Of course, no-one can reliably foresee
the unpredictable, but there are lessons to
be learned from past mistakes. For example,
the belated recognition that seemingly inert
and benign halogenated hydrocarbons were
interfering with the ozone layer. Or the slow-
ness to acknowledge the possibility of novel
transmission mechanisms for spongiform
encephalopathies, in animal breeding and
in the food chain. In the early stages, these
sources of harm were not formally charac-
terized as possible risks — they were ‘early
warnings’ offered by dissenting voices. Policy
recommendations that miss such warnings
court overconfidence and error.
[Figure: A UK crop circle, created by activists to signify uncertainty over where genetic contamination can occur. Photo: G. Graf/Greenpeace.]

The question is how to move away
from this narrow focus on risk to broader
and deeper understandings of incomplete
knowledge. Many practical quantitative
and qualitative methods already exist (see
‘Uncertainty matrix’), but political pres-
sure and expert practice often prevent them
being used to their full potential. Choosing
between these methods requires a more
rigorous approach to assessing incomplete
knowledge, avoiding the temptation to treat
every problem as a risk nail, to be reduced
by a probabilistic hammer. Instead, experts
should pay more attention to neglected areas
of uncertainty (in Knight’s strict sense) as
well as to deeper challenges of ambiguity and
ignorance [2]. For policy-making purposes, the
main difference between the ‘risk’ methods
shown in the matrix and the rest is that the
others discourage single ‘definitive’ policy
interpretations.
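To make that contrast concrete, here is a minimal sketch with entirely invented numbers (not taken from any cited study): a risk-style appraisal collapses incomplete knowledge into one definitive figure, whereas an interval treatment from the uncertainty quadrant keeps the spread, and the interpretive choices, visible.

```python
# A minimal contrast (invented numbers) between a 'risk' treatment, which
# collapses incomplete knowledge into one expected value, and an interval
# treatment, which refuses to assign probabilities it does not have.
outcomes = [-50.0, 0.0, 120.0]           # possible impacts of some option

# Risk-style appraisal: assumed probabilities yield a single definitive number.
assumed_probabilities = [0.1, 0.6, 0.3]
expected_value = sum(p * x for p, x in zip(assumed_probabilities, outcomes))
print(f"risk-style answer: {expected_value:+.1f}")       # one number for policy

# Uncertainty-style appraisal: only the bounds are reported, leaving the
# interpretation (optimistic, pessimistic, regret-based) explicit and open.
print(f"interval answer: [{min(outcomes):+.1f}, {max(outcomes):+.1f}]")
```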
ANY JUSTIFICATION
There are still times when ‘risk-based’
techniques are appropriate and can yield
important information for policy. This can
be so for consumer products in normal use,
general road or airline-safety statistics, or
the epidemiology of familiar diseases. Yet
even in these seemingly familiar and
straightforward areas, unforeseen pos-
sibilities, and over-reliance on aggre-
gation, can undermine probabilistic
assessments. There is a need for humil-
ity about science-based decisions.
For example, consider the risk
assessment of energy technologies.
The other graphic (see ‘The perils of
‘science-based’ advice’) summarizes
63 studies on the economic costs aris-
ing from health and environmental
impacts of different sets of energy tech-
nologies. The aim of the studies is to
help policy-makers identify the options
that are likely to have the lowest impact.
This is one of the most sophisticated
and mature fields for quantitative risk-
based comparisons. Individual policy
reports commonly express their find-
ings as if there were little room for
doubt. Many of the studies present no
— or tiny — uncertainty ranges. But
taken together, these 63 studies tell a
very different story [3] — one usually
hidden from policy-makers. The dis-
crepancies between equally authoritative,
peer-reviewed studies span many orders of
magnitude, and the overlapping uncertainty
ranges can support almost any ranking order
of technologies, justifying almost any policy
decision as science based.
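As an illustration of how overlapping ranges can underwrite contradictory rankings, here is a minimal sketch with purely hypothetical cost ranges (not the figures from the 63 studies): the ranking flips depending on which point estimate an analyst chooses to report.

```python
# Hypothetical cost ranges (dollars per megawatt-hour) for two energy options.
# The numbers are illustrative only; they are not taken from the surveyed studies.
ranges = {
    "option_a": (2.0, 150.0),   # (lowest reported result, highest reported result)
    "option_b": (10.0, 60.0),
}

def rank(point_estimate):
    """Rank options from lowest to highest cost under one choice of estimate."""
    return sorted(ranges, key=lambda option: point_estimate(ranges[option]))

# An optimist quotes the lowest result in the literature; a pessimist the highest.
print(rank(lambda r: r[0]))  # ['option_a', 'option_b'] -- A looks cheapest
print(rank(lambda r: r[1]))  # ['option_b', 'option_a'] -- B looks cheapest
```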
This is not just a problem with quantita-
tive analysis. Qualitative science advice is
also usually presented in aggregated and
consensual form: there is always pressure
on expert committees to reach a ‘consensus’
opinion. This raises profound questions over
what is most accurate and useful for policy. Is
it a picture asserting an apparent consensus,
even where one does not exist? Or would it
be more helpful to set out a measured array
of contrasting specialist views, explaining
underlying reasons for different interpreta-
tions of the evidence? Whatever the political
pressures for the former, surely the latter is
more consistent both with scientific rigour
and with democratic accountability?
I believe that the answer lies in supporting
more plural and conditional methods for sci-
ence advice (the non-risk quadrants shown
in ‘Uncertainty matrix’). These are plural
because they even-handedly illuminate a
variety of alternative reasonable interpreta-
tions. And conditional because they explore
explicitly, for each alternative, the associated questions, assumptions, values or intentions [4]. Under Knightian uncertainty, for
instance, pessimistic and optimistic inter-
pretations can be treated separately, each
explicitly associated with assumptions, dis-
ciplines, values or interests so that these can
be clearly appraised. It reminds experts that
absence of evidence of harm is not the same
as evidence of absence of harm. It also allows
scenario analysis and the consideration of
sensitivity, enabling more accountable evalu-
ation. For example, it could allow experts to
highlight conditional decision rules aimed at maximizing the best or the worst possible outcome, or ‘minimizing regrets’ [5].
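As a sketch of what such conditional decision rules look like in practice, the example below applies maximin, maximax and minimax-regret rules to a single invented payoff table; the options, futures and payoffs are hypothetical, and each rule encodes a different, explicit stance towards Knightian uncertainty.

```python
# Invented payoff table: options (rows) against possible futures (columns).
# Under Knightian uncertainty no probabilities are attached to the futures.
payoffs = {
    "option_a": [10, 4, -2],
    "option_b": [6, 5, 3],
    "option_c": [12, 1, -8],
}

def maximin(table):
    """Pessimistic rule: pick the option whose worst outcome is best."""
    return max(table, key=lambda o: min(table[o]))

def maximax(table):
    """Optimistic rule: pick the option whose best outcome is best."""
    return max(table, key=lambda o: max(table[o]))

def minimax_regret(table):
    """Pick the option minimizing the largest regret relative to the best choice in each future."""
    n = len(next(iter(table.values())))
    best_per_future = [max(row[j] for row in table.values()) for j in range(n)]
    regret = {o: max(best_per_future[j] - row[j] for j in range(n)) for o, row in table.items()}
    return min(regret, key=regret.get)

print(maximin(payoffs), maximax(payoffs), minimax_regret(payoffs))
# -> option_b option_c option_a : three rules, three different answers from the same table.
```

Making the rule explicit, rather than burying it in an aggregate figure, is what keeps the value judgement visible and accountable.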
The few sporadic examples of the appli-
cation of this approach show that it can be
practical. One particularly politicized and
high-stakes context for expert policy advice
is the setting of financial interest rates. The
Bank of England’s Monetary Policy Commit-
tee, for example, describes its expert advisory
process as a “two-way dialogue” — with a
priority placed on public accountability.
Great care is taken to inform the commit-
tee, not just of the results of formal analysis
by the sponsoring bodies, but also of com-
plex real-world conditions and perspectives.
Reports detail contrasting recommendations
by individual members and explain reasons
for differences [6]. Why is this kind of thing not
normal in science advice?
When scientists are faced with unmeas-
urable uncertainties, it is much more usual
for a committee to spend hours negotiating
a single interpretation across a spread of con-
tending contexts, analyses and judgements.
From my own experiences of standard-
setting for toxic substances, it would often
be more accurate and useful to accept these
divergent expert interpretations and focus
instead on documenting the reasons. In my
view, concrete policy decisions could still
be made — and possibly more efficiently.
Moreover, the relationship between the
decision and the available science would be
clearer and the inherently political dimen-
sions more honest and accountable.
Problems of ambiguity arise when experts
disagree over the framing of possible options,
contexts, outcomes, benefits or harms.
Like uncertainty, these cannot be
reduced to risk analysis, and demand
plural and conditional treatment. Such
methods can highlight — rather than
conceal — different regulatory ques-
tions, such as: “what is best?”, “what
is safest?”, “is this safe?”, “is this toler-
able?” or (as is often routine) “is this
worse than what we have now?” Nobel-
winning work in rational choice shows
that when ambiguity rules there is no
guarantee, as a matter of logic, that
scientific analysis will lead to a unique
policy answer [7]. Consequently, defini-
tive science-based decisions are not
just potentially misleading — they are a
fundamental contradiction in terms.
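That logical point can be illustrated with a small, self-contained example (the framings and options are hypothetical, not drawn from any real assessment): three equally reasonable framings rank three options differently, and simple pairwise aggregation yields a cycle rather than a unique ‘science-based’ winner.

```python
from itertools import combinations

# Three hypothetical framings of the same decision, e.g. prioritizing safety,
# cost or reversibility, each producing a reasonable but different ranking.
rankings = {
    "framing_safety": ["A", "B", "C"],
    "framing_cost": ["B", "C", "A"],
    "framing_reversibility": ["C", "A", "B"],
}

def pairwise_winner(x, y):
    """Return whichever option a majority of framings place above the other."""
    wins_x = sum(r.index(x) < r.index(y) for r in rankings.values())
    return x if wins_x > len(rankings) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y)}")
# A beats B, B beats C, and C beats A -- a cycle, so no framing-neutral 'best option' exists.
```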
METHODS THAT WORK
One practical example of ways to be
plural and conditional when consid-
ering questions and options, as well
as in deriving answers, is multicri-
teria mapping. Other participatory
and deliberative procedures include
interactive modelling and scenario work-
shops, as well as Q-method and dissensus
methods. Multicriteria mapping makes use
of simple but rigorous scoring and weight-
ing procedures to reveal the ways in which
overall rankings depend on divergent ways
of framing the possible options.

UNCERTAINTY MATRIX
A tool to catalyse nuanced deliberations: experts must look beyond risk to ambiguity, uncertainty and ignorance, using quantitative and qualitative methods. One axis is knowledge about possibilities, the other knowledge about probabilities; each is either unproblematic or problematic.
RISK (probabilities unproblematic, possibilities unproblematic): risk assessment; optimizing models; expert consensus; cost–benefit analysis; aggregated beliefs.
UNCERTAINTY (probabilities problematic, possibilities unproblematic): interval analysis; scenario methods; sensitivity testing; decision rules; evaluative judgement.
AMBIGUITY (probabilities unproblematic, possibilities problematic): interactive modelling; participatory deliberation; focus & dissensus groups; multicriteria mapping; Q-method, repertory grid.
IGNORANCE (probabilities problematic, possibilities problematic): monitoring & surveillance; reversibility of effects; flexibility of commitments; adaptability, resilience; robustness, diversity.
Political pressures tend to push attention away from ‘plural conditional’ methods towards ‘single definitive’ ones.

In 1999, Unilever funded me and colleagues to use multicriteria mapping to study the perspectives of different leading science advisers on genetically modified (GM) crops [8]. The backing of this transnational company helped
draw high-level UK government attention.
A series of civil servants told me, in quite
colourful terms, that results mapped out in
plural, conditional fashion would be “abso-
lutely no use” in practical policy-making. Yet
when a chance finally emerged to present
results to Mo Mowlam, the relevant cabinet
minister, the reception was very positive.
She immediately appreciated the value of
having alternative perspectives laid out for a
range of policy options. It turned out in this case that the real block to a plural, condi-
tional approach was not the preferences of
the decision-maker herself, but of some of
those around her.
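As a rough illustration of the scoring-and-weighting logic behind multicriteria mapping, the sketch below uses purely hypothetical options, criteria, scores and weights (not those from the GM study; real multicriteria mapping elicits them separately from each participant) to show how overall rankings shift with the framing.

```python
# Hypothetical performance scores (0-100, higher is better) for three options
# against three appraisal criteria.
scores = {
    "option_x": {"yield": 80, "environment": 40, "cost": 70},
    "option_y": {"yield": 60, "environment": 75, "cost": 55},
    "option_z": {"yield": 50, "environment": 90, "cost": 40},
}

# Two divergent framings, expressed as criteria weights that sum to 1.
framings = {
    "productivity_first": {"yield": 0.6, "environment": 0.1, "cost": 0.3},
    "precaution_first": {"yield": 0.2, "environment": 0.6, "cost": 0.2},
}

def ranking(weights):
    """Rank options by weighted score under one framing."""
    total = {o: sum(weights[c] * s[c] for c in weights) for o, s in scores.items()}
    return sorted(total, key=total.get, reverse=True)

for name, weights in framings.items():
    print(name, ranking(weights))
# The same scores support a different 'best' option once the framing changes,
# which is exactly the conditionality the mapping is designed to reveal.
```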
In my experience, it is the single defini-
tive representations of science that are most
vulnerable to political manipulation. Plural,
conditional approaches are not immune, but
they can help make political pressures more
visible. Indeed, this is what happened dur-
ing another GM policy process in which I
was involved: the 2003 UK science review of
GM crops. Reporting included explicit dis-
cussion of uncertainties, gaps in knowledge
and divergent views — and was described as
“neither a red nor a green light” for GM tech-
nology. A benefit of this more open approach
is that it helped GM proponents and critics
to work more effectively together during the
committee deliberations, without a high-
stakes, ‘winner takes all’ dynamic. There was
more space to express alternative interpretations, free from implications that one party or another was wrong. This is important in a highly politicized area such as GM science,
where there are entrenched interests on both
sides. Yet this unusual attempt to acknowl-
edge uncertainty was not universally popu-
lar. Indeed, it was also the only occasion, to
my knowledge, on which the minutes of a
UK science advisory committee formally
documented covert attempts to damage
the career of one of its members (me, in
this case) [9]. Perhaps for political — rather
than scientific — reasons, this experiment
towards plural and conditional advice has
not been repeated.
A further argument for using more plural
approaches arises from the state of igno-
rance, in which ‘we don’t know what we
don’t know’. Ignorance typically looms in
the choice of which of a range of feasible,
economically viable future paths to support
— either through funding or regulation —
for emerging technologies. In a finite and
globalizing world, no single path can be fully
realized without detracting from the poten-
tial for others. Even in the most competitive
consumer markets, for
instance, development
routinely ‘locks in’ to
dominant technologies
such as the QWERTY
keyboard or VHS tape.
The same is true of
infrastructures, such
as narrow-gauge rail,
AC electricity or light-
water reactors. This is not evidence of inevi-
tability, but of the ‘crowding out’ of potential
alternatives. Likewise, locking-in occurs in
the prioritizing of certain areas of scientific
enquiry over others. The paths taken by
scientific and technological progress are far
from inevitable. Deliberately or blindly, the
direction of progress is inherently a matter
of social choice [10].
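The ‘crowding out’ dynamic can be illustrated with a toy simulation of increasing returns to adoption (a generic model, not specific to keyboards, tapes or reactors): when each new adopter is disproportionately drawn to whichever technology already has more users, small early fluctuations decide which path gets locked in.

```python
import random

def adoption_run(steps=10_000, seed=None):
    """Toy increasing-returns model: each adopter is disproportionately likely
    to pick whichever technology already has more users, so one option
    eventually crowds the other out."""
    rng = random.Random(seed)
    counts = {"tech_a": 1.0, "tech_b": 1.0}   # symmetric starting position
    for _ in range(steps):
        weight_a = counts["tech_a"] ** 2      # increasing returns to adoption
        weight_b = counts["tech_b"] ** 2
        pick = "tech_a" if rng.random() < weight_a / (weight_a + weight_b) else "tech_b"
        counts[pick] += 1
    return counts

for seed in range(3):
    print(seed, adoption_run(seed=seed))
# Identical rules, different chance histories: runs can lock in to different
# technologies, so the dominant path is contingent rather than inevitable.
```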
A move towards plural, conditional advice
would help avoid erroneous ‘one-track’,
‘race to the future’ visions of progress. Such
advice corrects the fallacy that scepticism
over a specific technology implies a general
‘anti-science’ sentiment. It defends against
simplistic or cynical support for some par-
ticular favoured direction of change that is
backed on the spurious grounds that it is
somehow synonymous with ‘sound science’,
or uniquely ‘pro innovation’.
Instead, plural, conditional advice helps
enable mature and sophisticated policy
debate on broader questions. How reversible
are the effects of a particular path, if we learn
later that it was ill-advised? How flexible are
the associated industrial and institutional
commitments, allowing us later to shift
direction? How adaptable are the innovation
systems? What part might be played by the
deliberate pursuit of diverse approaches —
to hedge ignorance, defend against lock-in
or foster innovation — in any given area?
Thus, such advice provides the basis for
a more equal partnership between social
and natural science in policy advice. Plural
and conditional advice may also help resolve
some polarized fault-lines in current debates
about science in policy. It shows how we
might better: integrate quantitative and
qualitative methods; articulate ‘risk assess-
ment’ and ‘risk management’; and reconcile
‘science-based’ and ‘precautionary appraisal’
methods.
A move towards plural and conditional
expert advice is not a panacea. It cannot
promise escape from the deep intractabilities
of uncertainty, the perils of group dynamics
or the perturbing effects of power. It differs
from prevailing approaches in that it makes
these influences more rigorously explicit and
democratically accountable. ■
Andy Stirling is research director at SPRU
(Science and Technology Policy Research)
and co-directs the joint Centre for Social, Technological & Environmental Pathways to
Sustainability at Sussex University, Falmer,
Brighton BN1 9QE, UK.
e-mail: a.c.stirling@sussex.ac.uk
1. Knight, F. Risk, Uncertainty and Profit (Houghton Mifflin, 1921).
2. Wynne, B. Glob. Environ. Change 2, 111–127 (1992).
3. Stirling, A. Ann. NY Acad. Sci. 1128, 95–110 (2008).
4. Stirling, A. Sci. Technol. Hum. Values 33, 262–294 (2008).
5. Farman, J. in Late Lessons from Early Warnings: The Precautionary Principle 1896–2000 (eds Harremoës, P. et al.) 76–83 (European Environment Agency, 2001).
6. Treasury Committee Inquiry into the Monetary Policy Committee of the Bank of England: Ten Years On (Bank of England, 2007); available at go.nature.com/v2h4al
7. Leach, M., Scoones, I. & Stirling, A. Dynamic Sustainabilities: Technology, Environment, Social Justice (Earthscan, 2010).
8. Stirling, A. & Mayer, S. Environ. Plann. C 19, 529–555 (2001).
9. UK GM Science Review Panel Minutes of the Seventh Meeting (2003); available at go.nature.com/lxbmpb
10. ESRC Centre on Social, Technological and Environmental Pathways to Sustainability A New Manifesto (Univ. Sussex, 2010); available at go.nature.com/znqakg
THE PERILS OF ‘SCIENCE-BASED’ ADVICE
[Figure: a survey of 63 peer-reviewed studies of the health and environmental risks associated with energy technologies (coal, oil, gas, nuclear, hydro, wind, solar and biomass). Individual studies offer conclusions with surprisingly narrow uncertainty ranges, yet together the literature offers no clear consensus for policy-makers. Horizontal axis: economic risk in dollars per megawatt-hour, from 0.01 to 10,000; bars show the lowest and highest results with 25th-percentile, median and 75th-percentile markers. Source: ref. 3.]