Research Policy
journal homepage: www.elsevier.com/locate/respol
Research paper
Collingridge and the dilemma of control: Towards responsible and
accountable innovation
Audley Genus a, Andy Stirling b
a Kingston Business School, Kingston University, Kingston upon Thames, KT2 7LB, United Kingdom
b Science Policy Research Unit, University of Sussex, Falmer, Brighton, BN1 9RH, United Kingdom
ARTICLE INFO
Keywords:
Accountability
David Collingridge
Dilemma of control
Governance of innovation
Incrementalism
Responsible innovation
ABSTRACT
The paper critically reviews the work of David Collingridge in the light of contemporary concerns about responsibility and accountability in innovation, public engagement with science and technology, and the role of scientific expertise in technology policy. Given continued interest in his thoughts on the 'social control of technology', and the 'dilemma of control', this attention is both timely and overdue. The paper illuminates a mismatch between the prevalence of citations to Collingridge's work on the dilemma of control in the literature on responsible innovation, and the depth of engagement with his arguments. By considering neglected aspects of Collingridge's substantive, methodological and philosophical analysis, important implications can be drawn for theory and practice relating to the governance of innovation and co-evolution between technology and society. The paper helps to improve understandings of wider political contexts for responsible innovation, especially in relation to anticipatory, participatory and institutional aspects of governance.
1. Introduction
David Collingridge was an important contributor to the field of Science and Technology Studies. An active researcher from the late-1970s to the early-1990s, he developed a distinctive and substantive line of thinking concerning the 'social control of technology' – the scope for conventionally excluded people and interests to shape the forms and orientations of innovation trajectories in particular sectors. Collingridge's work was concerned with increasing social agency over technology away from incumbent interests in what have come to be called innovation systems. It drew on a critique of what he called the 'justificationist school' in decision-making, philosophy, and political science. Language fashions change, and now more is spoken of the 'governance' of science, technology or innovation than of social control. The notion of control itself can be viewed as being as problematic as it is helpful to Collingridge's underlying aims (Smith and Stirling, 2010; Stirling, 2016a). And a succession of vocabularies that have burgeoned in this field largely since Collingridge's work (for instance, around 'precaution', 'participation' and 'engagement') are now being substituted in some quarters by the language of responsible (research and) innovation (Guston et al., 2014; Owen, 2014; Owen et al., 2012, 2013a,b; Stahl, 2012; Stilgoe et al., 2013, 2014; Von Schomberg, 2011, 2015; see: https://www.epsrc.ac.uk; Stirling, 2016b).
Responsible innovation is not a new concern. Academic and policy discussions over responsibilities, risks and 'control' in the governance of science and technology go back many years (Donnelley, 1989; Jonas, 1984). Over 70 years ago landmark contributions to understandings of the history of science and relationships among science and society were produced by Bernal (1939, 1954) and Haldane (1939). In the 1960s, significant contributions emphasised the social responsibility of science and greater public understanding of science (e.g. Rose and Rose, 1969; Ziman, 1968). The British Society for Social Responsibility in Science ran from 1969 to 1991, whilst its US equivalent was active between 1949 and 1976. These societies were concerned with the transparency and openness of science policy-making as well as the environmental and health consequences associated with the operation of new technologies implicated in scientific discoveries and inventions.
There have been longstanding concerns regarding the framing and promotion of 'scientific citizenship' or 'citizen science' (Irwin, 1995, 2001), which now figure in discussions of responsible research and innovation (see the special issue of Public Understanding of Science, edited by Stilgoe et al., 2014). An important development was the emergence of the ELSI approach towards the end of the 1980s, connected with the ethical, legal and societal implications of, for instance, the Human Genome project and the proper governance of scientific research and technological innovation. More recently, there have been concerns to address the limitations of the ELSI approach and to elicit more active or responsive public engagement with science and
technology (Rose, 2012).
http://dx.doi.org/10.1016/j.respol.2017.09.012
Received 25 February 2016; Received in revised form 10 February 2017; Accepted 25 September 2017
Corresponding author.
E-mail addresses: a.genus@kingston.ac.uk (A. Genus), A.C.Stirling@sussex.ac.uk (A. Stirling).
Research Policy xxx (xxxx) xxx–xxx
0048-7333/ © 2017 Published by Elsevier B.V.
Please cite this article as: Genus, A., Research Policy (2017), http://dx.doi.org/10.1016/j.respol.2017.09.012
In the UK, a landmark parliamentary select committee report recognised citizens' distrust of conventionally-institutionalised science, following a number of high profile technological controversies and crises which brought into question the accountability and autonomy of science and the role of society therein (House of Lords, 2000). Moreover, it represented a moment of reflection on what it meant for science and innovation to be conducted responsibly. As subsequent discussion has shown, this 'responsibility' may be understood in relation to what may be considered proper behaviour or 'responsiveness' in defined contexts (i.e. the ethical dimension of responsible innovation), and in terms of who is to be held 'responsible' for what (role responsibility).
Temporally, consequentialist notions of accountability on the part of science/scientists and innovation/innovators pertain to whether and how they should be held to account for the consequences of actions taken in the past (Grinbaum and Groves, 2013; Stilgoe et al., 2013). But they also concern more immediate democratic accountabilities for the contemporary driving motivations of innovation processes (Lee and Petts, 2013) – with respect to impacts which may not have been reasonably foreseeable with any particularity, but which nonetheless arise from the prioritisation in any given setting of some kinds of general orientation for innovation rather than others (Wynne, 1993; 2002). While some analyses of responsible research and innovation (RRI) include consideration of accountability (Lee and Petts, 2013; Sutcliffe, 2011), others are explicitly critical of what is portrayed as the 'traumatic' and 'infantilising' effects on processes of innovation (Grinbaum and Groves, 2013). For the most part, the RRI literature tends to neglect the quality of accountability. The associated relevance of democracy to the orientation of technology is also thereby downplayed. Collingridge's emphasis on Popperian qualities of openness and his development of specific qualities around which to structure accountability offer crucial but under-realised values for understanding responsible research and innovation.
Putting problems of accountability to the fore, this paper seeks to identify aspects of Collingridge's work that have not been especially well assimilated or further developed in attempts to implement a framework for responsible governance of research and innovation. More fruitful analysis would draw on Collingridge's contribution to engage more strongly with accountability in debates bearing on key elements of responsible innovation, such as anticipatory technological decision-making and inclusive deliberation. Within this discussion, Collingridge's ideas are themselves subject to productive critique, which may also helpfully inform understanding and practice of RRI.
The paper has the following structure. Section 2 outlines developments in practice and research connected with building and institutionalising frameworks for responsible innovation. Section 3 discusses how Collingridge's work addresses core concerns of RRI, focusing initially on the 'Collingridge dilemma' and the 'corrigibility' of technology. Section 4 critically reviews the contribution of Collingridge's work to RRI. Section 5 considers revisions and extensions that may be made to Collingridge's contribution which have the potential to improve understandings of responsible innovation. Section 6 concludes the paper, reflecting on implications of the foregoing sections for future research on and practice of more responsible and accountable innovation.
2. The practice and a framework of responsible innovation
Multiple labels, approaches and genealogies alluded to in different areas of the responsible research and innovation literature make this subject difficult to characterise in any definitive way. What is clear is that an expanding body of analysis and policy practice has built up over the last decade around notions of 'responsibility' in this area. This may be identified with a number of key journal articles such as by Guston (2006), Guston and Sarewitz (2002), Roco et al. (2011) and Stilgoe et al. (2013), the book by Owen et al. (2013a), and the launch of the Journal of Responsible Innovation (Guston et al., 2014). Reading across multiple definitions, Wickson and Carew (2014: 255) identify the following core characteristics of RRI: a focus on addressing significant socio-ecological needs and challenges; a commitment to actively engage a range of stakeholders for the purpose of substantively improving decision-making and mutual learning; a dedicated attempt to anticipate potential problems; and a willingness among all participants to act and adapt according to these ideas.
The EU employs RRI as a cross-cutting theme within its Horizon 2020 funding framework (alongside 'science with and for society'). Horizon 2020 is itself a core element of the flagship Innovation Union programme, which in turn is a central aspect of the EU2020 strategy. Taken as a whole, EU initiatives and policies tend to characterise innovation in an undifferentiated way as a self-evidently generally 'good thing' irrespective of the specific kind of innovation involved or the alternatives that might thereby be foreclosed (Stirling, 2014). Thus 'pro-innovation' policies are prized as a means to 'smart growth', which is in turn seen simply in terms of the gross numbers of jobs involved rather than in terms of net comparisons with numbers and kinds of jobs that would be created by the same investments by other means (Stirling, 2010a).
In this view, innovation is whatever happens to emerge from incumbent structures of interest, privilege and power in prevailing innovation systems (Stirling, 2008). Justification is provided by reference to variously direct or indirect engagement with societal challenges, such as those connected with promoting green and secure energy, food security, climate action, and 'smart' transport. But these 'solutions' are typically addressed by starting first with the incumbent innovation trajectory and simply highlighting those problems that it may promise to address. Far less attention is given to any analysis starting with the challenges themselves, in order to decide which innovation trajectories might be most appropriate (Felt et al., 2008).
Of course, this highly pressurized and expedient approach needs to be understood in relation to EU concerns about the need to close the 'competitiveness gap' with other global economic blocs and countries, notionally by increasing R&D (Felt et al., 2013). And it is here that it is relevant that 'responsible innovation' has also been strongly invoked (under similar dynamics) in US policy on research and innovation – notably in the governance of nanotechnology. Examples here are the National Nanotechnology Initiative's strategic plan (NNI, 2011) and the National Science Foundation's Nanotechnology in Society network (Roco et al., 2011).
Such initiatives raise a key point regarding the emergence of responsible innovation. This concerns the potential for understandings of RRI to become unduly attenuated or instrumentalised, resulting in more attention being devoted to deciding on how to implement an incumbent innovation pathway, than on choosing which pathway to follow (STEPS Centre, 2010). To address the shortcomings of these more instrumentalised approaches, it is often urged that responsible innovation move beyond preoccupation with research and development and economic benefits of individual technologies to address the innovation process more fully, including social as well as technical and other aspects (Blok and Lemmens, 2015; Von Schomberg, 2015).
Whilst orientations and emphases vary, four resulting interacting dimensions are highlighted as means by which RRI might mitigate such criticisms. First, RRI aims to be anticipatory in the sense of exploring possibilities (not making predictions) and analysing intended and potentially unintended impacts that might arise. It aims to be 'deliberative', 'inclusively' inviting and listening to wider perspectives from publics and diverse stakeholders. It prioritises 'reflectiveness' regarding underlying purposes, motivations and potential impacts. And finally, RRI is argued to be 'responsive', using this collective process of reflexivity 'to both set the direction and influence the subsequent trajectory and pace of innovation' (Owen et al., 2013b: 38; see also Stilgoe et al., 2013). Among other issues, these processual aspirations of RRI raise a number of implications for the accountability of research and
innovation.
First, proponents of RRI are concerned about the limitations of risk-oriented approaches in providing reliable 'early warnings' of potentially deleterious effects of new technologies (Harremoës et al., 2000). Here, they emphasise instead anticipatory approaches involving expert studies of diverging futures and socio-technical imaginaries. In this view, anticipation in responsible innovation involves an intermediate position between the 'closing down' of governance through the making of predictions and promises and the full 'opening up' of spaces for more direct forms of public accountability in substantive citizen participation (Stirling, 2008).
The timing of accountabilities is also an important aspect of anticipation, with RRI enjoining effective early instigation of upstream governance processes. Whether referred to as deliberation (Owen et al., 2013b) or inclusion (Stilgoe et al., 2013), responsible innovation prioritises the admission of 'new voices' to the governance of science and innovation. A plethora of divergent forms of public engagement have emerged. Yet these have been criticised for remaining marginal to governance of science and innovation, with the key commitments undertaken elsewhere, often even further 'upstream' in innovation processes (Conrad et al., 2011). A further risk of erosion in accountability arises here in tendencies to favour designs for 'invited' forms of public engagement (Wynne, 2007) that reinforce (rather than fully interrogate) political closures (Chilvers, 2008).
For its part, reflexivity is a key quality in inclusive deliberation. Here, RRI literatures are informed by Beck's (1992; 1999) seminal work on reflexive modernisation, in which he argues that increasing difficulties in calculating scientific and technological risks will lead scientists themselves to become more reflexive, heralding what he calls a 'second modernity'. In this vein, Stilgoe et al. (2013) draw on work by Wynne (1993, 2002) to argue for 'institutional reflexivity' among funders, regulators and users of scientific research concerning the assumptions and practices implicated in science and innovation and their governance. But in order to contribute to the 'opening up' of accountabilities, reflexivity must be more than a private process of self-questioning regarding values and interests in science and innovation. Rather than a quality located in individual social actors, reflexivity needs to be recognised as a plural and distributed social capability (Stirling, 2016a). In this sense, reflexivity is a public practice, capacities for which may be enabled by application of standards and codes of conduct (Voss et al., 2006).
In all these ways and more, RRI involves mutual accommodation and adjustment in the needs, interests and values of contending stakeholders. In particular, the quality of 'responsiveness' – for instance according to Owen et al. (2013b: 38) – involves an 'open process' of adaptive learning. In such ways, interwoven principles of anticipatory governance and inclusive deliberation help confer mutual responsiveness on the part of participating interests and enjoin 'reflexivity' over the positions and values that they themselves and others hold. But again, there tends to be relatively little explicit, direct, or substantive provision for addressing power dynamics in these processes of adaptation, learning and responsiveness. It is these kinds of power dynamic that are addressed in wider notions of political accountability.
With these dimensions of RRI in mind, the following section outlines how Collingridge's work has been and could be employed in the elaboration of RRI. In particular, it considers how his ideas on the dilemma of control and corrigibility of decisions about technology might address issues of accountability referred to above.
3. Collingridge's contribution to RRI
When RRI literatures draw on Collingridge, the most explicitly invoked theme is the 'dilemma of control' (or 'Collingridge dilemma') (Asante et al., 2014: 13–14; Fonseca and Pereira, 2013: 51; 2014: 17; Kiran, 2012: 220; Owen, 2014; Owen et al., 2013b: 34; Stilgoe et al., 2013: 1569; Lee and Petts, 2013: 145; Rose, 2012: 9; Van den Hoven, 2013: 80). The dilemma runs thus: attempting to control a technology is difficult because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow (Collingridge, 1980: 19). This leads to the importance of a second theme in more detailed accounts, concerning the corrigibility of innovation trajectories (see: Blok, 2014; Owen et al., 2013b; Lee and Petts, 2013). And alongside these explicit references to his work, Collingridge's thinking also has implicit influence on RRI – for instance in the recognition of the significance of corrigibility in the form of 'responsiveness'. Stilgoe et al. (2013: 1572) describe this as the capacity 'to change shape or direction in response to stakeholder and public values and changing circumstances'.
In general Collingridge's work is used as fundamental grounding for framing: (a) the problem agenda of RRI; and (b) particular strategies for steering technology-society relations more effectively. The dilemma of control has been invoked in a general sense to underpin discussions of how to govern uncertain or potentially undesirable innovations in contexts where knowledge is unavailable or contested (Asante et al., 2014). In this sense, RRI emerges as a direct response to the Collingridge dilemma, in which respect a number of other approaches for governing emerging technologies have been found wanting (Rose, 2012). In essence, the argument is that the Collingridge dilemma can be overcome when responsibility is embedded in emerging technologies in the form of enhanced reflexivity among researchers alongside wider provision for 'upstream' engagement (Fonseca and Pereira, 2013, 2014).
Reference to Collingridge is especially prominent where RRI seeks to address the 'first horn' of the Collingridge dilemma (concerning the dearth of necessary early information about technological implications). Eschewing simplistic instrumental approaches, attention focuses on exposing developments to a range of 'mid-stream, multidisciplinary perspectives' of kinds that were undeveloped when Collingridge first developed his 'dilemma' (Kiran, 2012). Other particular dimensions of RRI that involve elaboration of core ideas from Collingridge include recognition for the importance of continuing dialogue processes as a means to enhance the corrigibility of decisions. In this regard, 'dialogical responsiveness' involves destruction and reconstituting of the identities of those participating in deliberations about scientific research and innovation (Blok, 2014).
Alongside these well-recognised themes, other aspects of Collingridge's work are under-acknowledged in RRI, and offer significant potential for useful influence. For instance, Collingridge's approach to seeking social control over technology is often interpreted simply to involve monitoring and continual search for error, with remediation best facilitated by ensuring that those pathways that are pursued are as corrigible as possible. However, this is only part of the story. Other themes in Collingridge's work that are undervalued in the RRI literature involve other strategies than corrigibility and address the afore-mentioned dimensions of RRI. For instance, in relation to anticipatory technological decision-making Collingridge describes a series of 'equivalent' ways of overcoming obstacles to the control of technology, including: keeping options open; increasing the insensitivity of performance of technology to error; escaping the hedging circle; enhancing controllability; managing entrenchment; reducing dogmatism of experts; and minimising the diseconomies of scale.
Collingridge (1980) argues that keeping future options open facilitates the social control of technology by enhancing the flexibility of decisions. Having a range of technical options available avoids reliance on any one technology. For Collingridge, the choice of which nascent innovation pathways to pursue (or not) is a matter of societal and technological choice, implicated with competing visions of the purposes, benefits and limitations of technology and more or less effective processes for decision-making. In relation to the first horn of his dilemma, the knowledge required to avoid 'big mistakes' may be knowledge pertaining to a class of similar decisions about technology,
albeit that operational knowledge about the technology in question may not yet be available. In Nordmann's (2014) terms, RRI should emphasise the search for alternative scenarios and technological options, rather than comprehensive knowledge of the future. This increases freedom of manoeuvre, opening up a wider range of possible future actions. In this way, a system composed of many small units of operation or production presents far wider options than dependence on a few very large units which rely on highly specialised and capital intensive infrastructure. The trade-off is between flexibility of decisions on one hand and the loss of the economies of scale from the more inflexible technology on the other. Public accountability may also be more practicable in cases when technology is insensitive to error, that is when the time taken to discover and remedy a 'mistaken' decision is short and the costs of the mistake compared with those of following the 'correct' option make little difference to the final pay-off (Collingridge, 1980: 40).
A particular problem for anticipatory decision-making is manifest in what Collingridge (1980: chapter 5) calls the 'hedging circle'. This refers to a process in which liberal 'just-in-case' assumptions (e.g. in the energy sector about future energy demand growth and GDP) interlock with existing 'low variety' supply technology to create a vicious circle in which supply is increased in the expectation of growing demand, and demand growth materialises as consumers adjust to supply increases. Decision-makers understand the cost of error in terms of failure to supply (energy) according to expected demand as generously forecasted. In this way, it appears as if expansion of the prevailing low-variety system (in this case based on centralised generation of non-renewable electricity) has a lower cost of error than investment in a more decentralised supply system or energy efficiency. Subsequent thinking and experience has illuminated this fallacy (GEA, 2012).
Here, Collingridge distinguishes between 'controllability' and 'corrigibility', in that the former relates to the efficacy, cost and timeliness with which wider social agency can be asserted over the orientation of a technological trajectory, and not just to the ease with which specific errors may be corrected. Decisions that are easily controlled will have what Collingridge calls low control cost, which he defines as the costs of applying a remedy to a mistaken decision (Collingridge, 1980: 33). Where such costs are unknown in advance, then options with low fixed costs are preferable. If these decisions are mistaken, then the losses of sunk costs associated with highly capital intensive options may be kept low.
Assuring responsiveness to changed circumstances or values is difficult in situations where technologies are 'entrenched' (Collingridge, 1979; 1980: chapter 3; 1983; c.f. Arthur, 1989 for the related notion of technology 'lock in' referred to in RRI literature). Entrenchment may be thought of in terms of the 'second horn' of the Collingridge dilemma, extending beyond problems of information regarding the future performance of technology to challenges of insufficient agency – a developed technology is more clear in its implications but more entrenched in the face of efforts to reshape it (Owen et al., 2012, 2013b), though continued monitoring may be possible. In addition to entrenchment, however, Collingridge identifies further impediments to responsiveness, which tend not to be considered in RRI literatures.
First, responsiveness may become difficult due to competition (which may for Collingridge be between firms, nation-states or regional blocs). This is due to the ways in which competition can serve to restrict the number of options while dulling the reflexivity, or critical ability, of influential stakeholders. This occurs, for example, when the technology in question becomes so intrinsically embedded in imaginations of the future that competition is based entirely around optimising this trajectory, rather than exploring alternatives.
Secondly, responsiveness is impaired by dogmatism, which serves to reduce societal reflexivity and inclusive deliberation. This involves the many ways in which incumbent interests that are associated with a particular entrenched technology can avoid criticism of their favoured commitments. Dogmatism flourishes in the absence of transparent processes of scrutiny, or where such monitoring is conducted too lightly or by groups unable to exercise independent views of the technology concerned (Collingridge, 1980: chapter 9). Scientific experts and the use of scientific expertise are central to dogmatism. Collingridge argues that scientific experts distort the 'proper task' of technology – which is supposed to be to meet societal needs – and fail to address more human aspects of technology. Instead, they and their work tend to be unduly optimistic, overly technical and serve the narrow needs of large organisations and government, by and for which their expertise is commissioned (Collingridge, 1992: 180–182).
Thirdly, responsiveness is inhibited by diseconomies of scale. Collingridge (1980, 1992) identified a number of indicators of scale, and applied these to a variety of case studies, including system-built high-rise buildings, nuclear power, large irrigation schemes in developing countries, and the US space shuttle. The indicators thereby implicated include long lead time, large unit size, capital intensity and dependence on specialised infrastructure. Collingridge shows how these characteristics of large inflexible technologies may bring economies of scale once they are deployed, but impose severe diseconomies of scale in terms of response time and control costs on society if decisions are subsequently found to be mistaken.
In all these ways, fuller institutionalisation of Collingridge's ideas in RRI might additionally emphasise and combine attention to: (a) flexible, incremental decisions, which are more likely to be taken by (b) incremental decision-making processes accountable to stakeholders who are usually left out of such mechanisms (c.f. Lindblom, 1959, 1990); and (c) increased criticism of (and relaxed reliance on) the worldviews of the kinds of professional experts that tend to be most implicated in RRI – again a key focus of 'accountability' as distinct from 'responsibility' (c.f. Lindblom, 1990). But Collingridge's work is of course also susceptible to critique – not least in relation to other emerging insights in RRI. In particular, this discussion relates to a series of analyses that have emerged since Collingridge ceased active research, including research on: constructive technology assessment and discourse; the dynamics of expectations; and socio-technical scenarios and imaginaries. It is to this that attention will now turn.
4. A critique of Collingridge's work in relation to RRI
Over the last three decades, Collingridge's own approach has been subject to much useful criticism. Sometimes – despite its age – prompting fresh research questions even now, this also offers significant insights for developing RRI agendas. For instance, some critics of Collingridge have read his work as an 'externalist' view, unduly separating relations between 'technology' and 'society' (e.g. Johnston, 1984; Kiran 2012). Collingridge's (1985: 380) (somewhat defensive) response is that (to him) the story of 'technology' obviously involves its social and institutional aspects as well. But it is important here to understand connections between the content of inflexible technologies and the closed processes and interactions (i.e. between policy-makers and large firms) through which they are promoted. In Kiran's (2012) view RRI approaches should not seek to 'appropriate' this position. In other words, governing technologies responsibly requires a more nuanced approach than the relatively simple distinction between 'upstream or downstream design strategies' implied in the Collingridge dilemma. Instead contemporary 'mid-stream' multidisciplinary perspectives recognise the mutual interdependence of 'the technical' and 'the social', and offer the potential to allow participants to work out questions of function and meaning in the fray of sociotechnical development.
A second kind of criticism emanates from contributions to RRI literature which emphasise (and problematise) the responsiveness of actors to each other (e.g. Blok, 2014; De Bakker et al., 2014; Lövbrand et al., 2011; Sykes and Macnaghten, 2013; Van Oudheusden, 2014). These juxtapose ideals of deliberative democracy working towards the 'common good' with corresponding critiques from the field of science
and technology studies (STS) which stress 'the plurality of knowledges and assumptions which can inform collective action' (Lövbrand et al., 2011). Here however, there is actually a strong consistency with Collingridge's position on the role of scientific expertise in technological controversies, which emphasises how scientific experts, called upon to present knowledge for opposing sides in technical controversies, will obviously disagree over interpretations and implications (Collingridge, 1980; Collingridge and Reeve, 1986). Yet Collingridge does envisage that mutual rapprochement of experts may be secured if appeals are successful to background 'values' (Collingridge, 1980: chapter 12). For its part, STS recognises that these values themselves are typically contestable (Felt et al., 2008, 2013).
Here again, however, Collingridge (1992: 186) does nonetheless still converge with pluralist STS, in that his notion of associated trial and error learning requires mutual co-ordination between disparate interest groups sharing power, each having a veto. It is from this (in STS terms) more 'reflexive' process that more flexible sociotechnical configurations may be selected. These may be far from perfect, but they may be expected to better serve the people affected by them, as well as proving better adapted to realities of human ignorance about the future. In any case, this kind of appeal to shared or background 'values' can also be found in the STS-informed RRI literature – for instance in von Schomberg's description of the EU approach to RRI (Von Schomberg, 2013). The extent and depth to which underlying social values must be differentiated, or may be assumed to be shared, could usefully be problematized and subject to further scrutiny in future RRI research.
Two further key points here arise in the extent to which processes of technology development embody or enable democratic influence of those who stand to be affected. Relating to the accountability issues mentioned above, these concern the requisite diversity of stakeholders involved in 'alignment' processes, as well as the capacities of these processes for reflection and responsiveness to plural values and interests. Relevant here is Collingridge's concern that debates about technologies be seen as continuing processes, rather than 'one-off' exercises. Accordingly, his work highlights the need to conceive of accountabilities not consequentially, in relation to postulated future outcomes, but rather pragmatically, in terms of more immediately apprehensible contemporary qualities of innovation processes themselves: like inclusion, openness, incrementalism, flexibility and reversibility.
Also, in Collingridge's terms, the focus should not only be on emerging technologies but also on monitoring those which have become entrenched. Recognising that judgements regarding the flexibility or robustness of technologies are always provisional, Collingridge argues that scrutiny should persist for as long as it is possible for debate to continue. However, he does not say how the category of affected parties is to be determined in practice ex ante. Nor does he offer guidance about how democratic governance should be practised so as to admit such potential participants, let alone how to ensure, in the presence of encompassing power gradients, that they exert influence over decisions. Instead, Collingridge simply analyses cases where such actors are 'missing' (Winner, 1993) and cautions about potentially infinite regress in criticism in (or of) science. Subsequent contributions, including contemporary approaches to RRI, still struggle to address these issues effectively. Here in particular, there seems much scope for further critical research and action.
One area where this challenge was especially addressed is Constructive Technology Assessment (CTA). Also in danger of neglect in subsequent RRI research – and especially in parallel fields like transition management (Meadowcroft, 2009; Shove and Walker, 2007; Smith and Stirling, 2010) – CTA focuses directly on the open and inherently political nature of alignment processes. Focussing on the dynamics of negotiating between promotion and control, a range of strategies emerge on the part of regulators, marketing or environmental departments in firms and their various protagonists in wider political debate (Rip and te Kulve, 2008; Schot, 1992; Schot and Rip, 1997; c.f. Genus, 2006; Genus and Coles, 2005). In Jasanoff's work on sociotechnical imaginaries, as well, visions and understandings on the part of non-specialists are afforded equal attention and significance to expert perspectives, with systematic contrasts observed between circumstances in contrasting national settings (Jasanoff and Kim, 2009). Here, RRI shares in common with many other subsequent academic contributions to technology governance – including transition management and CTA – a tendency to be most preoccupied with interactions between social scientists, scientists, research funders, policy makers and entrepreneurial or innovating firms. They have been less directly concerned with relations (like accountability) extending to NGOs and citizens more widely.
For example, Owen et al. (2013b: 46) refer to the important role of 'universities, institutes, and research funders' who enjoy 'co-responsibility' for defining responsible innovation and for institutionally embedding the RRI framework they advocate. In CTA, reference is made to the relevant institutions and networks that are directly involved [in technology development], but also to 'third parties' who can provide or withhold credibility and legitimation ('for example insurance companies, NGOs and critical or activist groups') (Rip and te Kulve, 2008). Thus citizen perspectives and democratic control of technology can appear secondary (even tertiary) considerations. This is despite declared aspirations, and recognition that the institutionalising of new technologies is an inherently social process in the widest sense, implicating broad societal and environmental concerns and unintended effects. Again, Jasanoff's (2003) discussion of 'technologies of humility' offers some especially salient principles for 'upstream' citizen engagement, of kinds that remain to be institutionalised in any effective way in RRI. Indeed, the tendencies discussed here in some RRI practices and structures – towards relatively instrumental orientation, narrow scope and circumscribed participation – mean these may sometimes more accurately be referred to, in Jasanoff's terms, as 'technologies of hubris'.
For his part, however, Collingridge may in these terms also be criticised for lack of attention to wider patterns of institutional organisation and practice around technological decision-making. Albeit still under-developed, this is an important area of emerging attention in contemporary work in RRI (Stilgoe et al., 2013). Seen, after Giddens, as recursive rules and resources through which social practices are made and reproduced (Giddens, 1984: 24), institutions are by definition the most enduring features of social life in modern societies. They are thus crucial to the dynamics of emerging sociotechnical configurations.
This said, though he does not explicitly use the term, it is clear that Collingridge is strongly aware of institutional dynamics. For example, Collingridge (1982: preface) refers to how his approach can enable 'more effective criticism of those features of existing social institutions that give rise to inflexible technologies'. It could be argued, then, that Collingridge's views have implications for the formal or regulative institutional arrangements which could (or should, in his view) be adopted to promote the responsible governance of innovation. These involve the state avoiding 'gold plating' favoured technologies through financial subsidies or taxation allowances. They also highlight the limitations of a 'picking winners' approach to technology and innovation policy, with incrementalism more closely fitting a policy of generating and preserving alternatives. Thus, to adapt Collingridge's approach, analysis of relevant policies should seek to investigate not just the governmental institutional arrangements, but also wider societal and cultural contexts, which might bear on the shaping of more open forms of research or more flexible configurations for innovation capable of being implemented by smaller and more diverse kinds of organisations (Voss and Freeman, 2015).
In relation to institutionalised practices for monitoring decisions about technology, Collingridge notes that conventional 'administrative rules' serve to impede discovery of expert bias, due to a series of factors. First, there is unequal funding of expertise aligned with different interests, making expertise unfair both in its availability and orientation. Second, there is the tendency to domination of research fields and agendas by a few experts – the so-called 'Kehoe' problem. This refers to
the leading scientific authority on environmental lead in the 1930s, whose findings came to determine an erroneous threshold limit for safe exposure to lead. Third, there are bureaucratic rules which impose secrecy or protect key 'facts' from criticism – as repeatedly documented in public inquiries (Millstone and van Zwanenberg, 2005; Wynne, 2010).
These existing institutional arrangements may be contrasted with another model of the role of scientific experts, which can accommodate the fact that 'experts can be expected to disagree' (1980: 191; see also Collingridge and Reeve, 1986). This is also one in which it is recognised that there is nothing 'wrong' in scientists being 'advocates' for particular perspectives or policies. As explored further by Pielke (2007), however, this holds only for as long as scientific practices and norms are adhered to in a broader sense – as well as wider values of reasoned deliberation (Dryzek, 2006).
Collingridge's work is informed by the contribution of Charles Lindblom on the prevalence and reduction of professional impairment (Lindblom, 1990). A contemporary development of such thinking directs attention to 'normative' institutional arrangements which govern the training of scientists and social scientists – for example, in relation to public engagement and ethics, and in terms of their capacity for self-criticism, reflection and proactive accountability. But institutionalised rules implicated with understandings, beliefs and expectations concerning participation of publics per se do not feature so directly in Collingridge's thinking. This includes such rules concerning the responsibilities of (and requirements for) citizens to play active roles in technology development, to help guard against unfair or unwise innovations. But again, this gap in Collingridge's thinking also seems to persist equally in much contemporary academic thought.
The following section reflects on how RRI might usefully engage more deeply with Collingridge's ideas, and the implications of doing so for developing both understanding and practice of RRI. This discussion also benefits from appreciation of how Collingridge's thinking is informed by Charles Lindblom's work on political incrementalism and the associated 'Popperian' fallibilist approach to decision-making under conditions of uncertainty or ignorance. The discussion offers for RRI a sceptical view of anticipatory decision-making. It also offers insights into the constituting of reflexivity and responsiveness and, to some extent, inclusive deliberation of kinds that are emergent in Collingridge's work but not referenced within RRI literatures, even amongst those contributions which cite him.
5. Building RRI: revisiting Collingridge?
Collingridge's approach emphasises active processes of learning from a particular class of past decisions in order to inform future decision-making about technology development, scientific research and innovation. This contrasts with the more hubristic techno-scientific approaches to 'futures' which form one branch of the technology assessment literature (Selin, 2014). Collingridge is concerned pragmatically with the qualities of emerging innovations, rather than consequentially with their outcomes. This should not be confused with questions about the availability or not of knowledge regarding a focal, emerging technology of a kind that might otherwise be the central framing adopted in current approaches to responsible innovation. Whereas his contributions are often thought of as being preoccupied with lock-in and the closing down of governance processes, it is the incrementalist and Popperian underpinnings of Collingridge's thinking that most emphasise the need for open processes (c.f. Stirling, 2008). These themes are developed in the paragraphs below.
5.1. Incrementalism
Some contributions to RRI literature which refer to Collingridge's work neglect to recognise its immersion in a wider pool of thinking around incremental policy, strategy and decision-making. The writings of Charles Lindblom, for instance, are a particularly strong influence on Collingridge's thinking about what he called the 'management of scale' and flexible decision-making about technology.
For Lindblom (1959), incrementalism involves two aspects: a) an emphasis on relatively small changes from some pre-existing state of affairs (which may nonetheless be large in their cumulative effects); and b) the participation of likely-affected 'partisans' whose continual proactive 'mutual adjustments' help to prevent the more egregious kinds of mistake associated with centralised planning. Later, Lindblom (1990) emphasised the role of, and need for, wider citizen engagements, allowing the probing of past and prospective decisions. So too, in Collingridge's (1980) view, the post-hoc monitoring of past technological decisions should be encouraged in the form of public debate, with recognition that this may periodically entail the reversal of deep commitments. In ways that chime with Van de Poel's (2000) emphasis on the need to include 'outsiders' and Winner's (1993) concern with 'missing' actors, both Lindblom and Collingridge see this in terms of potentially empowering and involving those who are often left out of technological decision-making. To the extent that some of the more instrumental applications of RRI practice are seen (as discussed above) in conservative quarters as substitutes for waves of policy enthusiasm for public engagement, this work of Collingridge's serves as a reminder that wider inclusion and empowerment are central to, and constitutive of, responsibility.
The resulting issues extend very widely. Collingridge identified as one of the most pressing problems of our time the question: 'can we control our [sic] technology?' (Collingridge, 1980: preface). In summarising his overall response, Collingridge notes that what is required extends well beyond specific bolt-on methods, tools and practices, to encompass 'the entire normal machinery of politics' (1983: preface). For him, it is this 'normal decision-making' that needs to be much more strongly conditioned by incrementalism. Thus he states that we 'ought to avoid those decisions that can't be taken incrementally' – a clear, normative commitment with extensive repercussions not just for policy but for the politics of technology more widely. It is in this way that Collingridge seeks to escape the excessively deterministic connotations of 'control' (Collingridge, 1983). And it is in these senses that he offers what are arguably some of his most salient messages for RRI.
5.2. Ignorance and fallibility
Reaching back especially to the work of Popper (1979), Collingridge's incremental approach to the social control of technology is based on the philosophy of fallibilism. Here, Collingridge considers error to be an unavoidable part of being human. He contrasts his emphasis on human fallibility with then (and still) prevailing conditions, under which incumbent governance cultures aspire to, and claim, synoptically rational decision making. With Bayesian probabilistic methods for risk-based decision-making even more prominent today than they were in Collingridge's time, such attitudes are also exemplified in the current emphasis on 'sound science' and 'evidence-based decision-making' – as if these permitted single, self-evidently definitive prescriptions (Stirling, 2010b).
This continuing 'dominant tradition' rests on what Collingridge refers to as the justificationist model, in which only those decisions which can be fully justified are seen as rational. As a result, strong pressures are formed to suppress recognition of uncertainties, ambiguities and irreducible ignorance. In 'Bayesian' and kindred approaches critiqued by Collingridge, these kinds of indeterminacy and intractability are instead treated with aggregated probabilities. It is central to Collingridge's insights that these kinds of reductive 'aggregative' tools are deeply misleading. Yet many parts of the responsible innovation literature that otherwise draw on his work are not so clear concerning these kinds of problems with what continue to be dominant methods (Martin, 2013; Walport, 2014).
There is a key parallel here between the Popperian process in which
scientific progress rests on effective falsification of conjectures, and ways in which wider capacities for reasoned societal interrogation can expose flaws in technology development and operation. It is arguably here that the importance of accountability becomes most significant – as an element of responsibility relating not to anticipated consequences, but to the appropriate prioritisation of 'Collingridge qualities' in emerging technologies themselves. It is in qualities of inclusion, openness, incrementalism, flexibility and reversibility that the parallels are strongest between Popper and Collingridge. Conditions favouring robust knowledge and robust technology are not so different. Again, the implications for RRI are quite direct.
Taken together, there arise in the foregoing discussions a number of searching questions with quite practical implications: 1) What are the different meanings and forms of RRI, and how do these relate to concrete qualities identified by Collingridge? 2) To what extent is RRI becoming institutionally embedded as a means to assert, or alternatively to challenge, processes of justification? 3) What visions and styles of scientific expertise and discourse are most promoted by RRI – openly contending or apparently harmonious? 4) What are the relations between public participation and stakeholder engagement with RRI – as a source of challenge and substantive orientation, or a resource for securing legitimacy? 5) What are the implications of practices and structures of RRI for wider qualities of democratic accountability – do these tend to be emphasised and reinforced, or suppressed and substituted?
6. Conclusion
This paper seeks to make a three-fold contribution. First, it offers a relatively full and systematic critical analysis of the work of David Collingridge, which is acknowledged to have exercised an important influence on the currently burgeoning field of responsible (research and) innovation (RRI), but of which some of the wider and deeper implications have arguably been neglected. Second, the paper has explored some specific implications of this relative neglect for some central themes in RRI – for instance, arising in the importance of pragmatic incrementalism rather than consequentialist justification. The third contribution has been to analyse some significant limitations in Collingridge's approach, as illuminated in the light of subsequent developments. This also raises practical issues for RRI, particularly with regard to the strengthening of 'Collingridge qualities' – including inclusion, openness, diversity, incrementalism, flexibility and reversibility.
Of course, some problems are presented for aspects of Collingridge's analysis by the advent of contemporary 'grand challenges' for innovation policy, such as climate change and social justice. In particular, serious questions appear to be raised for his vision of incrementalism by imperatives to achieve urgent and broad-scale transformations. These may partly be alleviated by pointing to the potentially radical cumulative effects of incrementalism in achieving occasionally transformative emergent cultural 'murmurations' (Stirling, 2016a). But the tension still remains.
Likewise, there are also criticisms that Collingridge presents a decisionistic and 'machinery view' of technology in society, more consistent with 'risk regulation' than the 'innovation governance' of RRI approaches. This links to the paradoxically deterministic connotations of the 'control' metaphor that Collingridge uses to communicate his analysis. Again, Collingridge's choice of metaphor is of its time. His prescriptions of inclusion, openness, diversity, incrementalism, flexibility and reversibility might all now be better expressed in terms of qualities other than 'control' – including care, solidarity, mutualism, non-consequentialist notions of accountability and responsibility itself.
Nonetheless, whilst the questions raised for RRI still stand, these limitations in Collingridge's approach do not detract from its contemporary value. And, though he highlights the problem well, Collingridge's work also fails effectively to resolve how to achieve the necessary kinds of wider and deeper democratic deliberation, including by vulnerable marginalised communities who are repeatedly deeply affected by incumbent patterns of decision-making, but who remain perennially 'missing actors' (Winner, 1993). It is in this area that work in subsequent decades around modalities for participatory deliberation, marginalised interests and responsiveness to 'uninvited' collective action arguably has most to offer (Wynne, 2007).
This reinforces a point that is already quite well appreciated in
particular areas of RRI but which may fruitfully be restated in terms of
three implications of the paper for future RRI research and practice and
the directions that these should take. First there is a need to develop
and invigorate more concrete and assertive frameworks for enabling
practice of critical citizen engagement and participatory deliberation
(see Macnaghten and Chilvers, 2014 for a recent example). Attention to
the full scope of Collingridge's analysis would fortify such moves. But it would also have the effect of raising particular questions and pointers.
Secondly, and in relation to the previous point, RRI needs to have due regard to Collingridge's emphasis on fallibility and the ever-present intractabilities of ignorance. In terms of the direction of ensuing RRI research and practice, this highlights the value of processes and discourses that illuminate, rather than suppress, contention among specialists and wider societal interests. Particular challenges are raised by this for the search for 'right impacts' in RRI (Von Schomberg, 2013). Here, an implication of the paper is that responsibility lies not in engineering consensus, but in exploring dissensus (Genus, 2006). And it is in this regard that the 'Collingridge qualities' – around inclusion, openness, diversity, incrementalism, flexibility and reversibility – offer concrete constituting (albeit sometimes contending) axes meriting further deliberation in RRI.
Thirdly, to the extent that RRI approaches can fully embrace Collingridge's contributions, they will need to grapple not only with contending qualities and principles for rationalistic decision making, but also with the fundamental realities (foundational for Collingridge) that the governance of research and innovation is fundamentally about 'muddling through' in the presence of steep power gradients and strongly asserted interests. In this sense, it is a core feature of responsibility that it is often better engaged with as a struggle against incumbent power than as an instrumental facilitation (Stirling, 2016a). Thus understood, one of the most important properties of responsibility lies in the reinforcing, rather than the attenuation, of accountabilities. In the end, the kinds of humility and pluralism urged by Collingridge in the face of ignorance and contending interests underscore the point that the most responsible way to govern innovation is by democracy itself. The institutions and practices of RRI are arguably only progressive insofar as they help to strengthen, rather than weaken, this general aim.
References
Arthur, W.B., 1989. Competing technologies, increasing returns, and lock-in by historical events. Econ. J. 99 (394), 116–131.
Asante, K., Owen, R., Williamson, G., 2014. Governance of new product development and perceptions of responsible innovation in the financial sector: insights from an ethnographic case study. J. Responsible Innovation 1 (1), 9–30.
Beck, U., 1992. Risk Society: Towards a New Modernity. Sage, London.
Beck, U., 1999. World Risk Society. Polity, London.
Bernal, J.D., 1939. The Social Function of Science. Routledge, London.
Bernal, J.D., 1954. Science in History. Faber and Faber, London (4 volumes).
Blok, V., Lemmens, P., 2015. The emerging concept of responsible innovation. Three reasons why it is questionable and calls for a radical transformation of the concept of innovation. In: Koops, B.-J., van den Hoven, H., Swierstra, T., Oosterlaken, I. (Eds.), Responsible Innovation, vol. 2: Concepts, Approaches, and Applications. Springer, Dordrecht, pp. 19–35.
Blok, V., 2014. Look who's talking: responsible innovation, the paradox of dialogue and the voice of the other in communication and negotiation processes. J. Responsible Innovation 1 (2), 171–190.
Chilvers, J., 2008. Deliberating competence: theoretical and practitioner perspectives on effective participatory appraisal practice. Sci. Technol. Human Values 33, 155–185.
Collingridge, D., Reeve, C., 1986. Science Speaks to Power: the Role of Experts in
Policymaking. Frances Pinter, London.
Collingridge, D., 1979. The entrenchment of technology. Sci. Public Policy, 332–338.
Collingridge, D., 1980. The Social Control of Technology. Pinter, London.
Collingridge, D., 1982. Critical Decision Making. Frances Pinter, London.
Collingridge, D., 1983. Technology in the Policy Process: the Control of Nuclear Power.
Frances Pinter, London.
Collingridge, D., 1985. Controlling technology (response to Johnston). Soc. Stud. Sci. 15 (2), 373–380.
Collingridge, D., 1992. The Management of Scale. Routledge, London.
Conrad, E., Cassar, L.F., Christie, M., Fazey, I., 2011. Hearing but not listening? A participatory assessment of public participation in planning. Environ. Plann. C: Gov. Policy 29, 761–782.
De Bakker, E., de Lauwere, C., Hoes, A.-C., Beekman, V., 2014. Responsible research and innovation in miniature: information asymmetries hindering a more inclusive 'nanofood' development. Sci. Public Policy 41, 294–305.
Donnelley, S., 1989. Hans Jonas, the philosophy of nature, and the ethics of responsibility. Social Res. 56 (3), 113.
Dryzek, J.S., 2006. Deliberative Global Politics: Discourse and Democracy in a Divided
World. Polity Press, Cambridge.
Felt, U., Wynne, B., Callon, M., Gonçalves, M.E., Jasanoff, S., Jepsen, M., Tallacchini, M., 2008. Taking European Knowledge Society Seriously: Report of the Expert Group on Science and Governance to the Science, Economy and Society Directorate, Directorate-General for Research, European Commission. European Commission, Brussels.
Felt, U., Barben, D., Irwin, A., Joly, P.-B., Rip, A., Stirling, A., Stockelova, T., 2013. Science in Society: Caring for Our Futures in Turbulent Times. Strasbourg.
Fonseca, P., Pereira, T.S., 2013. Emerging responsibilities: Brazilian nanoscientists' conceptions of responsible governance and social technology practices. In: Konrad, C., Coenen, A., Dijkstra, C., Milburn, H. (Eds.), Shaping Emerging Technologies: Governance, Innovation, Discourse. IOS Press/AKA, Berlin, pp. 49–65.
Fonseca, P.F.C., Pereira, T.S., 2014. The governance of nanotechnology in the Brazilian context: entangling approaches. Technol. Soc. 37, 16–27.
GEA, 2012. In: Davis, G., Goldemberg, J. (Eds.), Global Energy Assessment Toward a
Sustainable Future. Cambridge University Press, Cambridge, UK.
Genus, A., Coles, A.-M., 2005. On constructive technology assessment and limitations on public participation in technology assessment. Technol. Anal. Strategic Manage. 17 (4), 433–443.
Genus, A., 2006. Rethinking constructive technology assessment as democratic, reflective, discourse. Technol. Forecasting Social Change 73 (1), 13–26.
Giddens, A., 1984. The Constitution of Society: Outline of the Theory of Structuration. Polity, Cambridge.
Grinbaum, A., Groves, C., 2013. What is 'responsible' about responsible innovation? Understanding the ethical issues. In: Owen, R., Bessant, J., Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Wiley, Chichester, pp. 119–142.
Guston, D.H., Sarewitz, D., 2002. Real-time technology assessment. Technol. Soc. 24, 93–109.
Guston, D.H., Fisher, E., Grunwald, A., Owen, R., Swierstra, T., van der Burg, S., 2014. Responsible innovation: motivations for a new journal. J. Responsible Innovation 1, 1–8.
Guston, D.H., 2006. Responsible knowledge-based innovation. Society, 19–21.
Haldane, J.B.S., 1939. Science and Everyday Life. Pelican, London (reprinted 1941).
Harremoës, P., Gee, D., MacGarvin, M., Stirling, A., Keys, J., Vaz, S.G., Wynne, B. (Eds.), 2000. Late Lessons from Early Warnings: the Precautionary Principle 1896–2000. European Environment Agency, Copenhagen.
House of Lords, 2000. Select Committee on Science and Technology. Science and Society,
London.
Irwin, A., 1995. Citizen Science: A Study of People, Expertise and Sustainable
Development. Routledge, London.
Irwin, A., 2001. Constructing the scientific citizen: science and democracy in the biosciences. Public Underst. Sci. 10, 1–18.
Jasanoff, S., Kim, S.-H., 2009. Containing the atom: sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva 47, 119–146.
Jasanoff, S., 2003. Technologies of humility: citizen participation in governing science. Minerva 41, 223–244.
Johnston, R., 1984. Controlling technology: an issue for the social studies of science. Soc. Stud. Sci. 14, 97–113.
Jonas, H., 1984. The Imperative of Responsibility: in Search of an Ethics for the
Technological Age. Chicago University Press, Chicago.
Kiran, A., 2012. Does responsible innovation presuppose design instrumentalism? Examining the case of telecare at home in the Netherlands. Technol. Soc. 34, 216–226.
Lövbrand, E., Pielke, R., Beck, S., 2011. A democracy paradox in studies of science and technology. Sci. Technol. Human Values 36, 474–496.
Lee, R.G., Petts, J., 2013. Adaptive governance for responsible innovation. In: Owen, R., Bessant, J., Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Wiley, London, pp. 143–164.
Lindblom, C.E., 1959. The science of muddling through. Public Adm. Rev. 19, 79–88.
Lindblom, C.E., 1990. Inquiry and Change: The Troubled Attempt to Understand and
Shape Society. Yale University Press, New Haven.
Macnaghten, P., Chilvers, J., 2014. The future of science governance: publics, policies, practices. Environ. Plann. C: Gov. Policy 32, 530–548.
Martin, B., 2013. Innovation Studies: an emerging agenda. In: Fagerberg, J., Martin, B.,
Andersen, E.S. (Eds.), Innovation Studies: Evolution and Future Challenges. Oxford
University Press, Oxford.
Meadowcroft, J., 2009. What about the politics? Sustainable development, transition management, and long term energy transitions. Policy Sci. 42 (4), 323–340.
Millstone, E., van Zwanenberg, P., 2005. BSE: Risk, Science, and Governance. Oxford University Press, Oxford.
NNI (The National Nanotechnology Initiative), 2011. Strategic Plan. http://www.nano.gov/sites/default/files/pub_resource/2011_strategic_plan.pdf.
Nordmann, A., 2014. Responsible innovation, the art and craft of anticipation. J. Responsible Innovation 1 (1), 87–98.
Owen, R., Macnaghten, P., Stilgoe, J., 2012. Responsible research and innovation: from science in society to science for society, with society. Sci. Public Policy 39, 751–760.
Owen, R., Bessant, J., Heintz, M. (Eds.), 2013. Responsible Innovation: Managing the
Responsible Emergence of Science and Innovation in Society. Wiley, London.
Owen, R., Stilgoe, J., Macnaghten, P., Gorman, M., Fisher, E., Guston, D., 2013b. A framework for responsible innovation. In: Owen, R., Bessant, J., Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Wiley, London, pp. 27–50.
Owen, R., 2014. Responsible Research and Innovation: Options for Research and Innovation Policy in the EU. Report for ERIAB.
Pielke Jr., R.S., 2007. The Honest Broker: Making Sense of Science in Policy and Politics.
Cambridge University Press, Cambridge.
Popper, K., 1979. Objective Knowledge: an Evolutionary Approach, revised paperback ed. Oxford University Press, Oxford.
Rip, A., te Kulve, H., 2008. Constructive technology assessment and sociotechnical scenarios. In: Fisher, E., Selin, C., Wetmore, J.M. (Eds.), The Yearbook of Nanotechnology in Society, vol. I: Presenting Futures. Springer, Berlin, pp. 49–70.
Roco, M.C., Harthorn, B., Guston, D., Shapira, P., 2011. Innovative and responsible governance of nanotechnology for societal development. J. Nanopart. Res. 13, 3557–3590.
Rose, H., Rose, S., 1969. Science and Society. Penguin, Harmondsworth.
Rose, N., 2012. Democracy in the contemporary life sciences. BioSocieties 7, 459–472.
STEPS Centre, 2010. Innovation, Sustainability, Development: A New Manifesto. STEPS Centre, Brighton.
Schot, J., Rip, A., 1997. The past and future of constructive technology assessment. Technol. Forecast. Social Change 54 (2/3), 251–268.
Schot, J., 1992. Constructive technology assessment and technology dynamics: the case of clean technologies. Sci. Technol. Human Values 17, 36–56.
Selin, C., 2014. On not forgetting futures. J. Responsible Innovation 1 (1), 103–108.
Shove, E., Walker, G., 2007. CAUTION! Transitions ahead: politics, practice, and sustainable transition management. Environ. Plann. A 39 (4), 763–770.
Smith, A., Stirling, A., 2010. The politics of social-ecological resilience and sustainable socio-technical transitions. Ecol. Soc. 15 (1), 1–13.
Stahl, B.C., 2012. Responsible research and innovation in information systems. Eur. J. Inf. Syst. 21, 207–211.
Stilgoe, J., Owen, R., Macnaghten, P., 2013. Developing a framework for responsible innovation. Res. Policy 42, 1568–1580.
Stilgoe, J., Lock, S., Wilsdon, J., 2014. Why should we promote public engagement with science? Public Underst. Sci. 23, 4–15.
Stirling, A., 2008. Opening up and closing down: power, participation, and pluralism in the social appraisal of technology. Sci. Technol. Human Values 33 (2), 262–294.
Stirling, A., 2010a. Direction, Distribution, Diversity! Pluralising Progress in Innovation, Sustainability and Development. STEPS Working Paper 32. STEPS Centre, Brighton, pp. 1–45.
Stirling, A., 2010b. Keep it complex. Nature 468, 1029–1031.
Stirling, A., 2014. Towards Innovation Democracy: participation, responsibility and precaution in innovation governance. In: Walport, M. (Ed.), Annual Report of the Government Chief Scientific Adviser 2014. Innovation: Managing Risk, Not Avoiding It. Evidence and Case Studies. Government Office of Science, London, pp. 49–62. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/376505/14.
Stirling, A., 2016a. Knowing doing governing: realising heterodyne democracies. In: Voss, J.-P., Freeman, R. (Eds.), Knowing Governance: The Epistemic Construction of Political Order. Palgrave Macmillan, London, Chapter 12.
Stirling, A., 2016b. Addressing scarcities in responsible innovation. J. Responsible Innovation 3 (3), 274–281.
Sutcliffe, H., 2011. A Report on Responsible Research and Innovation. MATTER, London.
Sykes, K., Macnaghten, P., 2013. Responsible innovation: opening up dialogue and debate. In: Owen, R., Bessant, J., Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Wiley, Chichester, pp. 85–108.
Van Oudheusden, M., 2014. Where are the politics in responsible innovation? European governance, technology assessments, and beyond. J. Responsible Innovation 1 (1), 67–86.
Van de Poel, I., 2000. On the role of outsiders in technical development. Technol. Anal. Strategic Manage. 12, 383–387.
Van den Hoven, J., 2013. Value sensitive design and responsible innovation. In: Owen, R., Bessant, J., Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Wiley, London, pp. 75–83.
Von Schomberg, R., 2011. Prospects for technology assessment in a framework of responsible research and innovation. In: Dusseldorp, M., Beecroft, R. (Eds.), Technikfolgen abschätzen lehren: Bildungspotenziale transdisziplinärer Methoden. Springer Fachmedien, Wiesbaden, pp. 39–62.
Von Schomberg, R., 2013. A vision of responsible research and innovation. In: Owen, R., Bessant, J., Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Wiley, London, pp. 51–74.
Von Schomberg, R., 2015. From the responsible development of new technologies towards Responsible Innovation. In: Holbrook, J.B., Mitcham, C. (Eds.), Ethics, Science, Technology, and Engineering: An International Resource, 2nd ed. Cengage, London. Available at: http://renevonschomberg.wordpress.com/from-responsible-development-of-technologies-to-responsible-innovation/.
A. Genus, A. Stirling Research Policy xxx (xxxx) xxx–xxx
8
Voss, J.-P., Freeman, R. (Eds.), 2016. Knowing Governance: The Epistemic Construction of Political Order. Palgrave Macmillan, London.
Voss, J.-P., Bauknecht, D., Kemp, R. (Eds.), 2006. Reflexive Governance for Sustainable Development. Edward Elgar, Cheltenham.
Walport, M., 2014. Innovation: Managing Risk, Not Avoiding It. Evidence and Case Studies. Annual Report of the Government Chief Scientific Adviser. Government Office of Science, London.
Wickson, F., Carew, A.L., 2014. Quality criteria and indicators for responsible research and innovation: learning from transdisciplinarity. J. Responsible Innovation 1 (3), 254–273.
Winner, L., 1993. Autonomous Technology: Technics Out of Control as a Theme in Political Thought. MIT Press, Cambridge, MA.
Wynne, B., 1993. Public uptake of science: a case for institutional reflexivity. Public Underst. Sci. 2, 321–337.
Wynne, B., 2002. Risk and environment as legitimatory discourses of science and technology: reflexivity inside-out? Curr. Sociol. 50, 459–477.
Wynne, B., 2007. Public participation in science and technology: performing and obscuring a political-conceptual category mistake. East Asian Sci. Technol. Soc.: Int. J. 1 (1), 99–110.
Wynne, B., 2010. Rationality and Ritual: Participation and Exclusion in Nuclear Decision-
Making, 2nd ed. Earthscan, London.
Ziman, J., 1968. Science Is Public Knowledge. Cambridge University Press, Cambridge.
... Introduction of deposit systems for transport and packaging materials, e.g., disposable bottles, delivery services, and takeaway food. 33. Increasing climate-friendly short transport and shopping distances, e.g., for supermarkets and restaurants. ...
... At the same time, new technologies could be influenced more easily in the early phase. However, there is a lack of reliable information about future development dynamics and the effects of technology on controlling them in a targeted manner [33]. After investments in the technology and its application, there is an increase in knowledge of its impacts. ...
Article
Full-text available
Current food systems provide relative food security but compromise planetary health and largely fail to address climate change challenges. Regional food supplies can contribute to sustainable production and consumption, reducing the dependence on global supply chains. However, food systems’ complexity and rigidity hinder the implementation of climate-conscious, healthier practices. The City.Food.Basket project explored regional food baskets in urban and peri-urban settings in Austria for the City of Graz and its surroundings, developing models for regional, healthy, and low-climate-impact diets. Against this background, we present a qualitative study that generated three explorative scenarios for promoting regional diets using a Delphi-based expert-stakeholder survey method with participatory elements. A scenario workshop elaborated on interconnecting actions to strengthen regional food supply, including making regional food a tender criterion, reducing waste, ensuring affordability, and shifting subsidies to climate-conscious practices for Graz. While the method successfully provides socio-technical futures for policy orientation, its direct policy impact remains low due to time constraints, short project duration, limited project resources, and differing rationalities between research and policymaking. This study highlights the need for improved connectivity between transdisciplinary research, foresight methods, and regional policy cycles to enhance such projects’ effectiveness.
... To understand this anticipatory character, it can be helpful to imagine technological innovation as a 'stream': to avert or mitigate undesirable outcomes and promote desirable ones 'downstream', it is essential to make informed decisions 'upstream', early in the innovation process. This 'upstream-ethics', as I will refer to the ethics of early-stage technologies, aligns with the broader paradigm of responsible research and innovation (RRI), which is generally geared towards anticipation and integrating societal considerations throughout the research and innovation process (Ryan & Blok, 2023; Genus & Stirling, 2018; Wickson and Carew, 2014). ...
... One often-advocated response to the dilemma of social and ethical control focuses on fostering 'technological flexibility' or corrigibility (Collingridge, 1980; Joly, 2015; Genus & Stirling, 2018). This involves designing technology to be adaptable and responsive to feedback as new social or ethical concerns arise, theoretically allowing society to retain a measure of control. ...
Preprint
Full-text available
The ethics of emerging technologies faces an anticipation dilemma: engaging too early risks overly speculative concerns, while engaging too late may forfeit the chance to shape a technology's trajectory. Despite various methods to address this challenge, no framework exists to assess their suitability across different stages of technological development. This paper proposes such a framework. I conceptualise two main ethical approaches: outcomes-oriented ethics, which assesses the potential consequences of a technology's materialisation, and meaning-oriented ethics, which examines how (social) meaning is attributed to a technology. I argue that the strengths and limitations of outcomes- and meaning-oriented ethics depend on the uncertainties surrounding a technology, which shift as it matures. To capture this evolution, I introduce the concept of ethics readiness: the readiness of a technology to undergo detailed ethical scrutiny. Building on the widely known Technology Readiness Levels (TRLs), I propose Ethics Readiness Levels (ERLs) to illustrate how the suitability of ethical approaches evolves with a technology's development. At lower ERLs, where uncertainties are most pronounced, meaning-oriented ethics proves more effective, while at higher ERLs, as impacts become clearer, outcomes-oriented ethics gains relevance. By linking Ethics Readiness to Technology Readiness, this framework underscores that the appropriateness of ethical approaches evolves alongside technological maturity, ensuring scrutiny remains grounded and relevant. Finally, I demonstrate the practical value of this framework by applying it to quantum technologies, showing how Ethics Readiness can guide effective ethical engagement.
... The work cited on methods often also covers concepts. Project managers should transparently communicate project steps and outcomes, as well as RRI and SA procedures and outcomes, to all stakeholders and the wider public to support trust and accountability (Genus and Stirling, 2018). However, the inclusion and reflexivity keys are also important because all stakeholders and the wider public should understand the project outcomes, such as technologies developed and related research findings. ...
... Transparency and access to data and information are needed both when running a project and after project completion. Transparency of the methods applied and research steps throughout the project is very important for accountability and for enabling open discussion of preliminary and final results within the project team and with stakeholders and the wider public (Genus and Stirling, 2018). Shared (online) platforms and the provision of information on, for example, questionnaires used in surveys, test setups or measurement results of, for example, fruit yields or working hours, facilitate internal traceability of outcomes. ...
Article
Full-text available
Responsible research and innovation concepts are popular at higher levels of organising research policy, which must align with the design and management of individual research projects. However, at this lower level, there is still a need for clearer guidance on how to support responsible research and innovation through the development of socially desirable and sustainable technologies. This is particularly evident in the agri-food sector, where calls for innovation have been on the increase, but novel technologies are often controversial and their contribution to sustainable development is uncertain. Integration of responsible research and innovation with sustainability assessment is required at the early stages of technology development in projects, during which technology development can still respond to social concerns and sustainability assessments. The few first attempts are often vague about the methods applicable in projects to support the sustainable and responsible development of technology. This paper develops a conceptual approach that integrates methods required to support the anticipation, reflexivity, inclusion, and responsiveness keys of responsible research and innovation with sustainability assessment methods, along typical phases of a research project. A case study of agricultural photovoltaics illustrates the applicability of the framework across a full research project cycle. The framework addresses the gap in how to apply methods that support responsible research and innovation and sustainability assessment in research projects. It enables synergies between responsible research and innovation and sustainability assessment. In the first steps of assessment, when the unknowns and uncertainties surrounding novel technologies are great, research and sustainability assessment require systematic anticipation of developments and impacts. 
In this context, sustainability assessment can support reflexivity in more detail than previously suggested approaches.
... 1 It is precisely the possession of this feature by an assortment of related new technologies (often loosely grouped under the label "AI", and including machine learning, deep learning, and neural networks) that has sparked a concern that they represent a potentially insuperable threat to our responsibility practices. In many cases the worries raised are familiar ones, made more acute by the features of these new technologies: the Problem of Many Hands (Thompson, 2014, 2017; van de Poel et al., 2012; van der Poel et al., 2015), the Collingridge Dilemma (Collingridge, 1980; Genus & Stirling, 2018; Levy, 2011; Rosen, 2004; Zimmerman, 2022), and harmful dual-use (Forge, 2010; Vaseashta, 2023) being the foremost examples. However, we will primarily focus on the relatively new worry of techno-responsibility gaps. ...
Article
Full-text available
It has been extensively argued that emerging autonomous technologies can represent a challenge for our traditional responsibility practices. Though these challenges differ in a variety of ways, at the center of these challenges is the worrying possibility that there may be outcomes of autonomous technologies for which there are legitimate demands for responsibility but no legitimate target to bear this responsibility. This is well exemplified by the possibility of techno-responsibility gaps. These challenges have elicited a number of responses, including dismissals of the legitimacy of these demands, attempts to find proximate agents that can be legitimately held responsible, and arguments for prohibiting the use of technologies that may open such gaps. In this piece we present a general argument that an overlooked but valuable option lies in adopting a strategy of taking responsibility for the outcomes of autonomous technologies even when the conditions for being legitimately held responsible are not met. We develop a general argument that the adoption of such a strategy is often justified not only by the demands of being responsible, but by practical considerations rooted in our relationships: the need to preserve the quality of our relationships and the trustworthiness of the socio-technical system that the autonomous technology both results from and is embedded in.
... More ambitiously still, it may be useful to think of RI being applied outside of the relatively limited spheres in which it is currently promulgated. It is well known that, in design, early-stage decisions lock in high proportions of the costs and capabilities of the eventual product: later changes and decisions can only marginally affect outcomes; to a large extent, the die is cast (Genus & Stirling, 2018). In a similar way, then, if RI is only introduced into innovation policy in localised innovation activities in firms and academic research projects funded or otherwise supported by the government, even though the wider innovation challenge affects and is affected by a much wider range of stakeholders, then RI initiatives as classically conceived may be something of a rear-guard action. ...
Chapter
Full-text available
... This issue is also referred to as the Collingridge dilemma: at an early stage of technologies, it is relatively easy to change their trajectory and outcomes, but it is unclear what those outcomes will be. As an innovation matures, its consequences become clearer, but the trajectory becomes more difficult to change (Genus & Stirling, 2018). Despite this uncertainty, early intervention through government policy offers the opportunity to positively shape the development of technology before its trajectory is set. ...
Chapter
This informative Handbook provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI's impact on society and how it should be regulated. This title is also available as Open Access on Cambridge Core.
Article
This paper addresses the challenge of incorporating innovation and structural change in models of economic planning. Previous approaches to economic planning have mostly considered the static problem of the allocation of goods and services, leaving a secondary role (if at all) for the dynamic problem of innovation and change. However, Morozov (2021) argues that the key challenge for any alternative economic system, including planning, is to incorporate a model for progress that can rival the perceived innovative and dynamic nature of capitalism. Finding previous approaches to change in planned economies to be insufficient as central elements of planning technological progress, this paper introduces two new and complementary approaches to planning innovation: Democratic accelerated missions and screening and scaling technologies. Democratic accelerated missions act on the demand side of innovation, translating democratically formulated needs for new capabilities into research and development projects to fulfill these needs. Screening and scaling technologies act on the supply side, selecting promising new technologies based on democratically decided priorities and developing them towards finished products. Both approaches draw extensively on quantitative and qualitative evidence from different strands of literature on innovation to build an empirically grounded model, with a particular focus on (1) democratic decision-making in planning innovation, and (2) incorporating insights from technology prediction and the economics of innovation to steer technological progress. The proposed model demonstrates the feasibility of planning and directing technological progress, and is a first step towards designing institutional and algorithmic structures to this end.
Chapter
Owen, R. & Pansera, M., (2019). Responsible Innovation and Responsible Research and Innovation in “Handbook on Science and Public Policy”, (Eds) Dagmar Simon, Stefan Kuhlmann, Julia Stamm, Weert Canzler, Edward Elgar publishing: Cheltenham. https://www.e-elgar.com/shop/handbook-on-science-and-public-policy
Book
Science and innovation have the power to transform our lives and the world we live in - for better or worse - in ways that often transcend borders and generations: from the innovation of complex financial products that played such an important role in the recent financial crisis to current proposals to intentionally engineer our Earth's climate. The promise of science and innovation brings with it ethical dilemmas and impacts which are often uncertain and unpredictable: it is often only once these have emerged that we feel able to control them. How do we undertake science and innovation responsibly under such conditions, towards not only socially acceptable, but socially desirable goals and in a way that is democratic, equitable and sustainable? Responsible innovation challenges us all to think about our responsibilities for the future, as scientists, innovators and citizens, and to act upon these. This book begins with a description of the current landscape of innovation and in subsequent chapters offers perspectives on the emerging concept of responsible innovation and its historical foundations, including key elements of a responsible innovation approach and examples of practical implementation. Written in a constructive and accessible way, Responsible Innovation includes chapters on: Innovation and its management in the 21st century. A vision and framework for responsible innovation. Concepts of future-oriented responsibility as an underpinning philosophy. Value-sensitive design. Key themes of anticipation, reflection, deliberation and responsiveness. Multi-level governance and regulation. Perspectives on responsible innovation in finance, ICT, geoengineering and nanotechnology.
Essentially multidisciplinary in nature, this landmark text combines research from the fields of science and technology studies, philosophy, innovation governance, business studies and beyond to address the question, "How do we ensure the responsible emergence of science and innovation in society?".
Article
This chapter introduces the components and characteristics of the governance framework. It focuses on the notions of responsibility and accountability and how innovation at the earliest stages of research, and by scientists themselves, can be more open and reflective. Then it turns to consider the types of regulatory tools that might bring more formal accountability to the governance of innovative technologies and their products, focusing on the points within the innovation and product network at which regulatory intervention might be appropriate and effective. It also suggests that there is a necessary dependence on soft law and co-operative approaches to embed notions of responsibility in the early stages of research and innovation. This leads us to consider the fundamental principles of an effective governance framework and to reflect on progress in, and requirements for, development of the essential tools to ensure anticipation, reflection, deliberation, and responsiveness.
Article
This book presents a systematic analysis of how BSE policy was made in the UK and EU, 1986–2004. The main focus is on the role of scientific expertise, advice, and evidence in policy-making processes, and its use by officials and ministers as a political resource. The central argument is that highly political and highly problematic policy decisions were often misrepresented as based on, and only on, sound science. Those tactics required the selective highlighting of scientific uncertainties. Since many of the most crucial policy-sensitive uncertainties were concealed or discounted, research to diminish those uncertainties was not undertaken. Since the claim had been that it was impossible for BSE-contaminated food to cause a human spongiform encephalopathy, when such cases emerged in 1996, the policy-making regime was comprehensively undermined and a crisis ensued. The BSE policy saga is used to develop and refine a general analytical framework with which science-based policy governance can be analysed, providing resources with which the book specifies the conditions under which such policy-making may achieve and reconcile scientific and democratic legitimacy.
Article
In Rationality and Ritual, internationally renowned expert Brian Wynne offers a profound analysis of science and technology policymaking. By focusing on an episode of major importance in Britain's nuclear history – the Windscale Inquiry, a public hearing about the future of fuel reprocessing – he offers a powerful critique of such judicial procedures and the underlying assumptions of the rationalist approach.
Chapter
The Netherlands has learned interesting lessons about ethics and innovation in the first decade of the twenty-first century. A real innovative design for an electronic patient record system or a truly smart electricity meter, would have anticipated or pre-empted moral concerns and accommodated them into its design, reconciling efficiency, privacy, sustainability, and safety. Innovation can take the shape of design solutions to situations of moral overload. Responsible innovation aims at changing the world in such a way that the pursuit of one horn of the dilemma is no longer necessarily at the expense of grabbing the other. It aims at grabbing the bull by both horns. Responsible innovation should, therefore, be distinguished from mere innovation or the adding of mere new functionality. Responsible innovation is the endeavor of attempting to add morally relevant functionality which allows us to do more good than before.