ArticlePDF Available

Foresight of evolving security threats posed by emerging technologies

Authors:
  • Ministry of Education, Israel

Abstract and Figures

Purpose – Many emerging technologies are being developed in an accelerating pace and are key drivers of future change. In foresight studies, usually their positive impact on the quality of life is considered or their negative environmental effects. This paper seeks to draw attention to an overlooked “dark side” of new technologies: their potential abuse by terrorists or organized crime. Recent cybercrime events are examples of abuse that perhaps could have been minimized if appropriate foresight studies were performed years ago. This was the aim of the recently completed EU-funded project FESTOS. Design/methodology/approach – Several foresight methodologies were employed. Following a horizon scanning for potentially threatening technologies, a Delphi-type expert survey helped to evaluate critical threat characteristics of selected 33 technologies: the likelihood that each technology will actually come to pose a security threat (in different time frames), the easiness of its malicious use, the severity of the threat, and the most threatened societal spheres. Findings – The results enabled ranking the technologies by their “abuse potential” and “threat intensity”. Certain emerging technologies (or their combinations), regarded as “weak signals”, inspired ideas for potential “wild cards”. In a subsequent workshop, which employed a variant of the “futures wheel” method, four wild-card “scenario sketches” were constructed. These were later developed to full narrative scenarios. Originality/value – The entire process enables the introduction of security foresight into policy planning in a long-range perspective. The foresight results were followed by the evaluation of policy implications and coping with the knowledge control dilemma. The paper illustrates how a mix of foresight methods can help in a continuous analysis of new and threats posed by emerging technologies, thus raising awareness of decision makers and mitigating the risk of unforeseen surprises.
No caption available
… 
Content may be subject to copyright.
Foresight of evolving security threats
posed by emerging technologies
Aharon Hauptman and Yair Sharan
Abstract
Purpose Many emerging technologies are being developed in an accelerating pace and are key
drivers of future change. In foresight studies, usually their positive impact on the quality of life is
considered or their negative environmental effects. This paper seeks to draw attention to an overlooked
‘‘dark side’’ of new technologies: their potential abuse by terrorists or organized crime. Recent
cybercrime events are examples of abuse that perhaps could have been minimized if appropriate
foresight studies were performed years ago. This was the aim of the recently completed EU-funded
project FESTOS.
Design/methodology/approach – Several foresight methodologies were employed. Following a
horizon scanning for potentially threatening technologies, a Delphi-type expert survey helped to
evaluate critical threat characteristics of selected 33 technologies: the likelihood that each technology
will actually come to pose a security threat (in different time frames), the easiness of its malicious use, the
severity of the threat, and the most threatened societal spheres.
Findings – The results enabled ranking the technologies by their ‘‘abuse potential’’ and ‘ ‘threat
intensity’’. Certain emerging technologies (or their combinations), regarded as ‘‘weak signals’’, inspired
ideas for potential ‘‘wild cards’’. In a subsequent workshop, which employed a variant of the ‘‘futures
wheel’’ method, four wild-card ‘‘scenario sketches’’ were constructed. These were later developed to full
narrative scenarios.
Originality/value – The entire process enables the introduction of security foresight into policy planning
in a long-range perspective. The foresight results were followed by the evaluation of policy implications
and coping with the knowledge control dilemma. The paper illustrates how a mix of foresight methods
can help in a continuous analysis of new and threats posed by emerging technologies, thus raising
awareness of decision makers and mitigating the risk of unforeseen surprises.
Keywords Foresight, Security, Emerging technologies, Future threats, Signals of change, Wild cards
Paper type Research paper
1. Introduction
In foresight studies, usually the positive impact of emerging technologies on the quality of life
is considered, or their negative environmental effects. This paper draws the attention to an
overlooked ‘‘dark side’’ of new technologies: their potential abuse by terrorists or organised
crime. Recent debate on the legitimacy to publish (or even to conduct) a research on
genetically engineered ‘‘bird flu’’ virus exemplifies the tension between the benefits of new
technologies and their ‘‘dark side’’ if abused. Would we have been able to foresee in 1993
that something like Google ‘‘Street View’’ will help terrorists to plan an attack in Mumbai in
2008? In 1932, in a famous BBC address H.G. Wells asked, referring to the dangers of
nuclear power: ‘‘Will there be no Foresight until those [atomic] bombs[1] begin to rain upon
us?’’ (Wells, 1932). This, in a nutshell, was our challenge in the EU-funded project
FESTOS[2]. Its main goals were to assess potential abuse threats posed by selected
emerging technologies, to cope with the dilemma of knowledge control, and to propose
policy guidelines to reduce the likelihood of abuse.
DOI 10.1108/FS-05-2012-0036 VOL. 15 NO. 5 2013, pp. 375-391, QEmerald Group Publishing Limited, ISSN 1463-6689
j
foresight
j
PAGE 375
Aharon Hauptman and
Yair Sharan are both based
at Interdisciplinary Center
for Technology Analysis
and Forecasting (ICTAF),
Tel-Aviv University, Tel-Aviv,
Israel.
The authors are grateful to all
FESTOS partners for their
fruitful cooperation: Finland
Future Research Centre (FFRC)
at the University of Turku,
Centre for Technology &
Society (ZTG) at the Technical
University of Berlin, EFP
Consulting UK, and the
University of Lodz, Poland.
Members of three Millennium
Project nodes were involved in
the project: ICTAF (Israel,
project coordinator), FFRC
(Finland) and Dr K. Steinmu
¨ller
of Z-punkt (Germany) as a
partner in the team of ZTG.
Received 21 May 2012
Revised 8 October 2012
Accepted 11 January2013
In this paper we briefly present some results of the Foresight study carried out within the
FESTOS project, namely horizon scanning for potentially threatening technologies,
assessment of threat aspects of selected technologies, and some highlights from ‘‘wild
card’’ (low-probability high-impact) scenarios inspired by them. ‘‘Wild cards’ ’ are
increasingly recognised in recent years as an important Foresight method, in light of
apparent growing frequency of ‘‘strategic surprises’’ and the lack of preparedness (Lee and
Preston, 2012) or even denial of decision makers.
2. Horizon scanning
The horizon scanning effort focused on five fields: ICT[3], nanotechnologies, biotechnology,
robotics, new materials, and converging technologies (Nano-Bio-Info-Cogno). Security
threats under consideration involved potential malicious use by terrorist groups and/or
organised crime. The scanning was based on a broad literature survey, interviews with
experts, and internal brainstormings. In the first stage 80 technologies were identified and
briefly described, including related threat indications. These preliminary indications were
regarded as ‘‘weak signals’’, and were analysed in a later stage. Three broad categories
were observed:
1. Disruption of certain applications, e.g. jamming the communications in intelligent collision
avoidance systems in transportation. This seemingly conspicuous category is
increasingly important with our growing dependence on technologies.
2. Increased accessibility to technologies that once were confined to the military sector or to
unique laboratories, and were prohibitively expensive, e.g. commercial off-the-shelf
(COTS) components that may generate damaging electromagnetic pulses (EMP).
3. Surprising malicious uses of new technologies that are being developed for beneficial
purposes. For instance, employing advanced toy robots for terror attacks, or using
synthetic biology to engineer bacteria that instead of producing fuel consume it.
All the three categories are important in addressing emerging threats. In FESTOS we
decided to concentrate mainly on the third, where we may find the most unexpected
potential threats, signals to ‘‘wild cards’’.
3. Expert survey
The objective of the worldwide online expert survey was to elicit experts’ opinions about the
threat potential of 33 technologies (briefly described in the Appendix Table AI), selected
from the 80 identified in the previous stage. The preliminary scanning and then the selection
of technologies for further evaluation in the survey were two steps of the so-called
‘‘surveillance filtering’’ process, carried out by internal discussions and consultation with
external experts and the project’s Advisory Board. For each technology the experts were
asked to assess:
BWhen will this technology be sufficiently mature[4] to be used in practice?
BHow easy will it be to use it for malicious purposes[5]? (1 ¼not easy at all, 5 ¼very easy).
BHow severe is the potential security threat posed by this technology? (1 ¼very low
severity, 5 ¼very high severity).
BThe likelihood that it will actually come to pose a security threat, in different time frames
(1 ¼very unlikely, 5 ¼very likely).
BTo which societal spheres it will pose a security threat.
About 280 experts responded (namely, answered at least part of the questions). Most come
from the academia or research institutes with expertise in specific technology fields, and 50
per cent have high or medium experience in security. They indicated their level of knowledge
about each technology: expert, knowledgeable, or familiar[6].
PAGE 376
j
foresight
j
VOL. 15 NO. 5 2013
A striking and important general finding was that majority of experts agree that the public is
rather badly informed about dangers of new technologies, and that governments tend to
underestimate the potential threats. Main results of the survey are presented below
(Hauptman et al., 2011).
3.1 Maturity timeframes
Table I presents the distribution of the estimated time of ‘‘sufficient maturity to be used in
practice’’ for each technology.
The technologies under consideration can be roughly divided into four groups according to
their estimated (median) time of maturity:
1. Short term (now-2015). RFID, smartphone technologies mash-ups, cloud computing,
tailored nanoparticles, and new gene transfer technologies.
2. Medium term (2016-2025). Internet of things (IoT), ultra-dense data storage, advanced
AI, autonomous mini robots, AI-based robot-human interaction, ethical control of robots,
robotic artificial limbs, energetic nanomaterials, molecular manufacturing, molecular
nanosensors, crystalline polymers, cyborg insects, 3-d printers, synthetic biology,
DNA-protein interaction, Induced Pluripotent Stem Cells, Brain-Computer Interfaces.
3. Long term (2026-2035). Self-replicating nanoassemblers, medical nanorobots,
Nanotechnology-enabled brain implants, Human enhancement based on NBIC[7]
convergence, programmable matter, processes and materials for nuclear technologies,
Table I Maturity timeframes – distribution of responses
Technology
Now-2015
(%)
2016-2020
(%)
2021-2025
(%)
2026-2035
(%)
Later or never
(%) median N
Internet of things 33.3 50.9 10.5 1.8 3.5 2018 57
RFID 72.2 13.9 5.6 2.8 5.6 2012 36
Smartphone technologies mash-ups 84.4 12.5 0.0 0.0 3.1 2012 32
Cloud computing 84.4 15.6 0.0 0.0 0.0 2012 32
Ultra-dense data storage 26.3 63.2 10.5 0.0 0.0 2018 19
Artificial intelligence 27.7 21.3 19.1 14.9 17.0 2018 47
AI-based robot-human interaction 15.8 21.1 42.1 5.3 15.8 2023 19
Autonomous toy robots 28.6 57.1 0.0 7.1 7.1 2018 14
Robotic artificial limbs 22.2 55.6 11.1 11.1 0.0 2018 9
Ethical control of robots 14.3 28.6 14.3 14.3 28.6 2023 7
Swarm robotics 0.0 22.2 11.1 66.7 0.0 2030 9
Molecular manufacturing 10.0 20.0 40.0 10.0 20.0 2023 10
Self-replicating nanoassemblers 0.0 16.7 8.3 33.3 41.7 2030 12
Medical nanorobots 9.1 9.1 0.0 54.5 27.3 2030 11
Tailored nanoparticles 57.9 42.1 0.0 0.0 0.0 2012 19
Energetic nanomaterials 28.6 50.0 7.1 0.0 14.3 2018 14
Molecular nanosensors 46.2 38.5 7.7 7.7 0.0 2018 13
Brain implants 0.0 13.3 20.0 40.0 26.7 2030 15
Brain-to-brain 0.0 11.1 0.0 66.7 22.2 2030 9
Cyborg insects 15.4 30.8 30.8 7.7 15.4 2023 13
Brain computer interface 25.0 25.0 25.0 16.7 8.3 2023 12
Human enhancement 5.9 17.6 17.6 47.1 11.8 2030 17
Metamaterials and optical cloaking 4.0 32.0 8.0 4.0 52.0 2030 25
Water catalyzing chemical reactions 4.0 20.0 24.0 4.0 48.0 2030 25
Programmable matter 8.0 16.0 12.0 16.0 48.0 2030 25
3-D printing 30.8 19.2 7.7 15.4 27.0 2018 26
Future fuels 16.0 8.0 32.0 4.0 40.0 2023 25
Crystalline polymers 34.6 11.5 26.9 3.8 23.0 2023 26
Synthetic biology 15.4 34.6 23.1 11.5 15.4 2018 26
DNA-protein interaction 19.0 28.6 19.1 4.8 28.6 2023 21
Gene transfer 50.0 25.0 12.5 0.0 12.5 2012 24
iPS cells 5.9 23.5 29.4 11.8 29.4 2023 17
Bio-mimicking for fluid mixing 13.3 6.7 13.3 26.7 40.0 2030 15
Note:
a
Radio frequency identification
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 377
water catalysing explosive reactions, Bio-mimicking for fluids mixing, metamaterials and
‘‘optical cloaking’’, swarm robotics.
4. In some cases a significant difference was found between the estimated maturity times.
For instance, 16 per cent of the experts assessed that ‘‘AI-based robot-human
interaction’’, ‘‘synthetic biology’’ and ‘‘cyborg insects’ ’ will be sufficiently mature before
2015, while the same percentage estimated that this will happen after 2035 (or never).
Around 50 per cent think that some technologies will mature after 2035, (‘‘optical
cloaking’’, ‘‘programmable matter’’ and ‘ ‘water catalysing chemical reactions’’). Naturally,
for many new technologies the uncertainty regarding their time of maturity is high and
therefore significant disagreement between experts is not surprising.
3.2 Severity of threat and easiness of abuse
The technologies can be roughly divided into three groups by their severity of threat, based
on the average severity values resulting from the survey (Table II):
1. Low severity (1-2.50). Molecular manufacturing, brain-computer interface, crystalline
polymers, ethical control of robots, robotic artificial limbs, bio-mimicking for fluid mixing,
molecular nanosensors, iPS cells.
2. Medium severity (2.51-3.39). New gene transfer technologies, cyborg insects, energetic
nanomaterials, RFID, autonomous mini robots, AI-based robot-human interaction, swarm
robotics, water catalyzing explosive reactions, brain implants, ultra-dense data storage,
Table II Severity of threats, easiness of malicious use, and potential of abuse
Technology
A
Easiness of malicious use
B
Severity of threat
C
Potential of abuse
(product of A and B)
Smartphone mash-ups 3.69 3.49 12.88
Internet of things 3.61 3.49 12.60
Cloud computing 3.29 3.53 11.61
New gene transfer technologies 3.52 3.22 11.33
Advanced artificial intelligence 3.21 3.43 11.01
Synthetic biology 3.16 3.40 10.74
Cyborg insects 3.33 3.08 10.26
Energetic nanomaterials 3.00 3.33 9.99
RFID 3.14 3.03 9.51
Autonomous mini-robots 3.36 2.83 9.51
AI-based robot-human Interaction 3.00 2.94 8.82
Swarm robotics 2.89 3.00 8.67
Water catalysing explosive reactions 2.56 3.38 8.65
Brain implants 2.73 3.07 8.38
Ultra-dense data storage 3.05 2.72 8.30
Human enhancement 2.63 3.13 8.23
Nanoassemblers 2.75 2.92 8.03
3-D printing 2.89 2.71 7.83
Metamaterials and optical cloaking 2.50 2.95 7.37
Tailored nanoparticles 2.53 2.89 7.31
Future fuels and materials for nuclear technologies 2.33 3.07 7.16
DNA-protein interaction 2.58 2.58 6.65
Programmable matter 2.29 2.79 6.39
Molecular manufacturing 2.50 2.50 6.25
Medical nanorobots 2.27 2.73 6.20
Brain-to-brain communication 2.25 2.56 5.76
Brain-computer interface 2.33 2.42 5.64
Crystalline polymers 2.56 2.11 5.40
Ethical control of robots 2.29 2.17 4.97
Robotic artificial limbs 2.63 1.78 4.68
Bio-mimicking for fluid mixing 1.92 2.07 3.98
Molecular nanosensors 2.08 1.85 3.85
iPS cells 1.44 1.87 2.68
PAGE 378
j
foresight
j
VOL. 15 NO. 5 2013
human enhancement, nano-assemblers, 3-d printers, ‘‘optical cloaking’ ’, tailored
nanoparticles, fuels and processes for nuclear technologies, DNA-protein interaction,
programmable matter, medical nanorobots, brain-to-brain communication.
3. High severity (3.40-5). Cloud computing, internet of things, smartphone mash-ups,
advanced AI, synthetic biology.
A useful way to prioritise the technologies is by multiplying the easiness of malicious use by
the severity of threat, which we interpret as the ‘‘potential of abuse’’ (column C in Table II).
The top ten technologies in Table II have relatively high potential of abuse (C .9) as they
exhibit rather severe threat potential and could be relatively easily used for malicious
purposes. Special attention should be paid in particular to those which are expected to
mature in the relatively near future, e.g. cloud computing, RFID, smartphone technologies
mash-ups, and new gene transfer technologies.
In Figure 1 the technologies are mapped by their threat severity vs easiness of malicious
use.
3.3 Likelihood to actually pose a security threat in the future
The (average) likelihoods are shown in Table III. It should be noted that a technology can be
maliciously used even before it is sufficiently mature for regular purposes, e.g. if it is in a
phase of prototype testing and does not comply yet with safety standards. It might be of
interest to present the severity of potential threats and the maximum likelihoods along the
whole time interval (Figure 2), which represents the likelihood that the technology might pose
a security threat in general, no matter when. Technologies in the upper right quartile have
high likelihood to pose a highly severe threat: internet of things, RFID, smartphone
technologies mash-ups, cloud computing, advanced AI, tailored nanoparticles, energetic
nanomaterials, cyborg insects, ‘‘optical cloaking’’, water catalysing explosive reactions,
synthetic biology, human enhancement, and new gene transfer technologies.
On the other hand, looking at technologies with low likelihood (less than 3.0) combined with
high (more than 3.0) severity of threat could point to potential wild cards (low-likelihood but
high impact events), that may also imply a need for special attention.
Figure 1 Severity of threat versus easiness of malicious use
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 379
Figure 2 Severity of threat versus maximim likelihood to pose a threat
Table III Likelihood of posing a threat in different time intervals
Technology Now-2015 2016-2020 2021-2025 2026-2035 After 2035 Never N
Internet of things 2.57 3.11 3.6 3.51 3.23 1.46 54
RFID 2.75 3.06 3.18 3.03 3 2 36
Smart mobile 3.06 3.33 3.44 3.15 2.96 2.08 31
Cloud computing 3 3.1 3.14 3.04 2.89 1.78 32
Ultra-dense data storage 1.83 2.5 2.88 3.12 3.24 2.15 18
Advanced AI 2.07 2.56 3.13 3.43 3.71 1.71 46
AI robots 1.59 2 2.25 2.65 2.94 17
Autonomous robots 1.71 2.36 3.07 3.14 3.36 14
Robotic artificial limbs 1.22 2.22 2.67 2.78 2.67 2.25 9
Ethical control 1.57 1.86 1.86 1.71 2 1.67 7
Swarm robotics 1.33 1.44 1.89 2.56 2.89 2 9
Molecular manufacturing 1.38 1.88 2.44 3 3 2.25 9
Nanoassemblers 1.1 1.1 1.67 2.11 2.73 2.43 11
Medical nanorobots 1 1.25 1.78 2.44 3.11 1.63 9
Tailored nanoparticles 2.56 3.11 3.18 3.06 3.07 1.56 18
Energetic nanomaterials 2.2 2.91 3.3 3.9 3.78 1.17 11
Molecular nanosensors 1.38 1.88 2.44 3 3 2.25 9
Brain implants 1 1.64 2.07 2.46 2.85 15
Brain-to-brain communication 1 1.11 1.44 2 2.56 10
Cyborg insects 1.42 2.38 2.92 3.08 3.17 1.2 13
Brain-computer interface 1.7 1.9 2.11 2.2 2.22 12
Human enhancement 1.07 1.69 2.07 2.67 3.13 16
Optical cloaking 1.21 1.99 2.56 3.33 3.82 1.39 19
Water catalyzing reactions 1.44 2.01 2.69 2.94 3.58 1.81 17
Programmable matter 1.51 1.95 2.41 3.14 3.40 2.27 17
3-D printing 2.13 2.68 3.18 3.53 3.56 1.49 20
Future fuels for nuclear. . . 1.47 2.07 2.78 3.38 3.71 1.76 17
Crystalline polymers 2.12 2.40 2.99 3.54 3.38 1.18 17
Synthetic biology 1.91 2.59 3.14 3.64 4.15 2.13 22
DNA protein interaction 1.81 2.08 2.73 3.07 3.20 3.03 18
Gene transfer 2.23 2.94 3.41 3.41 3.47 2.38 22
iPS cells 1.21 1.31 1.69 1.86 2.31 2.08 16
Bio-mimicking for fluid mixing 1.35 1.60 1.72 1.89 2.26 1.54 13
PAGE 380
j
foresight
j
VOL. 15 NO. 5 2013
It is interesting to observe the ‘‘change over time’’ of the average values of perceived
likelihood to actually pose a threat. This dynamics is shown in Figures 3-8. For most
technologies the likelihood rises with time, but in some cases it declines in later stages,
presumably because the experts envision that preventive means will be effectively
applied.
Figure 3 Liklihood to pose a threat ICT
Figure 4 Liklihood to pose a threat robotics
Figure 5 Liklihood to pose a threat nanotechnology
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 381
3.4 The impact on society
The experts were asked to which societal spheres each technology will pose a security
threat (multiple choice):
BPeople.
BInfrastructures.
BEconomy.
Figure 6 Liklihood to pose a threat – converging technologies
Figure 7 Liklihood to pose a threat – materials
Figure 8 Liklihood to pose a threat – biotechnology
PAGE 382
j
foresight
j
VOL. 15 NO. 5 2013
BEnvironment.
BPolitical systems.
BValues.
The percentages of respondents who opted for each sphere are shown in Table IV. Due to
multiple choice, the sum of percentages across each technology can vary between 0 and
600. We interpret this sum (the right column) as the ‘‘overall intensity of threat’’.
Evidently some technologies potentially threaten several societal spheres while others
affect fewer spheres. It is interesting to show for each sphere the number of
technologies regarded as threatening by more than 50 per cent of respondents
(Figure 9). Broadly speaking, most technologies pose threats to people. ICTs could also
threaten the economy and infrastructures, new materials affect mainly the environment
and infrastructures, nanotechnologies and biotechnology threaten mainly the
environment, and converging technologies can also threaten political systems and
values.
Respondents’ comments underscore the insufficient awareness (even of experts!) of
potential security threats, and hence the importance of foresight studies like FESTOS. A
notable remark of one expert:
Going over your questions I suddenly realized how smilingly innocent technologies can pose
severe security threats in the near future. Being a relativelyexperienced individual, yet unaware of
the above, I would say now that the public is not informed about such threats . . .
Table IV Intensity of threats
Technology
Economy
(%)
Environment
(%)
Infrastructures
(%)
People
(%)
Political
systems
(%)
Values
(%)
Overall
intensity of
threat
Advanced AI 67 38 76 82 51 31 345
Human enhancement 38 6 38 94 69 81 326
Swarm robotics 67 67 78 78 11 22 323
Cyborg insects 42 67 92 92 8 17 318
Internet of things 66 22 80 93 24 29 314
Water catalysing explosive reactions 42 58 92 83 33 0 308
Fuels and processes for nuclear technologies 25 83 83 92 25 0 308
AI-based robot-human interaction 41 41 59 88 35 41 305
Cloud computing 90 10 65 81 32 19 297
Programmable matter 45 64 73 73 18 18 291
Brain-to-brain communication 40 10 10 100 60 70 290
Molecular manufacturing 50 88 38 75 25 13 289
Self-replicating nanoassemblers 55 82 55 73 18 0 283
Energetic nanomaterials 25 58 83 92 25 0 283
RFID 54 9 63 91 23 31 271
Smartphone mash-ups 41 13 56 94 34 31 269
Autonomous mini robots 43 36 71 93 14 29 268
3D printers 75 33 58 75 17 8 266
Ultra-dense data storage 89 17 33 78 39 6 262
Metamaterials and optical cloaking 46 23 85 69 31 8 262
Synthetic biology 23 69 15 100 23 31 261
Nano-enabled brain implants 7 20 13 93 33 67 233
DNA-protein interaction 17 75 8 100 8 25 233
Tailored nanoparticles 16 84 21 100 0 11 232
Gene transfer 27 82 0 82 9 27 227
Ethical control of robots 17 50 17 83 33 17 217
Brain-computer interface 8 8 17 100 33 50 216
Molecular sensors 20 30 20 100 40 0 210
Crystalline polymers 36 45 45 82 0 0 208
Medical nanorobots 18 46 9 100 9 18 200
Bio-mimicking for fluid mixing 13 50 13 100 0 0 176
iPS cells 9 18 0 82 0 27 136
Robotic artificial limbs 0 0 0 89 11 33 133
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 383
Some experts asserted that although the technologies are new, the threats are not, because
similar malicious uses are already possible by existing technologies. Certain
nanotechnologies, in particular nanoassemblers and molecular manufacturing, are
regarded as ‘‘too speculative’’ by some respondents. However, there is clear
disagreement about that. While one expert claims that molecular manufacturing is ‘‘too
speculative to deserve attention at the moment’’, another one says that it is ‘‘no more
dangerous than organic chemistry’’, and a third one thinks that the same technology ‘‘could
be dangerous by scaling down the resources and facilities needed to manufacture other
risky technologies [. . .] enabling creative malefactors to invent entirely new categories of
threats there is no general preparedness or recognition of’’. Similarly, while many
respondents stress the potential threats of tailored nanoparticles (‘‘a nice weapon for a
terrorist’’), others think that the perceived dangers are overestimated. Many experts
stressed the well-known fact that all technologies are inherently prone to potential misuse,
and that the necessary regulations usually lag behind the technologies. Moreover, risk policy
is problematic because, as one expert put it, ‘‘Society tends to misestimate risks, and be
very over-confident about its estimates. Certain minor risks are exaggerated, while others
are regarded as silly.’’
3.5 FESTOS alerts
Based on the foresight study ‘‘FESTOS alerts’’ were prepared, in order to present the results
to policy makers in a clear and concise form, including some preliminary
technology-specific policy implications. As an example we present below one ‘‘alert’’,
concerning the internet of things.
4. Threat scenarios
The selected technologies with their threat indications are ‘‘weak signals’’; some of them
may hint at ‘‘wild cards’’: surprising low-probability high-impact events (Petersen and
Steinmu
¨ller, 2009). Consider, for example, the combination of the ‘‘internet of things’’ with
‘‘molecular manufacturing’’ and/or ‘‘programmable matter’’. One could imagine future
products with built-in capabilities of reconfiguration and recycling. What about potential
abuse? This was the idea behind one of the FESTOS narrative scenarios inspired by these
technologies. In this scenario ordinary products and gadgets are nano-based and can be
set to self-destruct (by terrorists? organised crime?) with a remote wireless signal. This
scenario was called ‘‘at the flea market’’, because of the obvious economic value of
‘‘pre-nano’’ products in this situation.
Figure 9 The number of technologies that pose a threat to each of the societal spheres
PAGE 384
j
foresight
j
VOL. 15 NO. 5 2013
The scenario construction consisted of the following stages:
1. Development of security climates: four different societal frameworks in which the
scenarios occur. These backgrounds, together with the selected technologies, triggered
imagination and helped assess impacts.
2. The central stage was the Scenario Workshop with experts in technology, security and
administration. The workshop was divided into six steps:
Bsuggestion of 30 wild card ideas inspired by the technologies;
Bselection of the four most interesting ideas;
Bpreliminary development of wild card events (each of them against one of the
previously developed ‘‘security climates’’);
Bimpact analysis;
Bpresentation of narrative ‘‘scenario sketches’’; and
Bfeedback from the participants.
From a methodology perspective, four methods were applied:
Bbrainstorming;
Bfutures wheel (Glenn, 2009);
Bsecurity cafe
´(modified version of World Cafe
´ Brown and Isaacs, 2005); and
Bimpact assessment (against selected societal dimensions).
3. Writing full narrative scenario drafts, incorporating the workshop results.
4. Obtaining online feedback from experts, and incorporating it in a revised version of the
scenarios.
Due to space limitation we only highlight here a concise description of the narrative
scenarios fully developed in the above process[8]:
B‘‘Cyborg-insects attack!’’: Swarms of cyborg-insects (insects with implanted electronics)
attack people, animals and agriculture crops.
B‘‘The genetic blackmailers’’: DNA of human individuals is misused for extortion.
B‘‘At the Flea Market’’: everyday, intelligent nanotechnology-based products are
programmed to ‘‘self-destruct’’ with a wireless signal.
B‘‘We’ll change your mind...’’: a terrorist group uses a virus to change the behaviour of a
portion of the population for a certain period of time.
The full narrative scenarios (Dienel and Peperhove, 2011) as well as other FESTOS results
(e.g. coping with the knowledge control dilemma) deserve a separate article.
Conclusions
This paper presents a foresight process which identifies and assesses evolving security
threats posed by the abuse of emerging technologies. It reflects a pressing need for
continuous analysis of the unfolding technology landscape for potential threats. The
foresight study provided evidence of riskier fields of research and technology. Tens of
technologies give rise to novel security threats, once available to evil actors such as terror
and crime groups. The study covered technologies from leading fields: nanotechnology,
biotechnology, information technology, materials, robotics as well as their convergence.
These are only the tip of the iceberg. Our study could pave the way to further foresight
studies, widening the scope. We view the emerging technologies as conveying signals of
change, with the following characteristics:
BThey contain future-oriented information.
BHardly perceptible at present, but will constitute a strong trend in the future.
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 385
BUnlikely events, not based on current trends, maybe pursued by pioneers of a certain
field.
BComplete surprises to many people.
Such signals of change are important early warnings of possibly emerging threats. They
might be surprising for the general public, but plausible to experts. The signals were
assessed by the potential for abuse of a technology, its likelihood to pose a new threat, and
the intensity of this threat. The results enabled us to prioritise the technologies which carry
these signals of change (Auffermann and Luoto, 2011).
Table V shows the top ten potentially most threatening technologies prioritised by the two
criteria developed in this study: the potential of abuse (multiplication of the easiness of
malicious use by the threat severity) and the intensity of the potential threat, which reflects
the integrative effect on the six societal spheres.
Results show that ICT and robotics play a major role by both criteria. This could be related to
the fact that these technologies are estimated to be realized in the near future and are more
familiar to many security experts. However other fields appear to be very significant.
Materials, biotechnology, nanotechnology and converging technologies appear in these
lists too, hinting on their future dangerous potential. It has to be underlined that the experts
were rather careful in their evaluations of the potential of abuse and also cautious with
respect to the intensity of the potential threat. For most technologies they did not choose the
highest values possible in the survey, but slightly higher than the mid values. Such lacks of
extreme views give reason to interpret the experts opinions as realistic and seriously
concerned.
An important dimension of the results’ interpretation is related to the probable time of threat
realization in different fields. ICT is the field with a significant amount of newly emerging
technologies that are seen as relevant for our study. In this field new technologies with
significant abuse potential are expected to be realized in short term, between now and 2020.
Threats stemming from nanotechnology and biotechnology are expected to materialise
much later, in some cases not before 2035.
The presented process could serve as basis for a kind of early warning system on
emerging threats, which could help coping with them in advance and thus to minimise
surprises. Raising awareness of policy makers to future threats might enhance
preparedness activities in various levels and reduce the likelihood of threats realization
on one hand and their impact on societal spheres on the other. It is our hope that the
knowledge base created in our project, with its adopted mix of methods, will expand by
further similar security foresight studies. Continuous assessment of many other new
technologies is needed, as well as additional future threat scenarios, involving a wider
community of experts from diverse fields.
Let the bright side of exciting new technologies win, after all.
Table V Prioritisation of top ten signals of change represented by emerging technologies
Priority By potential of abuse By threat intensity
1 Smartphone mash-ups Advanced AI
2 Internet of things Human enhancement
3 Cloud computing Swarm robotics
4 New gene transfer technologies Cyborg insects
5 Advanced AI Internet of things
6 Synthetic biology Water catalyzing explosive reactions
7 Cyborg insects Fuels and processes for nuclear technologies
8 Energetic nanomaterials AI-based robot-human interaction
9 RFID Cloud computing
10 Autonomous mini robots Programmable matter
PAGE 386
j
foresight
j
VOL. 15 NO. 5 2013
Notes
1. In 1913, before the nuclear chain reaction was fully understood, H.G. Wells wrote in his book The
World Set Free that ‘‘. ..a man could carry about in a handbag an amount of latent energy sufficient
to wreck half a city.’’
2. Foresight of Evolving Security Threats pOsed by emerging technologieS(project’s web site: www.
festos.org).
3. Information and communication technologies
4. The following explanation was provided: ‘‘Sufficiently mature’’: The technology was at least
demonstrated and validated outside the laboratory, through testing of prototypes. (This is similar to
TRL-5 or higher, on the ‘‘Technology Readiness Scale’’ used in many technology assessments)
5. It was explained in the questionnaire that ‘‘easy’’ means that the technology is easily
available/affordable/adaptable or ‘‘disruptable’’, and that ‘‘malicious’ ’ refers to terrorism and crime.
6. Expert: specialist knowledge in this area, through current/recent research work. Knowledgeable:
considerable knowledge in this area, through past work or current engagement in adjoining areas.
Familiar: knowledge based on reading relevant publications or listening to experts.
7. Nano-Bio-Info-Cogno
8. The Scenarios Workshop was moderated by the Futurist and Science Fiction writer Karlheinz
Steinmu
¨ller, who also subsequently wrote the narrative scenarios. Full versions can be found in
FESTOS report D4.3
References
Auffermann, B., Luoto, L. and with contribution from all partners (2011), Integrated Security Threats
Report, FESTOS Deliverable D3.3, available at: http://tinyurl.com/9auvja8
Brown, J. and Isaacs, D. (2005), The World Cafe: Shaping Our Futures Through Conversations That
Matter, Berrett-Koehler, San Francisco, CA.
Dienel, H.L. and Peperhove, R. (2011), Final Scenario and Indicators Report, FESTOS Deliverable D4.3.
Glenn, J.C. (2009) in Glenn, J.C. and Gordon, T.J. (Eds), The Futures Wheel, in Futures Research
Methodology – V3.0, The Millennium Project.
Hauptman, A., Raban, Y., Katz, O. and Sharan, Y. (2011), Final Report on Potentially Threatening
Technologies, FESTOS Deliverable D2.3, available at: http://tinyurl.com/8hlf4vl
Lee, B. and Preston, F. (2012), Preparing for High-impact, Low-probability Events – Lessons from
Eyjafjallajokull, A Chatham House Report.
Petersen, J.L. and Steinmu
¨ller, K. (2009) ‘‘Wild cards’’, in Glenn, J.C. and Gordon, T.J. (Eds), Futures
Research Methodology – V3.0, The Millennium Project.
Wells, H.G. ((1932) (1987)), ‘‘Wanted: professors of foresight!’’, Futures Research Quarterly, Vol. 3 No. 1,
pp. 89-91.
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 387
Table AI Technologies assessed in the FESTOS expert survey
Technology Potential threats (examples)
Nanotechnology
1. Molecular manufacturing
Assembling various products ‘‘bottom up’’, molecule by
molecule, possibly in small ‘‘nanofactories’’
Creation of new hazardous materials, or new types of weapons
2. Self-replicating nanoassemblers
Nanoassemblers, envisioned as tools for molecular
manufacturing, could self-replicate exponentially. Uncontrolled
‘‘runaway replication’’ has been described in speculative
scenarios of futuristic nanotechnology
Even if uncontrolled ‘‘runaway replication’’ is unlikely, one can’t
rule out an intentional design for malicious purposes
3. Medical nanorobots
Nanoscale robots inserted in the body are envisioned to conquer
diseases. A possible ‘‘shortcut’’ to bio-nanorobotics is to
engineer viruses and bacteria to create artificial bio-devices
Nanorobots instructed to harm humans and/or to remotely control
human actions
4. Tailored nanoparticles
Various nanoparticles designed for commercial products can be
hazardous to health. For example, iron oxide particles smaller
than 10 nanometers stunt the growth of nerve cells. Some
nanoparticles can cross the blood/brain barrier
Tailoring the properties to develop harmful nanoparticles
5. Energetic nanomaterials
Nanoscale additives can enhance chemical reactivity
(nano-sized aluminum is highly explosive). High energy density
could be achieved by molecular nanotechnology methods
New powerful propellants and explosives
6. Molecular sensors
Sensors with molecular precision will enable advanced
nano-diagnostics and will detect where a person has been by
sampling environmental clues on clothes
Such sensors could make people ‘‘molecularly naked’’, with their
personal information abused by criminals. From another angle,
this technology might enable ‘‘molecular camouflage’’
Biotechnology
1. Synthetic biology
In vitro building of biological agents from interchangeable
biological components. Using such ‘‘standard parts’ ’ will enable
‘‘programming’’ living organisms. This vision includes generating
a synthetic genome and then using it to control a recipient cell.
Synthetic biology goes beyond classic genetic engineering as it
attempts to engineer living systems to perform new functions not
found in nature
Accessible tools to build bioweapons, virulent pathogens,
dangerous organisms, etc
2. DNA-protein interaction
Using computer simulations, researchers envision a process
involving ‘DNA-binding proteins’ that bind to exactly the right
section of the DNA so they can carry out vital functions such as
copying genetic information and translating genes into templates
for protein production. This in one of possible ways to control
DNA expression
Construction of harmful biological materials/organisms
3. New gene transfer technologies
New devices and methods for transferring genes from one
organism to another
Could this trend evolve into ‘‘bio-hacking’’ culture?
4. Induced pluripotent stem cells
Scientists used viruses to flip genetic switches in the DNA of skin
cells from adult mice to turn them into iPS cells that are
functionally equivalent to embryonic stem cells
May enable genetically engineer traits into the cells before using
them to create ‘‘designer embryos’’, and revives concerns
regarding reproductive cloning by ‘‘rogue organisations’’. One
could speculate about camouflage at the cell level
5. Bio-mimicking for fluids mixing at extremely small scales
Speeding up biomedical reactions by filling reservoirs with tiny
rods that mimic cilia. Scientists created a prototype that mixes
tiny volumes of fluid or creates a current to move a particle: a
flexible structure with ‘‘fingers’’ 400 micrometers long that can
move liquids or biological components at the microscopic
scale
Preparation of toxic substances that need very small scale mixing
and are harmful in micro quantities
(Continued)
PAGE 388
j
foresight
j
VOL. 15 NO. 5 2013
Table AI
Technology Potential threats (examples)
New materials
1. Metamaterials and optical cloaking
Engineered metamaterials with negative light refraction index
could enable optical ‘‘cloaking’’, and creation of ’super-lenses’.
Cloaking devices made out of metamaterials can hide objects
from sight in certain wavelengths, or make them appear as other
objects
‘‘Invisibility cloaking’’ or perfect camouflage for malicious
purposes
2. Water catalysing explosive reactions
Researchers showed that water in hot and dense environments
plays an unexpected role in catalysing complex explosive
reactions
Preparation of powerful explosives by adding water to
appropriate solid materials, without necessity to mix fluids
3. Programmable matter
Materials that can be programmed to self-assemble, alter their
shape and physical properties to perform a desired function, and
then disassemble - in response to user input or autonomous
sensing
Malicious use of reconfigurable tools with perfect performance,
including weapons (that can pass security checks), readily
adaptable to changing conditions and requirements
4. Personal rapid prototyping and 3-D printers
3D printers can construct products after downloading a detailed
description. Researchers envision low-cost printers able to
self-copy and to use a wide variety of materials
‘‘Home-made’’ (undetectable?) weapons, or cheap
manufacturing of fake products
5. Future fuels, processes and structural materials for nuclear
technologies
Materials and processes for safe and efficient reactors,
e.g. organic superconductors, special magnetoresistive
materials, radiation-induced segregation, Uranium sili-cide fuels
which require only low enrichment, nanocrystalline diamond
films, etc. These enable to determine the mechanisms of
irradiation-induced swelling, predict the behaviour of fuels in
reactor cores, develop inelastic neutron scattering techniques,
determine properties of transuranium compounds, etc
Could such techniques make it easier for terrorists to built fissile
or ‘‘dirty’’ nuclear bombs?
6. Crystalline polymers, polymer blends, multilayer assemblies
Controlling structure– property relationships to improve the
performance of polymeric materials involves blending with
suitable functional constituents. Specific processes are attractive
for gas separation and special gas permeability and selectivity
characteristics are obtained by combining appropriate materials
and new processes. Applications range from long term food
preservation to specific mechanical endurance of multilayer foils
(e.g. for bullet-proof vests)
Since such polymers are of military interest due to reduction of
gas permeability, they could be attractive to terrorist groups that
want to use gas resistive coatings
Converging technologies (NBIC: Nano-Bio-Info-Cogno)
1. Nanotechnology-enabled brain implants
Neural interface devices implantable in the central nervous
system to treat motor disorders, or to control prosthetic limbs or
external devices. Future brain implants could enhance brain
functions of healthy persons. According to one report, by 2035
chips wired directly to the user’s brain would make information be
accessible through cognition and might include synthetic
sensory perception beamed direct to the user’s senses
Thought/behaviour control of people, or equipping future
perpetrators with ‘‘super mental power’’?
2. Brain-to-brain communication
Advances in brain-computer interface (BCI) may lead to
‘‘radiotelepathy’’ enabled by direct conversion of neural signals
into radio signals and vice versa. In a US army-funded project,
scientists ‘‘study the neuroscientific and signal-processing
foundations of synthetic telepathy’’
Thought control of people behaviour and actions
3. Cyborg insects
Insects controlled through implanted electrical stimulators.
Researchers envision living communication networks (for
sensing, surveillance, etc) by implanting electronics in insects.
Advanced capabilities could be offered by micro/nano
technologies
Could be used by perpetrators for harming people or agriculture,
for spying and other malicious activities
(Continued)
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 389
Table AI
Technology Potential threats (examples)
4. Brain-computer interface ‘‘mind reading’’ commercial
gadgets
First systems that enable people with disabilities to operate
Internet services or gadgets ‘‘by thought’ ’ have been
demonstrated, as well as systems for the gaming industry, and
further progress is envisioned
Malicious distortion in the double-way communication between
users and gadgets. Could hacking such a device enable
influencing the user’s actions?
5. Human enhancement based on NBIC convergence
Experts believe that NBIC technologies offer unprecedented
enhancement of human performance: augmentation of physical
and mental abilities and new modes of interaction. Some envision
that human and machine intelligence will converge in the coming
decades
Malicious use of specific capabilities by perpetrators, and
hacking of the involved (implanted) devices
Information and communications technologies
1. Internet of things (IoT) and ambient intelligence (AmI)
A network of everyday objects and various sensors, controllable
via the internet. IoT is related to the vision of AmI: people
surrounded by interconnected devices embedded everywhere
and accessed via intuitive interfaces. Such intelligent
environment is expected to seamlessly respond to the presence
and needs of individuals
New opportunities for hacking, identity theft, disruption, and other
malicious activities
2. Radio-frequency identification (RFID) and ‘‘RFID-dust’’
RFID tags allow automatic identification and localisation of
objects or persons. Their miniaturization will enable embedding
‘‘RFID dust’’ everywhere. RFIDs could be a component in the
‘‘internet of things’
Criminals could scan cars, houses or people to locate valuable
goods and select potential victims, or to collect information to
identify categories of people in order to target them. Perpetrators
could steal smart card details, or harm people by interfering with
medical implants
3. Smartphone technologies mash-ups
As various capabilities are ‘‘mashed up’ ’ (brought together in new
combinations and in tandem with new Internet services),
including GPS, face recognition and new ‘‘Augmented Reality’’
(AR), they turn the cellphone into an extremely versatile
surveillance device
Enabling Open Source Intelligence (OSINT) to be carried out
discretely and without any special equipment. A smartphone with
such a rich combination of features could be even more useful for
planning and executing terrorist or criminal actions
4. Cloud computing
Provision of dynamically scalable and often virtualized resources
as a service over the Internet. Providers deliver business
applications online which are accessed from a web browser,
while the software and data are stored on servers
As businesses and individuals are handing storage and other
tasks to outside providers, new opportunities arise for hacking
and cyber-attacks
5. Ultra-dense data storage
Nanotechnology-enabled data storage of 105 terabit per cm3 is
foreseen in the not-so-far future. Together with mobile Internet
connectivity, anybody will be able to have access to any
information
As data storage is getting increasingly miniaturized and dense,
huge amounts of information can be easily transported, stolen
and transferred. This may offer new opportunities for misuse,
including large computer programs to simulate/plan attacks, etc
6. Advanced artificial intelligence
Artificial general intelligence (AGI) is an emerging field aiming at
the building general-purpose systems with intelligence
comparable to the human mind. Related term is ‘‘Strong AI’’.
According to some extrapolations, desktop computers may have
the processing power of human brains by 2029, and by 2045 AI
could be able to ‘‘improve itself’ ’ at a rate that far exceeds
anything conceivable in the past
New opportunities for malicious use by criminals or terrorists
e.g. by interpreting facial expressions and human intentions,
design of ‘‘smart’’ malware for cyber-attacks, or enabling
malicious use of autonomous robots. More speculatively, could it
lead to machines that have intentions themselves?
Robotics
()1. AI-based Robot-Human Interaction and Co-existence
Japan and South Korea are preparing for the ‘‘human-robot
coexistence society,’’ foreseen to emerge before 2030. A
striking feature of this development are ‘‘social robots’’ with AI,
with which people will have emotional and even intimate
interactions
New opportunities for malicious use of robots that have close
intimate interactions with people who trust them?
(Continued)
PAGE 390
j
foresight
j
VOL. 15 NO. 5 2013
About the authors
Aharon Hauptman is Senior Researcher at ICTAF and leader of the technology assessment
workpackage in project FESTOS. Aharon Hauptman can be contacted at:
haupt@post.tau.ac.il
Yair Sharan is Director of ICTAF and Coordinator of FESTOS.
Table AI
Technology Potential threats (examples)
2. Autonomous and semi-autonomous mini robots: toys and
amateur objects
Progress in robotics, combined with micro/nano technologies
enables small robots with relatively high level of autonomy,
developed for industrial and medical applications as well as for
the toys industry
Such ‘‘toys’’ could covertly enter offices or houses, eavesdrop,
disrupt or destroy, or injure people in a similar way to an insect –
maybe using an insect-like body
3. Robotic artificial limbs
Various applications of robots could be implemented in human
bodies, e.g advanced prosthetic hands
Enhancing skills of a person by implementing robot arms or legs,
for malicious purposes by perpetrators
4. Ethical control of robots
The adoption of autonomous robots, in particular in military
environments, leads to new problems of control: Autonomous
decisions have to be based on ethical considerations. Ethical
controls becomes a new field in computer science. The
application of autonomous systems in civilian environments will
require such ethical control systems in new areas
One could envision malfunctioning systems that make disastrous
decisions or malevolent persons that reconfigure such systems
for terrorist or criminal use
5. Swarm robotics
Coordination of large numbers of robots, inspired mainly by the
observation of insects. Large number of simple individuals can
interact to create collectively intelligent systems. Researchers
envision that tiny (millimeters size) robots could be
mass-produced in swarms and programmed for a variety of
applications, such as surveillance, micro-manufacturing,
medicine, cleaning, and more
The self adaptation and self reprogramming could be employed
for malicious behavior of the swarm. The ability to easily
mass-produce tiny robots for swarms makes the threat even more
concrete
VOL. 15 NO. 5 2013
j
foresight
j
PAGE 391
To purchase reprints of this article please e-mail: reprints@emeraldinsight.com
Or visit our web site for further details: www.emeraldinsight.com/reprints
... Three studies (Hauptman and Sharan, 2013;Fears and ter Meulen, 2018;Kirkpatrick et al., 2018) identified illegal gene editing in the context of dangerous pathogens (e.g., viruses). Hauptman et al. refer to the increasingly accessible genetic engineering tools that can be used to make more virulent pathogens. ...
... Such databases now also include millions of consumers' genetic information from commercialized DNA testing kits (e.g., Ancestry DNA and 23AndMe). Data breaches and vulnerabilities in internet protocols-that make bioinformatics tools, shared databases, and cloud computing of genomic data insecurewere identified in four studies (Hauptman and Sharan, 2013;Backes et al., 2016;Wintle et al., 2017;Qu, 2019) (Table 2 Figure 5). Qu's literature review raised issues around "Genetic blackmail" or the act of coercion using the threat of revealing an individual's genetic information unless certain demands are met. ...
... The lack of basic cyber hygiene ( Table 2, Supplementary Table 2) and ease of access to systems generate the propensity for "The Genetic Blackmailers, " identified in a scenario-building exercise (Hauptman and Sharan, 2013) by Hauptman et al. Described as a "wild-card" scenario (lowprobability, high-impact), an individual would misuse DNA information for extortion. ...
Article
Full-text available
Synthetic biology has the potential to positively transform society in many application areas, including medicine. In common with all revolutionary new technologies, synthetic biology can also enable crime. Like cybercrime, that emerged following the advent of the internet, biocrime can have a significant effect on society, but may also impact on peoples' health. For example, the scale of harm caused by the SARS-CoV-2 pandemic illustrates the potential impact of future biocrime and highlights the need for prevention strategies. Systematic evidence quantifying the crime opportunities posed by synthetic biology has to date been very limited. Here, we systematically reviewed forms of crime that could be facilitated by synthetic biology with a view to informing their prevention. A total of 794 articles from four databases were extracted and a three-step screening phase resulted in 15 studies that met our threshold criterion for thematic synthesis. Within those studies, 13 exploits were identified. Of these, 46% were dependent on technologies characteristic of synthetic biology. Eight potential crime types emerged from the studies: bio-discrimination, cyber-biocrime, bio-malware, biohacking, at-home drug manufacturing, illegal gene editing, genetic blackmail, and neuro-hacking. 14 offender types were identified. For the most commonly identified offenders (>3 mentions) 40% were outsider threats. These observations suggest that synthetic biology presents substantial new offending opportunities. Moreover, that more effective engagement, such as ethical hacking, is needed now to prevent a crime harvest from developing in the future. A framework to address the synthetic biology crime landscape is proposed.
... The review of recent academic literature on technological change and innovation as security threats revealed that the methods mostly used are foresight and expert surveys [e.g. [1][2][3][4][5]. The current state of development of science and technology, as well as the complexity and uncertainty surrounding the development require interdisciplinarity and imagination of the highest possible number of experts. ...
... Recently, these issues were discussed in the World Economic Forum's (WEF) Global Risks Reports [4,5] and in the publications of the EU-funded project called Foresight of Evolving Security Threats Posed by Emerging Technologies (FESTOS) [e.g. [1][2][3]. ...
... The potential threats are suggested for example in [1]. Fig. 3. Prioritisation of top ten signals of change represented by emerging technologies [3]. ...
Article
Full-text available
Technological change and innovation, together with the related development of science, have been perceived as drivers of social and economic progress and public optimism in the globalizing world. Indeed, in the past centuries and especially decades, there has been a huge advancement of humankind that can be both felt and measured. However, people have also learned that science and technology can be misused or abused, or they can have unintended consequences (cf. nuclear fission). Especially in times when the public feels that the change is fast and unprecedented, they also provoke fear and resentment. Science, technological change, and innovation can be presented and perceived as security threats, i.e. securitized. It seems that, now, we are living in one of such historical periods. The goal of the paper is to analyse if and how technological change and innovation are presented or perceived as security threats, especially in the Czech political and public discourse. To reach the goal, we can ask the following research questions: Are science, technological change, and innovation securitized? What are the concrete examples of emerging technologies and innovations that are securitized? (e.g. artificial intelligence and robotics, biotechnologies) Is the narrative present in the Czech political and public discourse? Is the securitization process successful? What are the lessons learned and recommendations for policy?
... Conducting foresight in these domains is highly relevant to governments, military institutions, and a range of industry, academic and civil society groups given the rapid speed at which developments are taking place. There are a growing number of ways in which synthetic biology and biotechnology may pave the way to novel and high consequence risks while simultaneously offering new opportunities to address them (Hauptman and Sharan 2013). Foresight processes can help to avoid technological surprise and unexpected societal impacts, in part through identifying possible security threats before they emerge. ...
... In one 2013 foresight study on emerging technologies, new gene transfer technologies and synthetically engineered biological agents ranked amongst the top ten risks, when prioritised by threat intensity and potential for misuse (Hauptman and Sharan 2013). The authors highlighted that security policy can be informed by adopting a long-range perspective where awareness would enable mitigation of threats that might otherwise be unaddressed. ...
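As a rough illustration of what such a prioritisation could look like in practice, here is a minimal Python sketch that ranks technologies by a composite of expert-survey scores. The scoring scales, the simple product formula, and the third example technology are assumptions made for illustration only, not the study's actual model or data.

```python
# Illustrative ranking of emerging technologies by a composite threat score.
# Scores (1-5 expert-survey scales) and the product formula are assumed for
# illustration; "autonomous drone swarms" is a hypothetical third entry.
technologies = {
    "new gene transfer technologies":             {"likelihood": 4, "ease_of_misuse": 3, "severity": 5},
    "synthetically engineered biological agents": {"likelihood": 3, "ease_of_misuse": 2, "severity": 5},
    "autonomous drone swarms":                    {"likelihood": 4, "ease_of_misuse": 4, "severity": 3},
}

def threat_score(scores):
    # One plausible composite: likelihood x ease of misuse x severity
    return scores["likelihood"] * scores["ease_of_misuse"] * scores["severity"]

ranked = sorted(technologies.items(), key=lambda kv: threat_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{threat_score(scores):>3}  {name}")
```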
Book
Full-text available
Synthetic biology is a field of biotechnology that is rapidly growing in various applications, such as medicine, environmental sustainability, and energy production. However, these technologies also have unforeseen risks and applications for humans and the environment. This open access book presents discussions on risks and mitigation strategies for these technologies, including biosecurity: the potential for synthetic biology technologies and processes to be deliberately misused for nefarious purposes. The book presents strategies to prevent, mitigate, and recover from ‘dual-use concern’ biosecurity challenges that may be raised by individuals, rogue states, or non-state actors. Several key topics are explored, including opportunities to develop more coherent and scalable approaches to govern biosecurity from the laboratory up to the international scale, and strategies to prevent potential health and environmental hazards posed by deliberate misuse of synthetic biology without stifling innovation. The book brings together the expertise of top scholars in synthetic biology and biotechnology risk assessment, management, and communication to discuss potential biosecurity governance strategies and offer perspectives for collaboration in oversight and future regulatory guidance.
Chapter
Full-text available
Rapid developments in the fields of synthetic biology and biotechnology have caused shifts in the biological risk landscape and are key drivers of future threats. From a security perspective, extending our understanding beyond current risks to include emerging threats in these and related fields can play a vital role in informing risk mitigation activities. The insights that are generated can be combined with other efforts to identify vulnerabilities and prevent undesirable outcomes. Emerging risks that may occur at some point in the future are inherently difficult to assess, requiring a systematic approach to examining potential threats. Foresight is a process for considering possible future scenarios. Comprising a range of methods and techniques, foresight processes can offer novel insights into emerging synthetic biology and biotechnology threats. This chapter offers an introduction to foresight, including definitions of key terms that could support a shared lexicon across NATO partners. An overview of different foresight methodologies, their potential applications, and their strengths and limitations is presented. As a key first step, an approach to selecting appropriate questions to guide foresight activities is suggested, and example questions for synthetic biology and biotechnology are highlighted. At the end of the chapter, the authors offer recommendations for the design of a foresight process, with the intention of providing a usable resource for NATO partners investigating emerging synthetic biology and biotechnology threats.
... One conceivable example is the previously mentioned blackmail using highly personal genetic information, where it carries correspondingly coercive implications for those affected (Qu, 2018). More creative and specialised scenarios for the misuse of genetic information for criminal purposes have also been discussed in the literature, although they are judged to have a low probability of being realised: Hauptman and Sharan, for example, mention the deliberate manipulation of criminal proceedings through tampered DNA samples, or paternity suits against the wealthy and influential using forged (synthetic) DNA samples (Hauptman & Sharan, 2013). ...
Chapter
The growing entanglement of information technologies and biotechnologies is giving rise to a new technological field within society that will also affect crime. Against the background of the relationship between crime and technology, this contribution introduces the concept of cyberbiocrime into the (German-language) discourse and outlines its contours. In addition, initial considerations of its aetiological causes are presented, and cyberbiosecurity is introduced as a practical prevention concept. Overall, the aim is to make the phenomenon criminologically tangible.
... Awareness of data protection for both current and potential employees is important. Research has also highlighted the importance of cyber security with regard to the use of emerging technologies (Raban and Hauptman, 2018; Hauptman and Sharan, 2013). ROI is the second most important criterion, reflecting firms' expectations of AI adoption in hiring and of its outcomes, which are also linked with ROI. ...
Article
Purpose – In the context of the new workplace environment, this study aims to generate insights about artificial intelligence (AI) adoption in firms' hiring processes. This is highly relevant as AI is dramatically reshaping the hiring function in the changing scenario. Design/methodology/approach – The objectives are achieved with the help of three studies: a Delphi method to explore the criteria for the AI adoption decision, followed by two multi-criteria decision-making techniques, i.e. the analytic hierarchy process (AHP) to identify the weights of the criteria and the fuzzy technique for order preference by similarity to ideal solution (TOPSIS) to assess the extent of AI adoption in hiring. Findings – The findings reveal that information security and return on investment (ROI) are considered two very important criteria by human resources managers when contemplating the adoption of AI in the hiring process. AI adoption was found to be most suitable at the sourcing and initial screening stages of hiring, and the hiring stages where AI can be applied were found to have changed between before and after the onset of the COVID-19 pandemic. The findings and their discussion support better decisions about AI adoption in firms' hiring processes amid a changing scenario, both external and internal to a firm. Research limitations/implications – The findings also highlight implications for future research in this emerging area. Practical implications – The results act as a starting point for other human resources managers who are still pondering the idea of adopting AI in hiring. Originality/value – Through a systematic approach, this paper contributes by identifying important evaluation criteria influencing AI adoption in firms and the extent of its application across the stages of hiring. It makes a substantial contribution to the under-developed yet emerging paradigm of AI-based hiring in practice and research.
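For readers unfamiliar with the two multi-criteria techniques named above, the following is a hedged Python sketch of the general approach: AHP weights derived from a pairwise comparison matrix, and a crisp (non-fuzzy) TOPSIS ranking of hiring stages. The criteria, matrix entries, and stage scores are invented placeholders, not the study's data, and the study itself uses a fuzzy TOPSIS variant.

```python
# Sketch of AHP criteria weighting followed by a crisp TOPSIS ranking.
# All numbers are illustrative placeholders.
import numpy as np

criteria = ["information security", "ROI", "accuracy", "cost"]

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale), row i vs column j
A = np.array([
    [1,   2,   3,   5],
    [1/2, 1,   2,   4],
    [1/3, 1/2, 1,   2],
    [1/5, 1/4, 1/2, 1],
], dtype=float)

# Approximate AHP priority vector: average of the column-normalised matrix
weights = (A / A.sum(axis=0)).mean(axis=1)

# Hypothetical performance of hiring stages on each criterion
# (all treated as benefit criteria for simplicity)
stages = ["sourcing", "initial screening", "interviewing", "final selection"]
X = np.array([
    [8, 7, 6, 7],
    [7, 8, 7, 6],
    [5, 6, 5, 4],
    [4, 5, 4, 3],
], dtype=float)

# TOPSIS: vector-normalise, weight, then measure distances to the ideal
# and anti-ideal solutions; rank by relative closeness to the ideal.
R = X / np.linalg.norm(X, axis=0)
V = R * weights
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

for stage, c in sorted(zip(stages, closeness), key=lambda t: -t[1]):
    print(f"{c:.3f}  {stage}")
```

The column-average approximation of the AHP eigenvector and the vector normalisation in TOPSIS are standard simplifications; a full treatment would also check the consistency ratio of the pairwise matrix and fuzzify the stage scores.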
... Examples of this include nanotechnology, which became the centre of significant debate and controversy as the technologies developed and could be realised (Macnaghten, 2010), or more recent discussions over blockchain technologies, whose value is still unconfirmed despite both media and academic speculation as to their future uses. In the early, formative stages of an emerging technology, research and discussion centre on its positive potential rather than the negatives, as noted by Cordeiro et al. (2013). When a technology matures and emphasis is placed on its innovation and dissemination, attention to possible security threats and opportunities increases, which may also feature in European Union (EU) funded research (Csernatoni, 2019). ...
... In fact, [28] adds that implementing AI in forecasting adds to existing threats, introduces new ones, and changes the character of threats. Moreover, AI-based forecasting is an emerging technology, and such technologies are regarded as prone to cyberthreats irrespective of how beneficial they are [29]. It is therefore justifiable to conclude that AI-powered forecasting is prone to manipulation through cybercrime, which supports this hypothesis. ...
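As a toy illustration of the manipulation risk described above, the sketch below shows how poisoning the recent history used to train a simple trend-based forecaster shifts its prediction. The series, model, and attack are deliberately simplistic assumptions, not drawn from the cited works.

```python
# Toy illustration: poisoning a forecasting model's training data shifts its output.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(48)                                   # 48 historical periods
demand = 100 + 2.0 * t + rng.normal(0, 3, t.size)   # clean demand series

def linear_forecast(y, horizon):
    # Ordinary least squares trend fit, extrapolated `horizon` steps ahead
    x = np.arange(y.size)
    slope, intercept = np.polyfit(x, y, 1)
    return intercept + slope * (y.size - 1 + horizon)

poisoned = demand.copy()
poisoned[-6:] += 80                                 # attacker inflates the last 6 records

print("clean forecast   :", round(linear_forecast(demand, 4), 1))
print("poisoned forecast:", round(linear_forecast(poisoned, 4), 1))
```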
Article
When I was a student, half a century ago, we used to talk of the abolition of distance, because of those then comparatively recent triumphs, the telegraph, the steamship and the railway train. Some of us knew already of the possibility of radio, but nobody believed we should live to take a ticket and fly around the world. The swiftest thing upon the road was a bicycle, and television seemed a fantastic impossibility. All my life I have seen that abolition of distance becoming more and more complete. Much of what you have heard as matter of fact tonight would have seemed fantastic when I was already a young man. And even now I am not very old. In a little while all round the earth will be a few days' journey, and everybody will be potentially within sight and sound of everybody all over the planet. There will be no more distance left and little separation. You will be able to see and talk to your friends anywhere in the world as easily and surely as you send a telegram today. So plainly are things driving in that direction, that it would be childish to argue about this or elaborate it. Before another half century has passed everybody, so to speak, will be on call next door. You cannot doubt it.