Ethics and Information Technology (2023) 25:14
https://doi.org/10.1007/s10676-023-09683-0
ORIGINAL PAPER
The irresponsibility of not using AI in the military
H. W. Meerveld1 · R. H. A. Lindelauf1 · E. O. Postma2 · M. Postma2
Accepted: 27 January 2023 / Published online: 14 February 2023
© The Author(s) 2023
Abstract
The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by
the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a
considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the
military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks
of the use of AI in target engagement, the debate about responsible AI should (i) concern each step in the MDMP, and (ii)
take ethical considerations and enhanced performance in military operations into account. A characterization of the debate
on responsible AI in the military, considering both machine and human weaknesses and strengths, is provided in this paper.
We present inroads into the improvement of the MDMP, and thus military operations, through the use of AI for decision
support, taking each quadrant of this characterization into account.
Keywords Decision-making · Military decision-making process · Responsible AI · Intelligence cycle
Introduction
While in many private and public sector domains AI solu-
tions are becoming an essential tool driving change and
development, progress in the use of AI for military pur-
poses has been hindered by a number of important ethi-
cal questions for which answers have been lacking. These
questions primarily concern autonomous military plat-
forms, which typically center on the use of lethal autono-
mous weapon systems (LAWS)1 and the potential risk of
nuclear escalation.2 A recent literature review on data sci-
ence and AI in military decision-making found that most
of the studies examining these topics originate in social
sciences. As a result, the debate about the use of AI for
military purposes, although of high strategic importance,
appears to be limited in terms of its scope and perspec-
tive. Additionally, the use of data science at the operational and strategic levels seems to be largely under-examined in the current literature (Meerveld & Lindelauf, 2022). In this
paper, we argue that the ethical discussion on the use of
AI in military operations should shift its focus from so-called ‘killer robots’ and the concept of fully autonomous AI applications to solutions that remain subject to (meaningful) human control. As argued by various researchers [e.g., Tóth et al. (2022)], the use of Lethal Autonomous
Weapon Systems (LAWS) is generally considered to be
illegal and immoral, despite potentially decreasing risks
to military personnel. There is also consensus among
policy makers that AI cannot fully replace human deci-
sion-making. However, it is necessary to examine both the
opportunities and risks of military AI in a broader con-
text and to explore how AI technology can be controlled,
supervised and potentially assimilated into force structure
and doctrine (Johnson, 2020a, b), either strengthening or
complicating deterrence (Johnson, 2019, 2020a, b). In line
with the consequentialist approach towards the ethics of
military AI, we argue that in discussing the responsibility
of AI-based decision support techniques, military effec-
tiveness and the entire decision-making chain in military
operations should be taken into account.
* Corresponding author: R. H. A. Lindelauf
rha.lindelauf.01@mindef.nl; roy_lindelauf@hotmail.com
1 Faculty of Military Sciences, Data Science Center of Excellence, Netherlands Defence Academy, Breda, The Netherlands
2 Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, The Netherlands
1 See for example Sharkey (2010), Roff (2014), Roff and Danks (2018), and Tóth et al. (2022).
2 See for example Altmann and Sauer (2017), Johnson (2019, 2020a, 2020b), and Horowitz et al. (2019).
For example, certain types of military AI robots subjected to human
control and judgment may be permissible for self-defense
purposes, human-AI teaming could lead to faster and more
appropriate decision-making under pressure and uncer-
tainty, and AI systems could be broadly used for adaptive
training of military personnel, thereby helping to miti-
gate decision-making biases [e.g., by means of detecting
drowsiness or fatigue from neurometric signals in the brain (Weelden et al., 2022)]. In Fig. 1 we visualize the current
debate on responsible AI in a military context and its focal
points (i.e., the lower right quadrant, Machine Weakness
(MW) and the endpoint of the MDMP). In what follows,
we first elaborate on the military decision-making process
(MDMP) that in large part precedes lethal target engage-
ment on a battlefield. Next, we present some examples of the potential use of AI solutions in the MDMP, together with their benefits, and address the issue of the (ir)responsibility of military AI.
AI in support of the military decision-making process (MDMP)
Military decision-making consists of an iterative logical
planning method to select the best course of action for a
given battlefield situation. It can be conducted at levels
ranging from tactical to strategic. Each step in this process
lends itself to automation. This holds not only for the MDMP but also for related processes such as the intelligence cycle and the targeting cycle.

Fig. 1 Characterization of the debate on responsible AI in a military context. The red dashed lines indicate the focus of current literature, while the ideal scope of the debate is represented by the blue dashed lines.

As argued in Ekelhof (2018),
instead of focusing on the target engagement as an endpoint,
the process should be examined in its entirety. To illustrate
this point, we visualized the preferred scope with the blue
circle in Fig. 1. Below, we first briefly describe the MDMP.
Subsequently, we explore the potential advantages of AI in
decision-making and provide some examples of how AI can
specifically support the MDMP at several different (sub-)
steps.
The MDMP and its challenges
The US Army defines seven steps in the MDMP: (1) receipt of mission, (2) mission analysis, (3) course of action (COA) development, (4) COA analysis, (5) COA comparison, (6) COA approval, and (7) orders production, dissemination, and transition (Reese, 2015). The level of detail of the MDMP depends on the available time and resources, as well
MDMP depends on the available time and resources, as well
as other factors. Each step in the MDMP has numerous sub-
steps that generate intermediate products. Examples include
intelligence products developed during the intelligence
preparation of the battlefield (IPB) that are used to indicate
COAs and decision points for commanders or geospatial
products from terrain analyses that can include recommen-
dations on battle positions and optimal avenues of approach.
The intelligence cycle, per NATO standard consisting of four steps (Direct, Collect, Process, and Disseminate) (Davies & Gustafson, 2013), is a separate but related sub-process by which these intelligence products are created. Other
examples of sub-processes in the MDMP are the targeting cycle, as explained by Ekelhof (2018), and the continuous lessons-learned process, which incorporates best practices and lessons learned into military doctrine (Weber & Aha, 2003) and ultimately provides important input to, for example, the COA development phase.
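To make the kind of decision support at issue here concrete, the sketch below shows one simple way a support tool could rank candidate COAs in the COA comparison step as a weighted-criteria score. This is a minimal illustration rather than a description of any fielded system or of doctrine; the criteria, weights, and scores are hypothetical placeholders.

```python
# Minimal sketch of weighted-criteria COA comparison (illustrative only).
# The criteria, weights, and per-COA scores are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    scores: dict[str, float]  # criterion -> staff estimate on a 0-10 scale

# Commander-supplied weights expressing the relative importance of each criterion.
WEIGHTS = {"speed": 0.4, "force_protection": 0.3, "simplicity": 0.2, "logistics": 0.1}

def weighted_score(coa: CourseOfAction, weights: dict[str, float]) -> float:
    """Aggregate per-criterion scores into a single comparison value."""
    return sum(weights[c] * coa.scores.get(c, 0.0) for c in weights)

def rank_coas(coas: list[CourseOfAction], weights: dict[str, float]) -> list[tuple[str, float]]:
    """Return COAs sorted from highest to lowest weighted score."""
    ranked = [(coa.name, weighted_score(coa, weights)) for coa in coas]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    candidates = [
        CourseOfAction("COA A (envelopment)", {"speed": 7, "force_protection": 5, "simplicity": 4, "logistics": 6}),
        CourseOfAction("COA B (frontal attack)", {"speed": 8, "force_protection": 3, "simplicity": 8, "logistics": 7}),
        CourseOfAction("COA C (infiltration)", {"speed": 5, "force_protection": 8, "simplicity": 5, "logistics": 5}),
    ]
    for name, score in rank_coas(candidates, WEIGHTS):
        print(f"{name}: {score:.2f}")
```

Such a score would support, not replace, the staff's comparison: the weights themselves encode command judgment and would need to be set and scrutinized by the commander and staff.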
related processes entail many labor-intensive, handcrafted
products. This has two important consequences. First, due
to the complexity of the information space, the MDMP is
hugely susceptible to cognitive biases. These can be both
conscious and unconscious and may result in suboptimal
performance. An example of a cognitive bias is groupthink, which is a problem typically encountered during the analysis and assessment phase of the intelligence cycle (Parker, 2020). Another example is the anchoring bias, which arises when decisions are made based on initial evidence (the anchor) (Heuer, 1999), as exemplified in a scenario where a group of aviators
need to determine the optimal location of battle positions
after having received an initial list of good locations during
helicopter mission planning. Even though intuitive decision-
making in the MDMP may be effective, it is well known that
both intuition and uncertainty can lead to faulty and erro-
neous decision outcomes (Van Den Bosch & Bronkhorst,
2018). Because our human cognitive mechanisms are ill-
equipped to convert information from a high volume of data
into valuable knowledge (Cotton, 2005), the susceptibility
to cognitive biases increases with the exponential growth of
data volume (Heuer, 1999). It is expected that the challenge
of information overload will only increase, since modern
military operations increasingly rely on open-source data
(Ekelhof, 2018). Second, labor-intensive processes tend to
be time-consuming. The contemporary digitized environ-
ment results in a proliferation of various data sources in
different formats (i.e., numerical, text, sound, and image)
and intelligence requires their fusion and interpretation (Van
Den Bosch & Bronkhorst, 2018). In most military situations,
it is of high importance to design efficient and streamlined
planning processes, avoiding labor-intensive sub-steps, when
possible, to ensure that no time is lost (Hanska, 2020). After all, the aim is to outpace the opponent’s OODA loop (i.e., Observe, Orient, Decide, Act) (Osinga, 2007), and AI-based automation can be an important driver of such efficiency gains. In addition, time pressure can further increase the chance of cognitive bias (e.g., Roskes et al., 2011; Eidelman & Crandall, 2012). In sum, human decision-
making mechanisms appear to be deficient in many military
circumstances given a limited capacity to process all poten-
tially relevant data and a limited amount of time. The value
of AI is found in the capacity to support human decision-
making, which optimizes the overall outcome (Lindelauf et al., 2022). In the next section, we address the opportuni-
ties offered by AI in more detail by presenting examples of
automation of (sub-) elements in the MDMP.
The added value of AI for military decision-making
Given the limitations of human decision-making, the advantage of (partial) automation with AI can be found both in the temporal dimension and in decision quality. A NATO Research Task Group, for instance, examined the need for
automation in every step of the intelligence cycle (NATO
Science & Technology Organization, 2020) and found that
AI helps to automate manual tasks, identify patterns in com-
plex datasets and accelerate the decision-making process in
general. Since the collection of more information and per-
spectives results in less biased intelligence products (Richey,
2015), using computer power to increase the amount of data
that can be processed and analyzed may reduce cognitive
bias. Confirmation bias, for instance, can be avoided through the automated analysis of competing hypotheses (Dhami et al., 2019), as sketched below. Other advantages of machines over humans
are that they allow for scalable simulations, conduct logical reasoning, have transferable knowledge, and have expandable memory space (Suresh & Guttag, 2021; Silver et al., 2016).
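As a hedged illustration of the automated analysis of competing hypotheses mentioned above, the following sketch scores hypotheses by how many pieces of evidence contradict them, reflecting the core ACH idea of retaining the least-inconsistent hypothesis (Heuer, 1999; Dhami et al., 2019). The hypotheses, evidence items, and consistency ratings are invented for illustration; a real system would derive them from collected intelligence.

```python
# Minimal sketch of an automated analysis of competing hypotheses (ACH).
# The evidence items, hypotheses, and ratings below are hypothetical;
# a real system would derive them from collected intelligence.

# Consistency ratings: +1 evidence supports hypothesis, 0 neutral, -1 contradicts.
EVIDENCE_MATRIX = {
    "enemy masses armor near border": {"H1: imminent offensive": +1, "H2: defensive posture": -1, "H3: exercise": 0},
    "bridging equipment moved forward": {"H1: imminent offensive": +1, "H2: defensive posture": -1, "H3: exercise": -1},
    "no increase in logistics traffic": {"H1: imminent offensive": -1, "H2: defensive posture": +1, "H3: exercise": +1},
}

def inconsistency_scores(matrix: dict[str, dict[str, int]]) -> dict[str, int]:
    """Count how many pieces of evidence contradict each hypothesis.
    In ACH, the hypothesis with the least contradicting evidence is retained."""
    scores: dict[str, int] = {}
    for ratings in matrix.values():
        for hypothesis, rating in ratings.items():
            scores[hypothesis] = scores.get(hypothesis, 0) + (1 if rating < 0 else 0)
    return scores

if __name__ == "__main__":
    ranked = sorted(inconsistency_scores(EVIDENCE_MATRIX).items(), key=lambda kv: kv[1])
    for hypothesis, contradictions in ranked:
        print(f"{hypothesis}: contradicted by {contradictions} piece(s) of evidence")
```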
An important aspect of the current debate about the use of AI for decision-making concerns the potential dangers of providing AI systems with too much autonomy, leading to unforeseen consequences. A part of the solution is to
provide sufficient information to the leadership about how
the AI systems have been designed, what their decisions
are based on (explainability), which tasks are suitable for
automation and how to deal with technical errors (Lever &
Schneider, 2021). Tasks not suitable for automation, e.g.,
those in which humans outperform machines, are typically
tasks of high complexity (Blair et al., 2021). The debate on
responsible AI should therefore also take human strengths
(HS quadrant) into account. In practice, AI systems cannot
work in isolation but need to team up with human decision-
makers. In addition to acknowledging bounded rationality in humans and ‘human weakness’ (viz., the lower left quadrant in Fig. 1; HW), it is also important to take into consideration that AI cannot be completely free of bias, for two reasons. First, all AI systems based on machine learning have
a so-called inductive bias comprising the set of implicit or
explicit assumptions required for making predictions about
unseen data. Second, the output of machine learning systems
is based on past data collected in human decision-making
events (machine weakness, MW, viz. lower right quadrant
in Fig. 1). Uncovering the second type of bias may lead to
insights regarding past human performance and may ulti-
mately improve the overall process.
Examples of AI in the MDMP
It is important to examine the risks of AI and strategies for their mitigation. This mitigation, however, is useless without examining the corresponding opportunities at the same time (MS quadrant in Fig. 1). In this section, therefore, we present some examples of AI applications in the MDMP. In doing so, we provide an impetus for expanding the debate on responsible AI by taking every quadrant in Fig. 1 into account.

An example of machine strength is the use of AI to
aid the intelligence analyst in the generation of geospatial
information products for tactical terrain analysis. This is an
essential sub-step of the MDMP since military land opera-
tions depend heavily on terrain. AI-supported terrain analy-
sis enables the optimization of possible COAs for a military
commander, and additionally allows for an optimized analysis of the most likely enemy course of action (De Reus et al., 2021).
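As a toy illustration of the kind of geospatial primitive underlying such terrain-analysis support, the sketch below checks line of sight between two points on a small elevation grid, a basic building block for sightline and battle-position products. The grid and coordinates are invented, and operational terrain-analysis tools are far more sophisticated than this.

```python
# Toy line-of-sight check on a digital elevation grid (illustrative only).
# The elevation values and positions below are hypothetical.

ELEVATION = [  # metres above sea level on a coarse grid
    [100, 102, 105, 110],
    [101, 103, 120, 112],
    [ 99, 104, 125, 111],
    [ 98, 100, 108, 109],
]

def line_of_sight(grid, start, end, observer_height=2.0):
    """Return True if the straight line from start to end is not blocked by terrain."""
    (r0, c0), (r1, c1) = start, end
    h0 = grid[r0][c0] + observer_height
    h1 = grid[r1][c1] + observer_height
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, steps):
        t = i / steps
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sight_height = h0 + t * (h1 - h0)   # height of the sightline at this step
        if grid[r][c] > sight_height:       # terrain rises above the sightline
            return False
    return True

if __name__ == "__main__":
    print(line_of_sight(ELEVATION, (0, 0), (3, 3)))  # blocked by the 125 m ridge -> False
    print(line_of_sight(ELEVATION, (0, 0), (3, 0)))  # along the low western edge -> True
```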
Another example is the use of autonomous technologies to aid in target system analysis (TSA), a process that
normally takes months (Ekelhof, 2018). TSA consists of
the analysis of an enemy’s system in order to identify and
prioritize specific targets (and their components) with the
goal of resource optimization in neutralizing the opponent’s
most vulnerable assets (Jux, 2021). Examples of AI use in TSA include automated entity recognition in satellite footage to improve the information position needed to conduct TSA, and AI-supported prediction of enemy troop loca-
tions, buildup and dynamics based upon information gath-
ered from the imagery analysis phase. Ekelhof (2018) also
provides examples of autonomous technologies currently in
use for weaponeering (i.e., the assessment of which weapon
should be used for the selected targets and related military
objectives) and collateral damage estimation (CDE), both
sub-steps of the targeting process. Another illustrative exam-
ple of the added value of AI for the MDMP is in wargaming,
an important part of the COA analysis phase in the MDMP.
In wargames, for instance, AI can help participants to understand possible perspectives, perceptions, and calculations of adversaries (Davis & Bracken, 2021). Yet another example is
the possibility of a 3D view of a certain COA, enabling swift
examination of the terrain characteristics (e.g., potential
sightlines) to enhance decision-making (Kase et al., 2022).
AI-enabled cognitive systems can also collect and assess
information about the attentional state of human decision-
makers, using sensor technologies and neuroimaging data to
detect mind wandering or cognitive overload (Weelden et al.,
2022). Algorithms from other domains may also offer value to the MDMP, such as the weather-routing optimization algorithm for ships (Lin et al., 2013), the team-formation optimization tool used in sports (Beal et al., 2019), or the many applications of deep learning in natural language processing (NLP) (Otter et al., 2020), with NLP applications that summarize texts (such as Quillbot and Wordtune) decreasing the time to decision in the MDMP; a minimal extractive-summarization sketch is given below.
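The sketch below illustrates the basic idea of summarization support with a simple frequency-based extractive summarizer. It is not how the commercial tools named above work internally, and production decision-support systems would rely on trained neural models (Otter et al., 2020); the report text is invented.

```python
# Minimal frequency-based extractive summarizer (illustrative only).
# Real decision-support tools would rely on trained neural summarization models.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "are", "for", "on", "with"}

def summarize(text: str, max_sentences: int = 2) -> str:
    """Pick the sentences whose words occur most frequently in the whole text."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> float:
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)  # keep original sentence order

if __name__ == "__main__":
    report = ("Enemy armor has been observed moving towards the northern bridge. "
              "Local weather is expected to deteriorate in the afternoon. "
              "The northern bridge is the only crossing suitable for armor in this sector.")
    print(summarize(report, max_sentences=2))
```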
Finally, digital twin technology (using AI) has already demonstrated its
value in a military context and holds promise for future applications, e.g., enabling maintenance personnel to predict
future engine failures on airplanes (Mendi et al., 2021). In
the future, live monitoring of all physical assets relevant
to military operations, such as (hostile) military facilities,
platforms, and (national) critical infrastructure, might be
possible.
Conclusion
The debate on responsible AI in a military context should
not have a predominant focus on ethical issues regarding
LAWS. By characterizing this debate in four quadrants, i.e., human versus machine and strength versus weakness, we argued that the use of AI in the entire decision-making chain in military operations is feasible and necessary. We
described the MDMP and its challenges resulting from the
labor-intensive and handcrafted products it involves. The
susceptibility to cognitive biases and the time-consuming
character of those labor-intensive processes present limita-
tions to human decision-making. We conclude that the value
of AI can, therefore, be found in the capacity to support this
decision-making to optimize its outcome. Ignoring the capa-
bilities of AI to alleviate the limitations of human cogni-
tive performance in military operations, thereby potentially
increasing risks for military personnel and civilians, would
be irresponsible and unethical.
Open Access This article is licensed under a Creative Commons Attri-
bution 4.0 International License, which permits use, sharing, adapta-
tion, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source,
provide a link to the Creative Commons licence, and indicate if changes
were made. The images or other third party material in this article are
included in the article's Creative Commons licence, unless indicated
otherwise in a credit line to the material. If material is not included in
the article's Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will
need to obtain permission directly from the copyright holder. To view a
copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Altmann, J., & Sauer, F. (2017). Autonomous weapon systems and
strategic stability. Survival, 59(5), 117–142.
Beal, R., Norman, T. J., & Ramchurn, S. D. (2019). Artificial intel-
ligence for team sports: A survey. The Knowledge Engineering
Review, 34, e28.
Blair, D., Chapa, J., Cuomo, S., & Hurst, J. (2021). Humans and
hardware: an exploration of blended tactical workflows using
John Boyd’s OODA loop. In R. Johnson, M. Kitzen, & T. Sweijs
(Eds.), The conduct of war in the 21st century: Kinetic, connected and synthetic (pp. 93–115). Taylor & Francis Group.
Cotton, A. J. (2005). Information technology-information overload
for strategic leaders. Army War College.
Davies, P. H., & Gustafson, K. (2013). The intelligence cycle is dead,
long live the intelligence cycle: rethinking intelligence funda-
mentals for a new intelligence doctrine. In M. Phythian (Ed.),
Understanding the intelligence cycle (pp. 70–89). Routledge.
Davis, P. K., & Bracken, P. (2021). Artificial intelligence for war-
gaming and modeling. The Journal of Defense Modeling and
Simulation, 15485129211073126.
De Reus, N., Kerbusch, P., Schadd, M., & Ab de Vos, M. (2021).
Geospatial analysis for Machine Learning in Tactical Decision
Support. STO-MP-MSG-184. NATO.
Dhami, M. K., Belton, I. K., & Mandel, D. R. (2019). The “analy-
sis of competing hypotheses” in intelligence analysis. Applied
Cognitive Psychology, 33(6), 1080–1090.
Eidelman, S., & Crandall, C. S. (2012). Bias in favor of the sta-
tus quo. Social and Personality Psychology Compass, 6(3),
270–281.
Ekelhof, M. A. (2018). Lifting the fog of targeting. Naval War Col-
lege Review, 71(3), 61–95.
Hanska, J. (2020). War of time: Managing time and temporality in
operational art. Palgrave Macmillan.
Heuer, R. J. (1999). Psychology of intelligence analysis. Center for
the Study of Intelligence.
Horowitz, M. C., Scharre, P., & Velez-Green, A. (2019). A stable
nuclear future? The impact of autonomous systems and artificial
intelligence. arXiv preprint arXiv:1912.05291.
Johnson, J. (2019). The AI-cyber nexus: Implications for military
escalation, deterrence and strategic stability. Journal of Cyber
Policy, 4(3), 442–460. https://doi.org/10.1080/23738871.2019.1701693
Johnson, J. (2020a). Delegating strategic decision-making to
machines: Dr. Strangelove Redux? Journal of Strategic Stud-
ies. https://doi.org/10.1080/01402390.2020.1759038
Johnson, J. (2020b). Deterrence in the age of artificial intelligence
& autonomy: A paradigm shift in nuclear deterrence theory and
practice? Defense & Security Analysis, 36(4), 422–448.
Jux, A. (2021). Targeting. In M. Willis, A. Haider, D. C. Teletin,
& D. Wagner (Eds.), A Comprehensive approach to counter-
ing unmanned aircraft systems (pp. 147–166). Joint Air Power
Competence Centre.
Kase, S. E., Hung, C. P., Krayzman, T., Hare, J. Z., Rinderspacher,
B. C., & Su, S. M. (2022). The future of collaborative human-
artificial intelligence decision-making for mission planning.
Frontiers in Psychology, 1246.
Lever, M., & Schneider, S. (2021). Decision augmentation and auto-
mation with artificial intelligence: Threat or opportunity for
managers? Business Horizons, 64(5), 711–724. https://doi.org/10.1016/j.bushor.2021.02.026
Lin, Y.-H., Fang, M.-C., & Yeung, R. W. (2013). The optimization
of ship weather-routing algorithm based on the composite influ-
ence of multi-dynamic elements. Applied Ocean Research, 43,
184–194.
Lindelauf, R., Monsuur, H., & Voskuijl, M. (2022). Military heli-
copter flight mission planning using data science and operations
research. In NL ARMS, Netherlands Annual Review of Military
Studies. Leiden University Press.
Meerveld, H., & Lindelauf, R. (2022). Data science in military deci-
sion-making: A literature review. Retrieved from SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4217447
Mendi, A. F., Erol, T., & Doğan, D. (2021). Digital twin in the mili-
tary field. IEEE Internet Computing, 26(5), 33–40.
NATO Science and Technology Organization. (2020). Automation in
the intelligence cycle. Retrieved 21 October 2022, from https://www.sto.nato.int/Lists/STONewsArchive/displaynewsitem.aspx?ID=552
Osinga, F. P. (2007). Science, strategy and war: The strategic theory
of John Boyd. Routledge.
Otter, D. W., Medina, J. R., & Kalita, J. K. (2020). A survey of the
usages of deep learning for natural language processing. IEEE
Transactions on Neural Networks and Learning Systems, 32(2),
604–624.
Parker, C. G. (2020). The UK National Security Council and misuse
of intelligence by policy makers: Reducing the risk? Intelli-
gence and National Security, 35(7), 990–1006.
Reese, P. P. (2015). Military decisionmaking process: Lessons and
best practices. Center for Army Lessons Learned.
Richey, M. K. (2015). From crowds to crystal balls: Hybrid analytic
methods for anticipatory intelligence. American Intelligence
Journal, 32(1), 146–151.
Roff, H. M. (2014). The strategic robot problem: Lethal autonomous
weapons in war. Journal of Military Ethics, 13(3), 211–227.
Roff, H. M., & Danks, D. (2018). “Trust but Verify”: The difficulty
of trusting autonomous weapons systems. Journal of Military
Ethics, 17(1), 2–20.
Roskes, M., Sligte, D., Shalvi, S., & De Dreu, C. K. (2011). The
right side? Under time pressure, approach motivation leads to
right-oriented bias. Psychological Science, 22(11), 1403–1407.
Sharkey, N. (2010). Saying ‘no!’ to lethal autonomous targeting.
Journal of Military Ethics, 9(4), 369–383.
Silver, D., Huang, A., Maddison, C., Guez, A., Sifre, L., Van Den
Driessche, G., & Dieleman, S. (2016). Mastering the game of Go
with deep neural networks and tree search. Nature, 529(7587),
484–489.
Suresh, H., & Guttag, J. (2021). A framework for understanding
sources of harm throughout the machine learning life cycle. In
Equity and access in algorithms, mechanisms, and optimization
(pp. 1–9).
Tóth, Z., Caruana, R., Gruber, T., & Loebbecke, C. (2022). The
dawn of the AI robots: Towards a new framework of AI robot
accountability. Journal of Business Ethics, 178(4), 895–916.
Van Den Bosch, K., & Bronkhorst, A. (2018). Human-AI cooperation
to benefit military decision making. NATO.
Weber, R. O., & Aha, D. W. (2003). Intelligent delivery of military
lessons learned. Decision Support Systems, 34(3), 287–304.
Weelden, E. V., Alimardani, M., Wiltshire, T. J., & Louwerse, M.
M. (2022). Aviation and neurophysiology: A systematic review. Applied Ergonomics, 105, 103838. https://doi.org/10.1016/j.apergo.2022.103838