Ethics and Information Technology (2023) 25:14
https://doi.org/10.1007/s10676-023-09683-0
ORIGINAL PAPER
The irresponsibility of not using AI in the military
H. W. Meerveld¹ · R. H. A. Lindelauf¹ · E. O. Postma² · M. Postma²
¹ Faculty of Military Sciences, Data Science Center of Excellence, Netherlands Defence Academy, Breda, The Netherlands
² Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, The Netherlands
Correspondence: R. H. A. Lindelauf, rha.lindelauf.01@mindef.nl; roy_lindelauf@hotmail.com
Accepted: 27 January 2023 / Published online: 14 February 2023
© The Author(s) 2023
Abstract
The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by
the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a
considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the
military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks
of the use of AI in target engagement, the debate about responsible AI should (i) concern each step in the MDMP, and (ii)
take ethical considerations and enhanced performance in military operations into account. A characterization of the debate
on responsible AI in the military, considering both machine and human weaknesses and strengths, is provided in this paper.
We present inroads into the improvement of the MDMP, and thus military operations, through the use of AI for decision
support, taking each quadrant of this characterization into account.
Keywords: Decision-making · Military decision-making process · Responsible AI · Intelligence cycle
Introduction
While in many private and public sector domains AI solutions are becoming an essential tool driving change and development, progress in the use of AI for military purposes has been hindered by a number of important ethical questions for which answers have been lacking. These questions primarily concern autonomous military platforms and typically center on the use of lethal autonomous weapon systems (LAWS) (see, e.g., Sharkey, 2010; Roff, 2014; Roff & Danks, 2018; Tóth et al., 2022) and the potential risk of nuclear escalation (see, e.g., Altmann & Sauer, 2017; Johnson, 2019, 2020a, 2020b; Horowitz et al., 2019). A recent literature review on data science and AI in military decision-making found that most of the studies examining these topics originate in the social sciences. As a result, the debate about the use of AI for military purposes, although of high strategic importance, appears to be limited in scope and perspective. Additionally, the use of data science at the operational and strategic levels seems to be largely under-examined in the current literature (Meerveld & Lindelauf, 2022). In this paper, we argue that the ethical discussion on the use of AI in military operations should shift its focus away from so-called 'killer robots' and fully autonomous AI applications towards solutions that remain subject to (meaningful) human control. As argued by various researchers [e.g., Tóth et al. (2022)], the use of lethal autonomous weapon systems is generally considered to be illegal and immoral, despite potentially decreasing risks to military personnel. There is also consensus among policy makers that AI cannot fully replace human decision-making. However, it is necessary to examine both the opportunities and risks of military AI in a broader context and to explore how AI technology can be controlled, supervised and potentially assimilated into force structure and doctrine (Johnson, 2020a, b), either strengthening or complicating deterrence (Johnson, 2019, 2020a, b). In line with the consequentialist approach towards the ethics of military AI, we argue that in discussing the responsibility of AI-based decision support techniques, military effectiveness and the entire decision-making chain in military operations should be taken into account.
For example, certain types of military AI robots subjected to human control and judgment may be permissible for self-defense purposes, human-AI teaming could lead to faster and more appropriate decision-making under pressure and uncertainty, and AI systems could be used broadly for adaptive training of military personnel, thereby helping to mitigate decision-making biases [e.g., by detecting drowsiness or fatigue from neurometric signals in the brain (Weelden et al., 2022)]. In Fig. 1 we visualize the current debate on responsible AI in a military context and its focal points (i.e., the lower right quadrant, Machine Weakness (MW), and the endpoint of the MDMP). In what follows, we first elaborate on the military decision-making process (MDMP) that in large part precedes lethal target engagement on a battlefield. Next, we present some examples of the potential use of AI solutions in the MDMP together with their benefits, and address the issue of the (ir)responsibility of military AI.

Fig. 1 Characterization of the debate on responsible AI in a military context. The red dashed lines indicate the focus of the current literature, while the ideal scope of the debate is represented by the blue dashed lines. (Color figure online)
AI insupport ofthemilitary decision‑making
process (mdmp)
Military decision-making consists of an iterative logical
planning method to select the best course of action for a
given battlefield situation. It can be conducted at levels
ranging from tactical to strategic. Each step in this process
lends itself to automation. This does not only hold for the
MDMP, but also for related processes like the intelligence
cycle and the targeting cycle. As argued in Ekelhof (2018),
Fig. 1 Characterization of the
debate on responsible AI in a
military context. The red dashed
lines indicate the focus of cur-
rent literature, while the ideal
scope of the debate is repre-
sented by the blue dashed lines.
(Color figure online)
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
The irresponsibility ofnotusing AI inthemilitary
1 3
Page 3 of 6 14
instead of focusing on the target engagement as an endpoint,
the process should be examined in its entirety. To illustrate
this point, we visualized the preferred scope with the blue
circle in Fig.1. Below, we first briefly describe the MDMP.
Subsequently, we explore the potential advantages of AI in
decision-making and provide some examples of how AI can
specifically support the MDMP at several different (sub-)
steps.
The MDMP andits challenges
The US Army defines seven steps in the MDMP: (1) receipt
of mission, (2) mission analysis, (3) course of action (COA)
development, (4) COA analysis, (5) COA comparison, (6)
COA approval, and (7) the order production, dissemina-
tion, and transition (Reese, 2015). The level of detail of the
MDMP depends on the available time and resources, as well
as other factors. Each step in the MDMP has numerous sub-
steps that generate intermediate products. Examples include
intelligence products developed during the intelligence
preparation of the battlefield (IPB) that are used to indicate
COAs and decision points for commanders or geospatial
products from terrain analyses that can include recommen-
dations on battle positions and optimal avenues of approach.
The intelligence cycle, per NATO standard consisting of four
steps (Direct, Collect, Process, and Disseminate) (Davies &
Gustafson, 2013), is the separate but relating sub-process
by which these intelligence products are created. Other
examples of sub-processes in the MDMP are the targeting
cycle, as explained by Ekelhof (2018), or the continuous
lessons learned process in order to incorporate best prac-
tices and lessons learned into military doctrine (Weber &
Aha, 2003), which ultimately forms important input in, for
example, the COA development phase.The MDMP and its
related processes entail many labor-intensive, handcrafted
products. This has two important consequences. First, due
to the complexity of the information space, the MDMP is
hugely susceptible to cognitive biases. These can be both
conscious and unconscious and may result in suboptimal
performance. An example of a cognitive bias is groupthink
which is a problem typically encountered during the analy-
sis and assessment phase of the Intelligence Cycle (Parker,
2020). Another example is the anchoring bias when deci-
sions are made based on initial evidence (the anchor) (Heuer,
1999), as exemplified in a scenario where a group of aviators
need to determine the optimal location of battle positions
after having received an initial list of good locations during
helicopter mission planning. Even though intuitive decision-
making in the MDMP may be effective, it is well known that
both intuition and uncertainty can lead to faulty and erro-
neous decision outcomes (Van Den Bosch & Bronkhorst,
2018). Because our human cognitive mechanisms are ill-
equipped to convert information from a high volume of data
into valuable knowledge (Cotton, 2005), the susceptibility
to cognitive biases increases with the exponential growth of
data volume (Heuer, 1999). It is expected that the challenge
of information overload will only increase, since modern
military operations increasingly rely on open-source data
(Ekelhof, 2018). Second, labor-intensive processes tend to
be time-consuming. The contemporary digitized environ-
ment results in a proliferation of various data sources in
different formats (i.e., numerical, text, sound, and image)
and intelligence requires their fusion and interpretation (Van
Den Bosch & Bronkhorst, 2018). In most military situations,
it is of high importance to design efficient and streamlined
planning processes, avoiding labor-intensive sub-steps, when
possible, to ensure that no time is lost (Hanska, 2020). After
all, the aim is to outpace the opponent’s OODA-loop (i.e.,
Observe, Orient, Decide, Act) (Osinga, 2007) and AI-based
automation can be an important driver of such efficiency
gain. In addition, time pressure can further increase the
chance of a cognitive bias [e.g. (Roskes etal., 2011) and
(Eidelman & Crandall, 2012)]. In sum, human decision-
making mechanisms appear to be deficient in many military
circumstances given a limited capacity to process all poten-
tially relevant data and a limited amount of time. The value
of AI is found in the capacity to support human decision-
making, which optimizes the overall outcome (Lindelauf
etal., 2022). In the next section, we address the opportuni-
ties offered by AI in more detail by presenting examples of
automation of (sub-) elements in the MDMP.
The added value ofAI formilitary decision‑making
Given the limitations of human decision-making, the advan-
tage of (partial) automatization with AI can be found both
in the temporal dimension and in decision quality. A NATO
Research Task Group for instance examined the need for
automation in every step of the intelligence cycle (NATO
Science & Technology Organization, 2020) and found that
AI helps to automate manual tasks, identify patterns in com-
plex datasets and accelerate the decision-making process in
general. Since the collection of more information and per-
spectives results in less biased intelligence products (Richey,
2015), using computer power to increase the amount of data
that can be processed and analyzed may reduce cognitive
bias. Confirmation bias, for instance, can be avoided through
the automated analysis of competing hypotheses (Dhami
etal., 2019). Other advantages of machines over humans
are that they allow for scalable simulations, conduct logi-
cal reasoning, have transferable knowledge and an expand-
able memory space (Suresh & Guttag, 2021), (Silver, etal.,
2016).An important aspect of the current debate about the
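To make the idea of automated support for structured analytic techniques concrete, the following minimal sketch illustrates one way the scoring step of the analysis of competing hypotheses (ACH) could be automated. It is our illustration only: the hypotheses, evidence items, credibility weights, and consistency scores are invented, and operational ACH tooling of the kind discussed by Dhami et al. (2019) is considerably richer.

```python
# Minimal automated scoring for the Analysis of Competing Hypotheses (ACH).
# Hypotheses, evidence items, weights, and consistency scores are invented
# for illustration only.
HYPOTHESES = ["enemy attacks north", "enemy attacks south", "enemy defends in place"]

# Each evidence item: description, credibility weight (0-1), and a consistency
# score per hypothesis (-2 = strongly inconsistent ... +2 = strongly consistent).
EVIDENCE = [
    {"item": "bridging assets moving north", "weight": 0.9, "scores": [2, -1, -1]},
    {"item": "artillery registered on southern axis", "weight": 0.6, "scores": [-1, 2, 0]},
    {"item": "defensive positions being dug", "weight": 0.8, "scores": [-1, -1, 2]},
    {"item": "increased radio traffic, origin unclear", "weight": 0.4, "scores": [1, 1, 0]},
]

def rank_hypotheses(hypotheses, evidence):
    """Classic ACH emphasis: penalize inconsistency. The hypothesis with the
    least weighted inconsistency (tie-broken by the most weighted consistency)
    is ranked first."""
    results = []
    for i, h in enumerate(hypotheses):
        inconsistency = sum(e["weight"] * -e["scores"][i] for e in evidence if e["scores"][i] < 0)
        consistency = sum(e["weight"] * e["scores"][i] for e in evidence if e["scores"][i] > 0)
        results.append((inconsistency, -consistency, h))
    return sorted(results)

if __name__ == "__main__":
    for inconsistency, neg_consistency, h in rank_hypotheses(HYPOTHESES, EVIDENCE):
        print(f"{h}: weighted inconsistency {inconsistency:.2f}, consistency {-neg_consistency:.2f}")
```

The point of such automation is not to replace the analyst's judgment, but to make the inconsistency bookkeeping explicit and repeatable across large bodies of evidence.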
An important aspect of the current debate about the use of AI for decision-making concerns the potential dangers of providing AI systems with too much autonomy, leading to unforeseen consequences. Part of the solution is to provide sufficient information to the leadership about how the AI systems have been designed, what their decisions are based on (explainability), which tasks are suitable for automation, and how to deal with technical errors (Lever & Schneider, 2021). Tasks not suitable for automation, i.e., those in which humans outperform machines, are typically tasks of high complexity (Blair et al., 2021). The debate on responsible AI should therefore also take human strengths (HS quadrant) into account. In practice, AI systems cannot work in isolation but need to team up with human decision-makers. Next to the acknowledgment of bounded rationality in humans and 'human weakness' (viz. the lower left quadrant in Fig. 1; HW), it is also important to take into consideration that AI cannot be completely free of bias, for two reasons. First, all AI systems based on machine learning have a so-called inductive bias, comprising the set of implicit or explicit assumptions required for making predictions about unseen data. Second, the output of machine learning systems is based on past data collected in human decision-making events (machine weakness, MW, viz. the lower right quadrant in Fig. 1). Uncovering the second type of bias may lead to insights regarding past human performance and may ultimately improve the overall process.
Examples ofAI intheMDMP
It is important to examine the risks of AI and strategies for
their mitigation. This mitigation, however, is useless without
examining the corresponding opportunities at the same time
(MS quadrant in Fig.1). In this paragraph, therefore, we
present some examples of AI applications in the MDMP. In
doing so, we provide an impetus for expanding the debate
on responsible AI by taking every quadrant in Fig.1 into
account.An example of machine strength is the use of AI to
aid the intelligence analyst in the generation of geospatial
information products for tactical terrain analysis. This is an
essential sub-step of the MDMP since military land opera-
tions depend heavily on terrain. AI-supported terrain analy-
sis enables the optimization of possible COAs for a military
commander, and additionally allows for an optimized analy-
sis of the most likely enemy course of action (De Reus etal.,
2021). Another example is the use of autonomous technolo-
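As a concrete, if deliberately simplified, illustration of AI-supported terrain analysis, the sketch below scores a toy terrain grid and computes a least-cost avenue of approach with Dijkstra's algorithm. The grid values, cost semantics, and start and goal cells are assumptions made for the example; the geospatial products described by De Reus et al. (2021) are derived from real elevation, soil, vegetation, and threat data rather than hand-coded values.

```python
import heapq

# Toy terrain grid: each cell holds a traversal cost (higher = slower going,
# e.g., marsh or dense forest); None marks impassable cells (e.g., a lake).
TERRAIN = [
    [1, 1, 3, 3, 1],
    [1, None, 4, 1, 1],
    [2, None, 4, 1, 5],
    [1, 1, 1, 1, 5],
    [1, 5, 5, 1, 1],
]

def least_cost_route(grid, start, goal):
    """Dijkstra over the grid: returns (total_cost, path) for the cheapest
    avenue of approach from start to goal, or (inf, []) if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                new_cost = cost + grid[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

if __name__ == "__main__":
    cost, route = least_cost_route(TERRAIN, start=(0, 0), goal=(4, 4))
    print(f"estimated movement cost: {cost}")
    print("suggested avenue of approach:", route)
```

The same pattern, applied to large real-world cost surfaces, is what makes automated generation of candidate avenues of approach and battle positions attractive for the time-pressed planner.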
Another example is the use of autonomous technologies to aid in target system analysis (TSA), a process that normally takes months (Ekelhof, 2018). TSA consists of the analysis of an enemy's system in order to identify and prioritize specific targets (and their components), with the goal of optimizing resources for neutralizing the opponent's most vulnerable assets (Jux, 2021). Examples of AI use in TSA include automated entity recognition in satellite footage to improve the information position necessary to conduct TSA, and AI-supported prediction of enemy troop locations, buildup, and dynamics based on information gathered from the imagery analysis phase.
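A minimal sketch of how detector output might feed such an analysis is given below. The detection records, entity classes, class weights, and confidence threshold are all invented for illustration; in practice the detections would come from an imagery-analysis model and the weighting from a proper functional analysis of the adversary's system.

```python
from collections import Counter

# Hypothetical output of an imagery-analysis model: detected entities with a
# class label, a confidence score, and a (row, col) map-grid reference.
# Classes, scores, and grid references are invented for illustration.
detections = [
    {"cls": "artillery", "conf": 0.91, "cell": (12, 7)},
    {"cls": "artillery", "conf": 0.62, "cell": (12, 7)},
    {"cls": "radar",     "conf": 0.88, "cell": (12, 8)},
    {"cls": "truck",     "conf": 0.55, "cell": (3, 14)},
    {"cls": "truck",     "conf": 0.47, "cell": (3, 15)},
]

# Assumed importance of each entity class to the adversary's system: a crude
# stand-in for the functional analysis an analyst would perform.
CLASS_WEIGHT = {"artillery": 3.0, "radar": 2.0, "truck": 1.0}
CONF_THRESHOLD = 0.5  # discard low-confidence detections

def rank_candidate_target_areas(dets):
    """Aggregate weighted, confident detections per grid cell and rank cells."""
    score = Counter()
    for d in dets:
        if d["conf"] >= CONF_THRESHOLD:
            score[d["cell"]] += CLASS_WEIGHT.get(d["cls"], 1.0) * d["conf"]
    return score.most_common()

if __name__ == "__main__":
    for cell, s in rank_candidate_target_areas(detections):
        print(f"grid cell {cell}: priority score {s:.2f}")
```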
Ekelhof (2018) also provides examples of autonomous technologies currently in use for weaponeering (i.e., the assessment of which weapon should be used for the selected targets and related military objectives) and collateral damage estimation (CDE), both sub-steps of the targeting process. Another illustrative example of the added value of AI for the MDMP is in wargaming, an important part of the COA analysis phase of the MDMP. In wargames, AI can, for instance, help participants to understand the possible perspectives, perceptions, and calculations of adversaries (Davis & Bracken, 2021).
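The following toy sketch hints at what "understanding adversary calculations" can mean computationally: given an assumed payoff matrix over friendly and adversary courses of action, it computes the adversary's best response to each friendly option and recommends the friendly COA with the best worst-case outcome. The COAs and payoff values are invented, and real wargaming support of the kind discussed by Davis and Bracken (2021) involves far richer models of behavior and uncertainty.

```python
# Toy payoff matrix for one wargame turn: rows are friendly courses of action,
# columns are adversary courses of action, entries are an assumed friendly
# utility (higher is better). All names and values are illustrative only.
FRIENDLY_COAS = ["envelop_north", "frontal_assault", "defend_and_delay"]
ENEMY_COAS = ["counterattack", "withdraw", "dig_in"]
PAYOFF = {
    "envelop_north":    {"counterattack": 2,  "withdraw": 8, "dig_in": 5},
    "frontal_assault":  {"counterattack": -3, "withdraw": 9, "dig_in": 1},
    "defend_and_delay": {"counterattack": 4,  "withdraw": 3, "dig_in": 4},
}

def adversary_best_response(friendly_coa):
    """Assume a rational adversary who minimizes friendly utility."""
    return min(ENEMY_COAS, key=lambda e: PAYOFF[friendly_coa][e])

def maximin_choice():
    """Pick the friendly COA with the best worst-case outcome."""
    worst_case = {f: PAYOFF[f][adversary_best_response(f)] for f in FRIENDLY_COAS}
    best = max(worst_case, key=worst_case.get)
    return best, worst_case

if __name__ == "__main__":
    for f in FRIENDLY_COAS:
        print(f"{f}: expected adversary response -> {adversary_best_response(f)}")
    choice, worst = maximin_choice()
    print("maximin recommendation:", choice, worst)
```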
Yet another example is the possibility of a 3D view of a certain COA, enabling swift examination of terrain characteristics (e.g., potential sightlines) to enhance decision-making (Kase et al., 2022). AI-enabled cognitive systems can also collect and assess information about the attentional state of human decision-makers, using sensor technologies and neuroimaging data to detect mind wandering or cognitive overload (Weelden et al., 2022).
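As a rough sketch of such attentional-state monitoring, the example below computes a theta/alpha bandpower ratio from a synthetic EEG-like signal as a toy fatigue proxy. The sampling rate, the ratio as an indicator, and the alert threshold are simplifying assumptions; validated neurometric pipelines of the kind reviewed by Weelden et al. (2022) are substantially more involved.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz

def bandpower(signal, fs, low, high):
    """Mean spectral power of `signal` in the [low, high] Hz band (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def fatigue_index(eeg_window):
    """Toy drowsiness proxy: theta (4-8 Hz) power relative to alpha (8-13 Hz).
    A rising ratio is often associated with fatigue; the 1.2 threshold used
    below is an arbitrary illustration, not a validated cut-off."""
    theta = bandpower(eeg_window, FS, 4, 8)
    alpha = bandpower(eeg_window, FS, 8, 13)
    return theta / alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1 / FS)
    # Synthetic 10-second EEG-like trace: an alpha rhythm plus a stronger theta
    # component and noise, standing in for a real neurometric recording.
    eeg = (0.8 * np.sin(2 * np.pi * 10 * t)
           + 1.1 * np.sin(2 * np.pi * 6 * t)
           + 0.3 * rng.standard_normal(t.size))
    ratio = fatigue_index(eeg)
    advice = "flag operator for fatigue check" if ratio > 1.2 else "no action"
    print(f"theta/alpha ratio: {ratio:.2f} -> {advice}")
```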
Algorithms from other domains may also be of value to the MDMP, such as the weather-routing optimization algorithm for ships (Lin et al., 2013), the team formation optimization tool used in sports (Beal et al., 2019), or the many applications of deep learning in natural language processing (NLP) (Otter et al., 2020), with NLP applications that summarize texts (such as Quillbot and Wordtune) decreasing the time to decision in the MDMP.
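As an illustration of the summarization use case, the sketch below condenses an invented free-text report using an off-the-shelf summarization pipeline. It assumes the Hugging Face transformers library (and a first-run download of a default pretrained model); the report text and length settings are our own choices for the example.

```python
# Requires: pip install transformers torch
# Downloads a default pretrained summarization model on first run.
from transformers import pipeline

REPORT = (
    "Reconnaissance elements report increased vehicle movement along the "
    "northern supply route over the last 48 hours. Bridging equipment was "
    "observed near the river crossing at grid NV 1234. Local sources indicate "
    "fuel shortages in the adjacent sector, and weather is expected to "
    "deteriorate within 24 hours, limiting aerial observation."
)  # invented example text

def summarize_report(text: str) -> str:
    """Condense a free-text report into a short planning brief."""
    summarizer = pipeline("summarization")
    result = summarizer(text, max_length=45, min_length=15, do_sample=False)
    return result[0]["summary_text"]

if __name__ == "__main__":
    print(summarize_report(REPORT))
```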
Finally, digital twin technology (using AI) has already demonstrated its value in a military context and holds promise for future applications, e.g., enabling maintenance personnel to predict future engine failures on airplanes (Mendi et al., 2021). In the future, live monitoring of all physical assets relevant to military operations, such as (hostile) military facilities, platforms, and (national) critical infrastructure, might be possible.
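A minimal sketch of the predictive-maintenance idea behind such digital twins is given below: a classifier is trained on synthetic engine telemetry to estimate near-term failure risk for a specific airframe. The sensor features, failure rule, and readings are invented for illustration and do not model any real system.

```python
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic engine telemetry: [exhaust temperature (deg C), vibration (mm/s),
# hours since overhaul]. Values and the failure rule below are invented.
n = 2000
X = np.column_stack([
    rng.normal(620, 30, n),   # exhaust gas temperature
    rng.gamma(2.0, 1.5, n),   # vibration level
    rng.uniform(0, 1500, n),  # hours since last overhaul
])
# Toy ground truth: failures become likely for hot, vibrating, high-hour engines.
risk = 0.002 * (X[:, 0] - 600) + 0.15 * X[:, 1] + 0.0015 * X[:, 2]
y = (risk + rng.normal(0, 0.3, n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Query the 'digital twin' with a specific airframe's current readings.
current_readings = np.array([[655.0, 6.2, 1320.0]])
prob_failure = model.predict_proba(current_readings)[0, 1]
print(f"predicted near-term failure probability: {prob_failure:.2f}")
```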
Conclusion
The debate on responsible AI in a military context should not have a predominant focus on ethical issues regarding LAWS. By providing a characterization of this debate along four quadrants, i.e., human versus machine and strength versus weakness, we argued that the use of AI in the entire decision-making chain in military operations is feasible and necessary. We described the MDMP and its challenges resulting from the labor-intensive and handcrafted products it involves. The susceptibility to cognitive biases and the time-consuming character of those labor-intensive processes present limitations to human decision-making. We conclude that the value of AI can, therefore, be found in its capacity to support this decision-making and optimize its outcome. Ignoring the capabilities of AI to alleviate the limitations of human cognitive performance in military operations, thereby potentially increasing risks for military personnel and civilians, would be irresponsible and unethical.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Altmann, J., & Sauer, F. (2017). Autonomous weapon systems and strategic stability. Survival, 59(5), 117–142.
Beal, R., Norman, T. J., & Ramchurn, S. D. (2019). Artificial intelligence for team sports: A survey. The Knowledge Engineering Review, 34, e28.
Blair, D., Chapa, J., Cuomo, S., & Hurst, J. (2021). Humans and hardware: An exploration of blended tactical workflows using John Boyd's OODA loop. In R. Johnson, M. Kitzen, & T. Sweijs (Eds.), The conduct of war in the 21st century: Kinetic, connected and synthetic (pp. 93–115). Taylor & Francis Group.
Cotton, A. J. (2005). Information technology-information overload for strategic leaders. Army War College.
Davies, P. H., & Gustafson, K. (2013). The intelligence cycle is dead, long live the intelligence cycle: Rethinking intelligence fundamentals for a new intelligence doctrine. In M. Phythian (Ed.), Understanding the intelligence cycle (pp. 70–89). Routledge.
Davis, P. K., & Bracken, P. (2021). Artificial intelligence for wargaming and modeling. The Journal of Defense Modeling and Simulation, 15485129211073126.
De Reus, N., Kerbusch, P., Schadd, M., & Ab de Vos, M. (2021). Geospatial analysis for machine learning in tactical decision support. STO-MP-MSG-184. NATO.
Dhami, M. K., Belton, I. K., & Mandel, D. R. (2019). The "analysis of competing hypotheses" in intelligence analysis. Applied Cognitive Psychology, 33(6), 1080–1090.
Eidelman, S., & Crandall, C. S. (2012). Bias in favor of the status quo. Social and Personality Psychology Compass, 6(3), 270–281.
Ekelhof, M. A. (2018). Lifting the fog of targeting. Naval War College Review, 71(3), 61–95.
Hanska, J. (2020). War of time: Managing time and temporality in operational art. Palgrave Macmillan.
Heuer, R. J. (1999). Psychology of intelligence analysis. Center for the Study of Intelligence.
Horowitz, M. C., Scharre, P., & Velez-Green, A. (2019). A stable nuclear future? The impact of autonomous systems and artificial intelligence. arXiv preprint, arXiv:1912.05291.
Johnson, J. (2019). The AI-cyber nexus: Implications for military escalation, deterrence and strategic stability. Journal of Cyber Policy, 4(3), 442–460. https://doi.org/10.1080/23738871.2019.1701693
Johnson, J. (2020a). Delegating strategic decision-making to machines: Dr. Strangelove Redux? Journal of Strategic Studies. https://doi.org/10.1080/01402390.2020.1759038
Johnson, J. (2020b). Deterrence in the age of artificial intelligence & autonomy: A paradigm shift in nuclear deterrence theory and practice? Defense & Security Analysis, 36(4), 422–448.
Jux, A. (2021). Targeting. In M. Willis, A. Haider, D. C. Teletin, & D. Wagner (Eds.), A comprehensive approach to countering unmanned aircraft systems (pp. 147–166). Joint Air Power Competence Centre.
Kase, S. E., Hung, C. P., Krayzman, T., Hare, J. Z., Rinderspacher, B. C., & Su, S. M. (2022). The future of collaborative human-artificial intelligence decision-making for mission planning. Frontiers in Psychology, 1246.
Lever, M., & Schneider, S. (2021). Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers? Business Horizons, 64(5), 711–724. https://doi.org/10.1016/j.bushor.2021.02.026
Lin, Y.-H., Fang, M.-C., & Yeung, R. W. (2013). The optimization of ship weather-routing algorithm based on the composite influence of multi-dynamic elements. Applied Ocean Research, 43, 184–194.
Lindelauf, R., Monsuur, H., & Voskuijl, M. (2022). Military helicopter flight mission planning using data science and operations research. In NL ARMS, Netherlands Annual Review of Military Studies. Leiden University Press.
Meerveld, H., & Lindelauf, R. (2022). Data science in military decision-making: A literature review. Retrieved from SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4217447
Mendi, A. F., Erol, T., & Doğan, D. (2021). Digital twin in the military field. IEEE Internet Computing, 26(5), 33–40.
NATO Science and Technology Organization. (2020). Automation in the intelligence cycle. Retrieved 21 October 2022, from https://www.sto.nato.int/Lists/STONewsArchive/displaynewsitem.aspx?ID=552
Osinga, F. P. (2007). Science, strategy and war: The strategic theory of John Boyd. Routledge.
Otter, D. W., Medina, J. R., & Kalita, J. K. (2020). A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(2), 604–624.
Parker, C. G. (2020). The UK National Security Council and misuse of intelligence by policy makers: Reducing the risk? Intelligence and National Security, 35(7), 990–1006.
Reese, P. P. (2015). Military decisionmaking process: Lessons and best practices. Center for Army Lessons Learned.
Richey, M. K. (2015). From crowds to crystal balls: Hybrid analytic methods for anticipatory intelligence. American Intelligence Journal, 32(1), 146–151.
Roff, H. M. (2014). The strategic robot problem: Lethal autonomous weapons in war. Journal of Military Ethics, 13(3), 211–227.
Roff, H. M., & Danks, D. (2018). "Trust but Verify": The difficulty of trusting autonomous weapons systems. Journal of Military Ethics, 17(1), 2–20.
Roskes, M., Sligte, D., Shalvi, S., & De Dreu, C. K. (2011). The right side? Under time pressure, approach motivation leads to right-oriented bias. Psychological Science, 22(11), 1403–1407.
Sharkey, N. (2010). Saying 'no!' to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369–383.
Silver, D., Huang, A., Maddison, C., Guez, A., Sifre, L., Van Den Driessche, G., & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489.
Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In Equity and access in algorithms, mechanisms, and optimization (pp. 1–9).
Tóth, Z., Caruana, R., Gruber, T., & Loebbecke, C. (2022). The dawn of the AI robots: Towards a new framework of AI robot accountability. Journal of Business Ethics, 178(4), 895–916.
Van Den Bosch, K., & Bronkhorst, A. (2018). Human-AI cooperation to benefit military decision making. NATO.
Weber, R. O., & Aha, D. W. (2003). Intelligent delivery of military lessons learned. Decision Support Systems, 34(3), 287–304.
Weelden, E. V., Alimardani, M., Wiltshire, T. J., & Louwerse, M. M. (2022). Aviation and neurophysiology: A systematic review. Applied Ergonomics, 105, 103838. https://doi.org/10.1016/j.apergo.2022.103838