Digital War (2025) 6:4
https://doi.org/10.1057/s42984-025-00101-x
REVIEW
Investigating thekill cloud: information warfare, autonomous
weapons & AI
IshmaelBhila1
© The Author(s) 2025
Abstract
The “kill chain”—involving the analysis of data by human users of military technologies, the understanding of that data, and human decisions—has fast been replaced by the “kill cloud”, which necessitates, allows, and exacerbates an increased thirst for domination, violence against distant populations, and a culture of experimentation with human lives. This commentary reports on an interdisciplinary discussion organised by the Disruption Network Lab that brought together whistleblowers, artists, and experts investigating the impact of artificial intelligence and other emerging technologies on networked warfare. The discussions exposed the problematics of networked warfare and the kill cloud, their colonial overtones, their effects on human subjects in real life, their erroneous scientific rationalities, and the (business) practices and logics that enable this algorithmic machinery of violence. The conference took place from the 29th of November to the 1st of December 2024 at the Kunstquartier Bethanien in Berlin, Germany.
Keywords Autonomous weapons systems · Algorithmic warfare · Artificial intelligence (AI) · Surveillance · Kill chain
Introduction
The idea of the kill chain that characterised the “global war on terror”, problematic as it was, was based on the principle of “analyse-reflect-act” (Brose 2020). It allowed and necessitated new practices of targeted killing that were seen as morally plausible (Statman 2004) and more accurate. Yet, the myth of accuracy and objectivity of emerging technology-assisted targeted killing has been debunked by many scholars over time (Fereidooni and Heidt 2024), and the practices of these “kill lists” make the “messy targeting process more opaque and less traceable” (Weber 2016, p. 107). On the other hand, the “kill cloud” worsens this opacity and messiness with its characteristic dependence on vast amounts of data, increased autonomy of machines, and machine learning techniques. The idea of the kill cloud was first introduced by Lisa Ling and Cian Westmoreland to denote a rapidly growing networked infrastructure with a global reach, whose principal aim is domination in every spectrum of warfare, “including space, cyberspace, and the electromagnetic spectrum” (Bazzichelli 2021; Ling and Westmoreland 2021).
This commentary summarises the discussions at the “Investigating the Kill Cloud: Information Warfare, Autonomous Weapons & AI” conference that took place at the Kunstquartier Bethanien in Berlin, Germany, from the 29th of November to the 1st of December 2024. As a celebration of the Disruption Network Lab’s tenth anniversary, the Disruption Network Lab cooperated with the “Meaningful Human Control: Autonomous Weapons Systems Between Regulation and Reflection” (MEHUCO) research network to bring together whistleblowers, artists, civil society, and scholars from a variety of disciplines, including legal studies, science and technology studies (STS), International Relations (IR), political theory, media and communication studies, sociology, computer science, and others. The conference, curated by Tatiana Bazzichelli in collaboration with Jutta Weber, was also aimed at providing space to share the results of the investigations on the impact of artificial intelligence (AI) on new technologies of war, automated weapons, and networked warfare by the Disruption Network Institute’s affiliated research fellows. These research fellows (Fig. 1) were Lisa Ling (whistleblower, former Technical Sergeant, US Air Force Drone Surveillance Programme, USA), Jack Poulson (Executive Director—Tech Inquiry, USA), Naomi Colvin (Programme Director—Blueprint for Future Speech, UK), and Joana Moll (Artist & Researcher, ES). The three-day event also included roundtable discussions aimed at fostering discussion, knowledge sharing, and collaboration among experts from various disciplines.

Fig. 1 Keynote Panel with the Disruption Network Institute Fellows. From Left—Joana Moll, Naomi Colvin, Jack Poulson, Lisa Ling, and Tatiana Bazzichelli (Moderator). Picture credits: Disruption Network Lab
The MEHUCO research network, led by Paderborn University, consists of five institutions of higher education (University of Bonn, Leibniz University Hannover, Hamburg University, Ostfalia University of Applied Sciences, and Paderborn University). The “Investigating the Kill Cloud” conference was organised by the Disruption Network Lab together with the MEHUCO subproject “Swarm Technologies – Control and Autonomy in Complex Weapons Systems” hosted at Paderborn University. Although the discussions and papers presented were numerous, wide-ranging, and specialist, this paper presents a synthesised report of these different talks. It focuses on a few themes that came out of the conference while suggesting areas of further research. All the presentations and performances are available online for readers who wish to immerse themselves in more detail.
AI, thekill cloud, andwhistleblowing
Contemporary war and related military information practices rely on secrecy and classified data. Yet, with the increased automation of warfare, this secrecy is also closely constitutive of impunity, war crimes, and arbitrary killing of innocent populations from afar. The role of declassified data in “peering through the fog of war” has become urgent and indispensable (O’Loughlin et al. 2010). It is within this context that Jesselyn Radack argued that “whistleblowing is critical in the context of drone war because we would have no idea what’s occurring in the most secret and most lethal government programme without the perspective of insiders.”¹ Without whistleblowers, Radack pointed out, it would be impossible to know how governments undercount civilian deaths, regarding thousands of lives as collateral. Thus, the revealing work of people like Chelsea Manning, Julian Assange, Brandon Bryant, Daniel Hale, Christopher Aaron, and others has been instrumental in exposing the ugly truths about remote warfare. For Jack Poulson, whose presentation was on the methods of database investigation of Western intelligence and special operations, a major concern of data-centric warfare is the propensity of governments (in his case the US government) to delete evidence from the public record while having legal justifications to conceal any data that would expose the military’s injustices.²
In addition to exposing what is already taking place, Thomas Drake (Fig. 2) posited that whistleblowing helps warn the public about the threats of data-centric warfare.³ For Drake, whistleblowers pay an incredibly high price, risking everything to expose injustices for the public’s good. In data-centric warfare, it is essential to “hold AI to its own mirror” (ibid.). It is crucial to expose the dangers of AI as it expands the killing fields, increases the “murder ratio”, and becomes a narrative for the permissibility of inhuman methods of warmongering from a distance. Whistleblowing is crucial in exposing the false beliefs held by the public regarding “sanitised warfare,” which is touted as the advantage of data-centric warfare (Lisa Ling).⁴
Experimental, business‑based warfare
For STS scholars, who were part of the aptly named “Disarming the Kill Cloud” panel⁵ (Fig. 3), data-centric warfare is based on “sloppy military data science” relying on the principles of correlation (Jutta Weber), with increased algorithmic intensification that does not align with the truth of the lived realities of those on the ground (Lucy Suchman). Suchman critiqued the idea of “ground truth” in military data practices, arguing that, together with the datasets from which it is built, ground truth presupposes and requires operations of datafication—rendering worlds of interest as numbers (also see Suchman 2020). For Erik Reichborn-Kjennerud, data-centric warfare represents military logic, not the reality of the physical world. For Elke Schwarz, this military-business logic is underpinned by a logic of trial and error, a high-risk, high-reward business model that encourages the capitalist thinking of “move fast and break things” and “done is better than perfect”. This military philosophy mandates mistakes and valorises error in search of high financial rewards. Marijn Hoijtink’s work on this panel expanded these ideas to include the idea of “the platform” (see Hoijtink 2022; Hoijtink and Planqué-van Hardeveld 2022), which embeds the idea of perfect knowledge, the ability to link everything to everything, to systematise, and to naturalise the logic of targeting, as exemplified by the US Department of Defense’s (DoD) Joint All-Domain Command and Control (JADC2) that links all US military services through data sensors, platforms, and other communication technologies. Hoijtink’s presentation demonstrated the impact and influence of platform owners in contemporary warfare, which transforms the logic and practices of warfare. Naomi Colvin’s presentation showed how the UK’s policy positions on AI safety and risk are intricately linked to ideas from AI labs and Silicon Valley (Colvin 2024).

Fig. 3 The “Disarming the Kill Cloud” Panel in Session. From Left—Lucy Suchman, Marijn Hoijtink, Erik Reichborn-Kjennerud, Elke Schwarz, and Jutta Weber (Moderator). Picture credits: Disruption Network Lab
The work by Joana Moll, which suggests going beyond the techno-economic business model views of digital advertising (Ad Tech), showed how the mechanisms of Ad Tech are grounded in “critical relationships to bodies, emotion and cognition, natural resources, warfare, ideologies, and past and present politics” (Moll 2024, p. 5). Thus, Moll’s performance and presentation showed these intricate relationships while exposing how Ad Tech has become a major enabler of dangerous forms of surveillance that have fed into the mechanics of the kill cloud.
Fig. 2 Thomas Drake Giving a Keynote Speech. Picture credits: Disruption Network Lab
The coloniality ofalgorithmic violence
The study of algorithmic warfare and autonomous weapons systems has largely focused on legality, state positions and narratives, imaginaries, governance, ethics, and the problems of meaningful human control. Yet, power, historical context, and colonial violence are at the core of these systems’ practices, governance, and conceptualisation. In a joint presentation covering their work with the Airspace Tribunal, an ongoing project that is developing a proposed new human right to counter the colonisation of the sky, Shona Illingworth and Anthony Downey characterised the “pathologies of algorithmic warfare” as an offshoot of and central to neocolonialism. A number of the presentations at the conference exposed the coloniality of algorithmic violence—that is, a set of values, attitudes, epistemologies, and power structures that provide rationalisation and justification of colonial forms of violence and dominance (see Quijano 2007). For Donatella Della Ratta (discussion contribution), AI simply provides a narrative through which machines can be blamed for colonial violence perpetrated by human actors. Rather, algorithmic warfare is mainly targeted at lives that do not matter (see Butler 2009). For Khalil Dewan, algorithmic violence is necropolitical—it ascribes the right to death for those whose lives are deemed losable and disposable (Mbembe 2019).⁶ Shona Illingworth drew on evidence, gathered through the Airspace Tribunal’s development of the proposed new human right to protect the freedom to live without physical or psychological threat from above, that highlights how the sky, especially for the postcolonial subject (see Jabri 2012 on the postcolonial subject), is associated with imprisonment and death (Mohammed 2024), causing long-term psychological trauma and physiological harm (Illingworth et al. 2024).
The realities of algorithmic warfare are inadequately dealt with by the existing legal debates. For Anthony Downey, there is a need to question the historical function/context of AI, its operative logic in warfare, and its systemic functioning. AI is used to provide justifications for algorithmic violence, and it actualises threats it was created to predict (Downey 2024), which has real ramifications for real lives. These ramifications are even more threatening for populations in the Global South. For example, the assault on privacy that people lament in the West is nothing compared to the assault on the privacy of people who live in places with the most violent foreign surveillance practices, like Gaza, Afghanistan, and elsewhere (Lisa Ling).⁷ It is therefore essential to question and critically analyse the epistemological underpinnings, power structures, and colonial injustices created by the use of AI and related technologies in warfare.
Automated surveillance andtargeted killing
inGaza
The outbreak of the war in Gaza, characterised as a genocide by the International Court of Justice’s (ICJ) ruling,⁸ by the United Nations (UN), and by several other human rights organisations, exposed several aspects related to algorithmic warfare. In the first instance, it brought to the public’s attention the realities of algorithms and AI in warfare. Secondly, it forced the discourse on autonomous weapons systems to move from theoretical imaginations of how AI and machine learning techniques could influence warfare to discussions on how AI has facilitated genocide in practice (Amnesty International 2024). Thirdly, the war transformed the discourse that had largely been focused on autonomous weapons systems to now include other aspects of information warfare, like Decision Support Systems, that are equally problematic. The “Automated Surveillance & Targeted Killing in Gaza” panel⁹ (Fig. 4), which consisted of Matt Mahmoudi, Sophia Goodfriend, and Khalil Dewan and was moderated by Matthias Monroy, considered how surveillance technologies and practices of targeted killing operated in Gaza.

Fig. 4 The “Automated Surveillance & Targeted Killing in Gaza” Panel in Session. From Left—Khalil Dewan, Matt Mahmoudi, Sophia Goodfriend, and Matthias Monroy (Moderator). Picture credits: Disruption Network Lab
For the panellists, the technologies used for control and killing in Gaza are used to justify the ongoing injustices. Sophia Goodfriend argued that the algorithmic violence taking place in Gaza was largely based on three key political decisions: (1) the intentional targeting of civilian homes, (2) raising the threshold of civilian casualties, and (3) encouraging soldiers to over-rely on AI decisions (i.e. automation bias). For Khalil Dewan, the conscious and deliberate political move towards the automation of violence in Gaza has a long history, with Israel being the first country to have a targeting policy in the year 2000.
In practice, this targeted killing depends on a massive infrastructure of surveillance technologies that include the Red Wolf facial recognition technology, Blue Wolf (Goodfriend 2023), MABAT 2000, Checkpoint 56, and others, leading to what Matt Mahmoudi characterised as “automated apartheid” (also see Aizeki et al. 2024 on surveillance, control, and violence). This automated violence is done with impunity, particularly with many actors and infrastructures operating from outside the warzone, including satellites, developers, and so on (Dewan). For Khalil Dewan, the Law of Armed Conflict (LOAC) faces a key problem in the Gaza case because the perpetrators of violence cannot be attacked back. In addition, Dewan argued, the legal understanding of “hostile intent” of the victims in Gaza is erroneous because the entire population is labelled as hostile. Gaza has shown us that there is a need for new critical approaches that question the injustices of the laws of war that are biased against Global South populations.
Conclusion
As the effects of network-centric warfare become more and more real, and as the numbers of unaccounted-for victims pile up, it has become essential to expand the study of algorithmic warfare and the action to constrain its excesses. As the “Investigating the Kill Cloud” conference showed, the role of whistleblowers in exposing these excesses cannot be overstated. Secondly, as suggested by Anthony Downey at the conference, there is an urgent need to develop fit-for-purpose interdisciplinary methodologies for thinking “from within” the apparatuses that animate algorithmic warfare rather than merely representing or reflecting on them (see also Bazzichelli 2021 on the idea of disruption from within). Jack Poulson’s work presents a significant step in doing this from a whistleblowing perspective, while the majority of scholars represented in this report are largely critical of algorithmic violence. Thirdly, the frames of thinking on algorithmic warfare have been largely limited in scope, leaving most of the existing scholarship unable to question the conditions of injustice and power at both governance and practice levels. Thus, subversive theorisation, including Global South and postcolonial frameworks, can help unsettle the existing systemic injustices necessitated and encouraged by emerging technologies of war. This subversive thinking can open up spaces for gender-sensitive, vulnerability-focused, and environmentally conscious streams of questioning that have the potential to unsettle systemic violence and domination. My own work has so far questioned these injustices and coloniality at the governance level (Bhila 2024a, 2024b). The work of the Stop Killer Robots Campaign and like-minded organisations has exposed the fallacies of algorithmic warfare and targeted killing, but there is still a need for collaborative efforts towards the critique of the “kill cloud” at all levels, and by various communities of practice and thought.
Notes
1. Jesselyn Radack, Investigating the Kill Cloud Conference, 29 November 2024. Accessible at https://www.disruptionlab.org/investigating-the-kill-cloud
2. Jack Poulson, Investigating the Kill Cloud Conference, 29 November 2024. Accessible at https://www.disruptionlab.org/investigating-the-kill-cloud
3. Thomas Drake, Investigating the Kill Cloud Conference, 29 November 2024. Accessible at https://www.disruptionlab.org/investigating-the-kill-cloud
4. Lisa Ling, Investigating the Kill Cloud Conference, 29 November 2024. Accessible at https://www.disruptionlab.org/investigating-the-kill-cloud
5. Panel “Disarming the Kill Cloud” at the Investigating the Kill Cloud Conference, with Lucy Suchman, Erik Reichborn-Kjennerud, Marijn Hoijtink, and Elke Schwarz, moderated by Jutta Weber. https://www.disruptionlab.org/investigating-the-kill-cloud
6. Khalil Dewan, on the panel “Automated Surveillance & Targeted Killing in Gaza”, Investigating the Kill Cloud Conference, 30 November 2024. Accessible at https://www.disruptionlab.org/investigating-the-kill-cloud
7. Lisa Ling, Investigating the Kill Cloud Conference, 29 November 2024. Accessible at https://www.disruptionlab.org/investigating-the-kill-cloud
8. Summary of the Order of 26 January 2024, International Court of Justice, 2024.
9. Panel “Automated Surveillance & Targeted Killing in Gaza” at the Investigating the Kill Cloud Conference, with Matt Mahmoudi, Sophia Goodfriend, and Khalil Dewan, moderated by Matthias Monroy.
Funding Open Access funding enabled and organized by Projekt
DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Aizeki, M., M. Mahmoudi, and C. Schupfer. 2024. Resisting Borders and Technologies of Violence. Haymarket Books.
Amnesty International. 2024. ‘You Feel Like You Are Subhuman’: Israel’s Genocide Against Palestinians in Gaza. London: Amnesty International Ltd.
Bazzichelli, T. (ed.). 2021. Whistleblowing for Change: Exposing Systems of Power and Injustice. Digitale Gesellschaft. Bielefeld: transcript Verlag. https://doi.org/10.14361/9783839457931.
Bhila, I. 2024a. Strained missions: The diplomatic dilemmas of small states from the Global South in the area of autonomous weapons systems. Small States 7: 203–220.
Bhila, I. 2024b. Putting algorithmic bias on top of the agenda in the discussions on autonomous weapons systems. Digit. War 5: 201–212. https://doi.org/10.1057/s42984-024-00094-z.
Brose, C. 2020. The Kill Chain: Defending America in the Future of High-Tech Warfare. New York: Hachette Books.
Butler, J. 2009. Frames of war: When is life grievable? New York: Verso.
Colvin, N. 2024. “Ethical but Lethal”: The UK Position on AI Safety and Autonomous Weapons Systems. Berlin: Disruption Network Institute.
Downey, A. 2024. Algorithmic predictions and pre-emptive violence: Artificial intelligence and the future of unmanned aerial systems. Digit. War 5: 123–133. https://doi.org/10.1057/s42984-023-00068-7.
Fereidooni, S., and V. Heidt. 2024. The Fallacy of Precision: Deconstructing the Narrative Supporting AI-Enhanced Military Weaponry. In Harms and Risks of AI in the Military. https://doi.org/10.18356/6fce2bae-en.
Goodfriend, S. 2023. Algorithmic State Violence: Automated Surveillance and Palestinian Dispossession in Hebron’s Old City. Int. J. Middle East Stud. 55: 461–478. https://doi.org/10.1017/S0020743823000879.
Hoijtink, M. 2022. ‘Prototype warfare’: Innovation, optimisation, and the experimental way of warfare. Eur. J. Int. Secur. 7: 322–336. https://doi.org/10.1017/eis.2022.12.
Hoijtink, M., and A. Planqué-van Hardeveld. 2022. Machine Learning and the Platformization of the Military: A Study of Google’s Machine Learning Platform TensorFlow. Int. Polit. Sociol. 16: olab036. https://doi.org/10.1093/ips/olab036.
Illingworth, S., A. Hoskins, A. Downey, and R. Salecl. 2024. The Airspace Tribunal special issue: Editors’ introduction. Digit. War 5: 1–2. https://doi.org/10.1057/s42984-023-00089-2.
International Court of Justice. 2024. Summary of the Order of 26 January 2024.
Jabri, V. 2012. The Postcolonial Subject: Claiming Politics/Governing Others in Late Modernity. London: Routledge.
Ling, L., and C. Westmoreland. 2021. The Kill Cloud: Real World Implications of Network Centric Warfare. In Whistleblowing for Change: Exposing Systems of Power and Injustice, ed. T. Bazzichelli, 129–152. Digitale Gesellschaft. Bielefeld: transcript Verlag. https://doi.org/10.14361/9783839457931.
Mbembe, A. 2019. Necropolitics. Durham: Duke University Press.
Mohammed, O. 2024. The fear of the sky: Trans-generational trauma and the need for a new human right. Digit. War 5: 66–69. https://doi.org/10.1057/s42984-023-00069-6.
Moll, J. 2024. Cookies at War: A Somatic Approach to the Kill Cloud. Investigating the Kill Cloud. Disruption Network Institute.
O’Loughlin, J., F.D.W. Witmer, A.M. Linke, and N. Thorwardson. 2010. Peering into the Fog of War: The Geography of the WikiLeaks Afghanistan War Logs, 2004–2009. Eurasian Geography and Economics 51: 472–495. https://doi.org/10.2747/1539-7216.51.4.472.
Quijano, A. 2007. Coloniality and Modernity/Rationality. Cultural Studies 21: 168–178. https://doi.org/10.1080/09502380601164353.
Statman, D. 2004. Targeted Killing. Theor. Inq. Law 5: 179–198. https://doi.org/10.2202/1565-3404.1090.
Suchman, L. 2020. Algorithmic warfare and the reinvention of accuracy. Crit. Stud. Secur. 8: 175–187. https://doi.org/10.1080/21624887.2020.1760587.
Weber, J. 2016. Keep adding. On kill lists, drone warfare and the politics of databases. Environ. Plan. Soc. Space 34: 107–125. https://doi.org/10.1177/0263775815623537.
Ishmael Bhila is a Doctoral Researcher at Paderborn University studying small-state and Global South perspectives on algorithmic warfare, particularly autonomous weapons systems. His research focuses on the deliberative spaces in international forums like the United Nations Convention on Certain Conventional Weapons (UNCCW) where autonomous weapons systems are discussed. Ishmael was also a Research Fellow in the Meaningful Human Control: Between Regulation and Reflexion (MeHuCo) research network.