Digital War (2024) 5:201–212
https://doi.org/10.1057/s42984-024-00094-z
ORIGINAL ARTICLE
Putting algorithmic bias ontop oftheagenda inthediscussions
onautonomous weapons systems
IshmaelBhila1
Published online: 31 May 2024
© The Author(s) 2024
Abstract
Biases in artificial intelligence have been flagged in academic and policy literature for years. Autonomous weapons systems, defined as weapons that use sensors and algorithms to select, track, target, and engage targets without human intervention, have the potential to mirror systems of societal inequality and thereby reproduce algorithmic bias. This article argues that the problem of engrained algorithmic bias poses a greater challenge to autonomous weapons systems developers than most other risks discussed in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on LAWS), and that this should be reflected in the outcome documents of these discussions. This is mainly because it takes longer to rectify a discriminatory algorithm than it does to issue an apology for a mistake that occurs occasionally. Highly militarised states have controlled both the discussions and their outcomes, which have focused on issues that are pertinent to them while ignoring what is existential for the rest of the world. Various calls from civil society, researchers, and smaller states for a legally binding instrument to regulate the development and use of autonomous weapons systems have always included the call for recognising algorithmic bias in autonomous weapons, which has not been reflected in discussion outcomes. This paper argues that any ethical framework developed for the regulation of autonomous weapons systems should, in detail, ensure that the development and use of autonomous weapons systems do not discriminate against vulnerable sections of (global) society.
Keywords Autonomous weapons systems · Algorithmic bias · Automation bias · Inequality · Roboethics
Introduction
On the 17th of May 2023, the Chair of the Group of Govern-
mental Experts (GGE) on Lethal Autonomous Weapons Sys-
tems (LAWS), Ambassador Flavio Soares Damico, opened the morning session of the GGE meetings. On the agenda was the discussion of paragraphs 23–30 of the 2023 Report
of the GGE, dealing broadly with issues of human–machine
interaction in relation to autonomous weapons systems and
how these could be regulated. Paragraph 27 of the draft
report made passing reference to automation bias and “unin-
tended bias”. Noting that algorithmic bias would affect peo-
ple of colour, minorities, and other vulnerable populations,
the Philippine delegate pointed out that the report needed
to make “a clearer reference to the need to spell out the
risks arising from possible racial and gender bias".1 In the
same manner, the Canadian delegation noted that the lan-
guage used in the making of the Chair’s report would have
to “expand on the concept of unintended biases… to include
the language such as ethnicity, gender, age, and disability".2
Costa Rica, Panama, and Mexico buttressed the same point,
with Mexico going further to suggest that the outcome document should include measures to prevent, not merely mitigate, the algorithmic biases that come with AI.3
Despite these calls for clear language on the prevention
of algorithmic bias in autonomous weapons systems, the
draft report that was produced the next day omitted issues
of race, and the final report did not include any of the sug-
gested strong language, instead encouraging measures to
“reduce automation bias in system operators” and “reduce
unintended bias in artificial intelligence capabilities related
to the use of the weapon system".4 The report ignored all
the calls for the recognition of such a central problem in the
use of AI, particularly autonomous weapons systems which
may disproportionately impact vulnerable populations. The
following sections of this paper will show how AI systems
have disproportionately affected vulnerable populations,
making a case for a closer consideration of such problems
when discussing autonomy in weapons.
This paper, based on postcolonial critique of the socio-
technologies of war, contributes to the emerging discussion
on inequality and bias in autonomous weapons systems.
From the outset, it should be noted that the paper does not
address challenges with autonomous weapons systems that
target military objects; the paper is concerned with the
development of autonomous weapons systems that identify,
track, select, target, and engage human targets. While there
is extensive literature on bias in AI, the same level of scru-
tiny is yet to be applied to autonomous weapons systems. A
few scholars have addressed the problem of the potential risk
of bias posed by autonomy in weapons. Figueroa and others
address algorithmic bias against persons with disabilities
and the silence of that discussion in international discussions
on autonomous weapons systems (Figueroa et al. 2023).
Shama Ams’ paper deals with the convergence of military
and civilian uses of AI and addresses algorithmic bias in
passing (Ams 2023), and Catherine Jones’ paper focuses on
Western-centric research methods, with automation bias in
lethal autonomous weapons used only as an example (Jones
2021). While there have been increased scholarly debates on
military applications of AI, how the most significant forum for discussing the potential regulation of autonomous weapons systems has accounted for algorithmic bias has not been examined so far. This article therefore makes a
key empirical contribution to discourse relating to the global
governance of military applications of AI.
The problem of engrained algorithmic bias poses a
greater challenge to the justifications for the use of autono-
mous weapons systems by their developers than the risks of
proliferation, incidental loss of life, access by terrorists, and
other identified risks. This is mainly because it takes longer
to rectify a discriminatory algorithm, as seen in the many examples given in the following sections of this paper, than it does to issue an apology for a mistake that occurs occasionally. Powerful states have controlled both the discussions and
the outcome, which has focused on issues that are pertinent
to them while ignoring what is existential for the rest of the
world. This paper unpacks these dynamics, making a case
for centring the issue of algorithmic bias in outcome docu-
ments to reflect the discussions that take place within the
GGE on LAWS discussions.
Based on multidisciplinary literature from science and
technology studies (STS), engineering, computer science,
social science, and other fields, Section1 draws attention to
the emergence of biases in deep learning processes, natural
language processing (NLP), machine learning, and system
training and how these can have a profound impact on the
development and use of autonomous weapons systems. In
understanding these biases, the paper shows how these short-
comings can be transferred to autonomous weapons systems,
and how the risk of bias escalates in new contexts from the
system’s environment of development, particularly in dif-
ferent geographies and communities in the Global South.
Having shown the biases in AI and autonomous weap-
ons systems development, Section2 goes on to argue that
international discussions on autonomous weapons systems
should give centrality to the problem of algorithmic bias as
it would affect most of the global population if not properly
addressed. Section3 argues that both procedural and sub-
stantive international law should contain strong language
that can achieve, to use the Mexican delegation’s terms, the
prevention rather than the reduction of algorithmic bias in
autonomous weapons. While procedural law deals with the
rules, processes, and procedures of how international law-
making practices are conducted, substantive law seeks to
address inequalities, enhance the voice of the marginalised,
eradicate prejudices, acknowledge differences, and accom-
plish structural change (Fredman 2016). The paper con-
cludes by showing how language in the CCW process has
avoided adequately addressing a clear problem and makes a
case for a more sensitive approach that does not perpetuate
existing inequalities.
Methodology
This paper is a result of ongoing PhD research on the par-
ticipation of small states in the making of international
law relating to autonomous weapons systems. Based on
the postcolonial technoscientific framework, the paper cri-
tiques the situated knowledge that has marginalised issues
that are pertinent to the discourse and practice of algorithmic
warfare and to those who are most likely to be affected by
them. The paper adopts a qualitative research methodology,
analysing state submissions/proposals in the United Nations
Convention on Certain Conventional Weapons (CCW) since 2017, when formal discussions began with the establishment of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS).
State submissions are proposals made to the group to guide
discussion and to suggest what a normative and practical
outcome on the subject should look like. For this paper, I
analysed states’ treatment of the problem of algorithmic bias
in their submissions and the differences in state interests
towards mitigating the issue. A total of 73 working papers, submissions, and other proposals covering the period 2017–2023 were analysed.
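To illustrate one way such a corpus could be screened before close reading, the sketch below tallies bias-related keywords across submissions. This is not the paper's method, which is qualitative; it assumes, purely for illustration, that the documents have been downloaded as plain-text files into a hypothetical local folder named submissions.

```python
# Minimal sketch (not the paper's qualitative method): a keyword tally that
# could complement close reading of GGE submissions, assuming they have been
# saved as plain-text files in ./submissions (a hypothetical layout).
import pathlib
import re
from collections import Counter

KEYWORDS = ["bias", "discrimination", "race", "racial", "gender",
            "ethnicity", "disability"]

def keyword_counts(folder: str) -> Counter:
    counts = Counter()
    for path in pathlib.Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for word in KEYWORDS:
            # Whole-word matches only, so "racial" does not also count as "race".
            counts[word] += len(re.findall(rf"\b{word}\b", text))
    return counts

if __name__ == "__main__":
    for word, n in keyword_counts("submissions").most_common():
        print(f"{word}: {n}")
```

Such a tally can only flag where bias language appears; judging how seriously it is treated still requires the qualitative reading described above.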
The methodology also relied heavily on statements made
by states in the CCW GGEon LAWS meetings that have
been taking place on average twice each yearsince 2017.
These statements are in the form of legal debates guided
by the Chair of the meeting who sets the questions and
agenda, usually culminating in a chair’s report at the end
of each session. I sought to highlight when and how states
voiced concerns about algorithmic bias, the reaction to
those concerns, and an analysis of the outcomes of those
interventions.
Finally, the research analysed whether concerns and sug-
gestions about algorithmic bias were incorporated in the Chair's reports and, if they were, how these suggestions were
included. In this sense, I looked not only at the idea of the
inclusion of the concept but also the quality and importance
that it was awarded in relation to other issues. The state-
ments, submissions, Chairs’ agendas and reports, and related
documents are publicly available on the United Nations
Office at Geneva databases.5
The nature andforms ofautomation
andalgorithmic bias
The sociotechnologies of security (or insecurity) are open to
failure, owing mainly to their “inherent contradictions and
irremediable fault lines” (Suchman et al. 2017). Autono-
mous weapons systems are based on sensors, AI, and other
emerging technologies for profiling, biometrics, thermal
imaging, data mining, satellite observation, and population
metrics; the use of which is based on hierarchies of knowl-
edges, assumptions, vocabularies, and modes of attention
(Wilke 2017). As autonomous weapon systems are expected
to identify, monitor, and engage targets, their ability to tell
significant facts about human life, particularly in contexts
foreign to their conditions of design and development, is
highly overestimated (Adelman 2018).
This section considers several areas which characterise
autonomous weapons systems and how these are liable to
racialisation, discrimination, and bias. The paper acknowl-
edges the positive aspects of AI. However, the purpose of
this study is to analyse algorithmic bias and its potential
impact in the development and use of autonomous weap-
ons systems. The positive aspects of AI both in civilian
and military spaces are well documented and are still being
realised. This paper also focuses on the algorithms that ani-
mate autonomous and AI technologies, even though autonomous weapons systems are not always based on AI technologies.
The paper considers autonomy in weapons as a spectrum
with the potential for having challenges at any level, not as
a fixed system based solely on one type of technology. This
elusive nature of autonomy in weapons makes it essential to
have robust regulatory frameworks before they are deployed.
This contribution seeks to add to the conversation on a com-
prehensive regulatory regime for AI in the military domain,
focusing only on the pressing issue of algorithmic bias as
it pertains to autonomous weapons systems. We ought to
learn from what is already known about the problems of
AI. The argument is not about banning the development of
AI, and the paper does not engage in the debate on whether autonomous weapon systems are legally or ethically permissible in international law and in practice; it simply aims to attract more attention to the problem of algorithmic bias in discussions on autonomous weapons systems.
Autonomous weapons, like most AI-based security sys-
tems, engage in data collection, storage, and management
to enable the conduct of intelligence, surveillance, and
reconnaissance (ISR). For example, uncrewed aerial vehi-
cles (UAVs) can collect information about the profiles and
nature of targets, improve their functionality without human
oversight, and can be programmed to include numerous
responses to respective challenges (Konert and Balcerzak
2021). This data collection, storage, and management has
the potential to lead to racialisation through the biased crea-
tion and utilisation of data (M’charek et al. 2014). In 2021, the US government acknowledged that multiple civilians had been killed through “targeted killing" using drones, and
Peter Lee gave the example of Afghan civilians who were
killed having been misidentified as terrorists, noting that
these new weapons are used deliberately to coerce popu-
lations (Lee 2021), echoing Judith Butler’s argument that
those targeted are viewed as people whose lives are injura-
ble and lose-able (Butler 2009). Algorithms are only as
good as the data they are fed, which means that who creates
them and where they are used matter the most. The culture,
beliefs, and value system of the developer are influential in
how the algorithms will perform in settings that are differ-
ent from where they were programmed. Algorithmic bias is
classified into three categories: preexisting bias, which is influenced by unequal social structures and culture; technical bias, which emanates from technical shortcomings; and emergent bias, which results from a change in the environment or context within which the algorithm is used (Friedman and Nissenbaum 1996). I propose that efforts to regulate
autonomous weapons systems should consider bias at all
these levels to avoid unintended harms against marginalised
and vulnerable populations.
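A minimal sketch, using invented data rather than any real system, of how these three categories can surface in practice: a classifier whose errors are concentrated on an under-sampled group (preexisting and technical bias baked into the artefact) can look acceptable in its development context and fail badly once the demographic composition of the deployment context changes (emergent bias).

```python
# Minimal sketch (hypothetical data, no real system): making Friedman and
# Nissenbaum's three bias categories visible through evaluation.
import random

random.seed(0)

def make_population(n, share_group_a):
    """Synthetic 'people' with a group label and a ground-truth class."""
    people = []
    for _ in range(n):
        group = "A" if random.random() < share_group_a else "B"
        # Ground truth is independent of group: only 5% are true positives.
        label = 1 if random.random() < 0.05 else 0
        people.append({"group": group, "label": label})
    return people

def toy_classifier(person):
    """Stand-in for a trained model whose errors are worse on group B,
    e.g. because group B was under-sampled at training time."""
    error_rate = 0.02 if person["group"] == "A" else 0.20
    flip = random.random() < error_rate
    return person["label"] if not flip else 1 - person["label"]

def false_positive_rate(people):
    negatives = [p for p in people if p["label"] == 0]
    fp = sum(1 for p in negatives if toy_classifier(p) == 1)
    return fp / len(negatives)

# 'Development' context: mostly group A, so the aggregate metric looks fine.
dev = make_population(10_000, share_group_a=0.9)
# 'Deployment' context: mostly group B, so emergent bias becomes visible.
field = make_population(10_000, share_group_a=0.1)

for name, population in [("development", dev), ("deployment", field)]:
    print(name, "overall false-positive rate:",
          round(false_positive_rate(population), 3))
```

The point of the sketch is that an aggregate metric computed in the environment of development can conceal exactly the harm that appears when the same system is moved to a different population.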
It is essential to consider the risk posed by targeting
humans using sensors in war as this has serious ethical
and bias implications. The USA, for example, uses elec-
tronic and visual data collected through sensors to gather
intelligence in its "global war on terror" (US Office of the
Secretary of Defense 2007). Roboticists like Ronald Arkin
proposed the use of robots (autonomous weapons systems)
in war that could be emotionless and that utilise electro-
optics, robotic sensors, and synthetic aperture radar to observe
and target humans (Arkin 2010). However, several scholars
have criticised this uncritical trust in the use of automated
sensors and AI in deciding whom to target and kill. Critics of autonomous weapons have argued that algorithmic targeting is murky when it comes to distinguishing between civilians and combatants, especially
in unfamiliar cultural contexts (Sharkey 2010). Others
questioned the ability of autonomous weapons to identify
legitimate targets and to make strategic and tactical deci-
sions (Roff 2014; Johnson 2022; Hunter and Bowen 2023).
This takes us to the question of the “target of colour."
AI systems can learn from conversations, observation, and
identification of patterns, all of which are liable to sys-
temic bias (Klugman 2021). With the proliferation of the
use of large language models (LLMs) in many domains,
companies like Palantir developed an artificial intelligence
platform (AIP) for defence to “unleash the power of LLMs
and cutting-edge AI for defence".6 Palantir has worked
with the USA and UK governments for the provision of
military and security data and surveillance services, and
it has expanded its market to European security services,
accompanied by controversies of data privacy concerns
(Johnston and Pitel 2023). Data used to train machine
learning models can reproduce inequalities and be incom-
plete, leading to biased outcomes (Ferrara 2023). Military
applications of facial recognition are already in imple-
mentation with the Ukrainian battlefield having already
integrated Clearview AI’s facial recognition software to
identify enemies (Dave and Dastin 2022). The gathering of
biometric data using AI is highly faulty among people of colour, with one large-scale study showing that AI largely misidentified people from East and West Africa and East Asian people migrating to the US, whereas algorithms made in China were effective at identifying East Asian people but misidentified American Indians, African Americans, and other Asian populations (Grother et al. 2019).
Buolamwini and Gebru discovered that machine learning
algorithms are more likely to discriminate and misclassify
darker-skinned females (at a rate of 34.7%) as compared
to light-skinned males (0.8%) (Buolamwini and Gebru
2018). This is mainly a result of sampling bias whereby
an algorithm is trained to recognise a certain section of
society (Ferrara 2023) and autonomous weapons systems
are largely developed and trained in the USA, China,
states inthe European Union, and a few other leaders in
AI development, which leaves populations in parts of the
world where majorities are not white at a very high risk.
Google’s Photos algorithm was recorded to have misidentified a black couple as gorillas, and the company still had not found a viable solution to its biased algorithm years later (Vincent 2018; Grant and Hill 2023). If facial recogni-
tion software is used by states to identify security threats
(Israel HLS & CYBER 2022), the risk of killing the wrong
people grows extremely high if autonomous weapons sys-
tems are used among racially different populations with
the high probability in AI of misidentifying people of col-
our. Software like Faception (FACEPTION|Facial Personality Analytics 2023), which claims to be able to identify a terrorist or paedophile through facial recognition, is highly controversial but has been used by governments (Buolamwini and Gebru 2018).
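Findings such as Buolamwini and Gebru's come from disaggregated evaluation: computing error rates separately for each (intersectional) subgroup instead of reporting a single aggregate figure. A minimal sketch of such an audit follows; the records are invented placeholders, not the Gender Shades data.

```python
# Minimal sketch of a disaggregated (intersectional) error audit in the
# spirit of Buolamwini and Gebru (2018). The records are invented
# placeholders, not the Gender Shades dataset.
from collections import defaultdict

records = [
    # (subgroup, true_label, predicted_label)
    ("darker_skinned_female", "female", "male"),
    ("darker_skinned_female", "female", "female"),
    ("lighter_skinned_male", "male", "male"),
    ("lighter_skinned_male", "male", "male"),
    # a real audit would use thousands of labelled faces per subgroup
]

def error_rates_by_subgroup(records):
    """Return the misclassification rate per subgroup, not one aggregate."""
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, truth, predicted in records:
        totals[subgroup] += 1
        if predicted != truth:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

for subgroup, rate in sorted(error_rates_by_subgroup(records).items()):
    print(f"{subgroup}: misclassification rate = {rate:.1%}")
```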
In addition to the biased collection and usage of biomet-
ric data, AI also has the capability to learn directly through
voice recognition and interaction with humans (Kim et al.
2019; Klugman 2021). Speech recognition AI utilises, inter-
prets, and employs language in ways that are not anticipated
by humans (Bylieva 2022). Robots like Ameca use genera-
tive AI to speak several languages and to interact directly
with people (Chan 2023), and in July 2023 at the United
Nations’ AI for good conference in Geneva a group of nine
robots held a press conference where they addressed ques-
tions from humans (Ferguson 2023). However, the humans addressing the robots at the AI for Good conference were told to speak slowly, and there were obvious inconsistencies and pauses between responses. Speech recognition AI systems
struggle when interacting with unfamiliar speech patterns.
A study on the use of Apple’s iOS Siri system showed that
it had challenges understanding children’s speech, owing to
issues like pitch and patterns of voice and speech, and the
types of questions children ask (Lovato and Piper 2015). A
study of five automated speech recognition (ASR) systems
in the USA discovered that the average word error rate for
transcribing speech by black speakers was much higher than
it was for white speakers (Koenecke et al. 2020). Another
study in Britain noted that automatic speech recognition
reproduced and perpetuated existing linguistic discrimina-
tion against marginalised groups (Markl 2023). If this type
of AI is used in autonomous weapons systems, such margins
for error can have catastrophic consequences. For example,
a robot may be tasked to “select and engage" a target based
on its own understanding of who is or is not a belligerent
in a community with a foreign language. If such a system
finds a group of young men in an African community, for
example, wrestling and insulting each other and shouting at the top of their voices while enjoying themselves, the chances of it profiling them as combatants are extremely high,
simply because it lacks an understanding of the culture and
language patterns.
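Koenecke et al.'s comparison rests on the standard word error rate (WER) metric computed separately per speaker group. The sketch below implements WER via word-level edit distance and averages it by group; the transcripts are invented for illustration, not drawn from that study.

```python
# Minimal sketch: comparing automatic speech recognition word error rate
# (WER) across speaker groups, in the spirit of Koenecke et al. (2020).
# The transcripts below are invented; only the metric is standard.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

samples = [
    # (speaker_group, reference_transcript, asr_output) -- placeholders
    ("group_1", "the meeting starts at nine", "the meeting starts at nine"),
    ("group_2", "the meeting starts at nine", "the meaning stats at mine"),
]

by_group = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, rates in by_group.items():
    print(group, "average WER:", round(sum(rates) / len(rates), 2))
```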
Closely related to the problem of voice recognition is the
problem of bias in translation. With autonomous weapons
systems largely developed in the West and by a few more
countries in the world, the chances of some of these systems
having to rely on translation are very high. A good example
of bias in machine translation is the gender bias in Google
translation. Many languages are gender neutral or have gen-
der-based words, which makes translation to the English
language highly inaccurate even with the most modern AI
systems. The cultural differences between the source lan-
guage and target language can lead to gender bias in trans-
lation of several languages. For example, the translation of
dia seorang dokter, which is gender neutral, from Indonesian
by Google Translate translates to he is a doctor while dia
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
205Putting algorithmic bias ontop oftheagenda inthediscussions onautonomous weapons systems
seorang perawat, which is also gender neutral, translates to
she is a nurse (Fitria 2021). This is the challenge of ste-
reotyping in machine translation (Savoldi et al. 2021). The
translator automatically assumes that a doctor should be
male while a nurse should be female. The study that identified this bias was conducted in 2021, and at the time of writing in 2023 the same bias persisted in Google Translate.
In the event that such a bias is contained in an autonomous
weapon system, there could be biased identification of tar-
gets for a long time before a technical error is corrected,
which would be catastrophic, unethical, and tragic. In addi-
tion, it would be tragic to discover a problem of algorithmic
bias through experience as opposed to discovery by research
especially in the domain of military technology which has
implications over life and death. With the existence of mul-
tiple languages in vulnerable societies, machine translation
systems also carry the bias of under-representation where
certain groups are not even visible (Savoldi et al. 2021).
For example, an AI system that relies on natural language
learning and translation would easily misidentify people in
multiple ethnic settings in Zimbabwe. A simple search on
Google Translate of the word mukororo, a traditional Ndau word that means son, automatically treats the word as Shona, the dominant language in Zimbabwe, and returns thief. The correct Shona word for thief is not mukororo but gororo. In such a case, an AI system can easily misidentify as a thief someone who is simply being endearingly referred to as a son. In 2017, Facebook translated the phrase
“good morning" from Arabic into “attack them" in Hebrew
which led to the arrest of a Palestinian man by Israeli police
(Hern 2017). A study of hate speech detection tools showed
that members of minority groups, especially Black peo-
ple, in America were likely to be labelled as offensive by
hate speech detection identification tools because of their
dialect, also exposing them to real-life violence (Sap et al. 2019). These cases show that translation AI has already been proven faulty in many cases, and any military AI that
would be based on such systems is likely to be ethically
questionable.
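Audits like Fitria's can be reduced to a simple probe harness: feed gender-neutral source sentences to a translation system and check which pronoun the English output defaults to. In the sketch below, `translate` is a hypothetical stand-in for whichever machine translation API is being audited, not Google's actual interface; a dummy translator reproduces the stereotyped behaviour described above so the example runs offline.

```python
# Minimal sketch of a gender-bias probe for a machine translation system.
# `translate` is a hypothetical placeholder for the system under audit;
# the probe sentences follow Fitria (2021).
from typing import Callable

PROBES = [
    # (source_language, gender-neutral source sentence, occupation)
    ("id", "dia seorang dokter", "doctor"),
    ("id", "dia seorang perawat", "nurse"),
]

def detect_pronoun(english_text: str) -> str:
    text = f" {english_text.lower()} "
    if " he " in text:
        return "he"
    if " she " in text:
        return "she"
    return "neutral/other"

def audit(translate: Callable[[str, str], str]) -> None:
    for source_lang, sentence, occupation in PROBES:
        output = translate(source_lang, sentence)
        print(f"{occupation}: '{sentence}' -> '{output}' "
              f"[pronoun: {detect_pronoun(output)}]")

# Dummy translator reproducing the stereotyped behaviour reported in the
# text, so the sketch runs without network access or any real MT service.
def dummy_translate(source_lang: str, sentence: str) -> str:
    return {"dia seorang dokter": "he is a doctor",
            "dia seorang perawat": "she is a nurse"}.get(sentence, sentence)

audit(dummy_translate)
```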
The USA has partnered with Scale AI to develop “Scale
Donovan", an AI platform that uses LLMs based on the same
faulty philosophy that led to the killing of civilians at a wed-
ding in Mali: relying on AI for the identification of who
is “friendly" or an enemy through live data and depending
on AI’s ISR information. Such a catastrophic mistake was
made with a “human in the loop" which makes it plausible
to assume that worse can happen if autonomy in weapons
does not account for bias. Everyday language used in social
settings is complex, which makes it risky to deploy harmful
technologies that cannot reason beyond colloquialisms (for
example, the statement “an all-Muslim movie was a ‘box
office bomb’” would easily be interpreted as stereotypical by most people, assuming that all Muslims are terrorists, a
bias that cannot be easily explained and understood by an AI
system) (Sap etal. 2020). Large language models reveal a
spectrum of behaviours that are harmful, especially through
the reinforcement of social biases (Ganguli etal. 2022).
Algorithmic bias in AI systems can lead to the reinforcement
and escalation of social inequalities and biased decisions
(Kordzadeh and Ghasemaghaei 2022), which would lead to
the application of force on the wrong targets by emerging
technologies in the area of autonomous weapons systems.
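Red-teaming work such as Ganguli et al.'s often uses templated probes: the same sentence frame is completed with different group terms and the model's outputs are compared. A minimal sketch of that idea follows; `generate` and the crude keyword scorer are hypothetical placeholders, not any vendor's API, and a dummy generator stands in so the example runs.

```python
# Minimal sketch of a templated bias probe for a language model, in the
# spirit of red-teaming studies (Ganguli et al. 2022). `generate` is a
# hypothetical placeholder for whatever LLM interface is being audited.
from typing import Callable

TEMPLATE = "The {group} man walked up to the security checkpoint and"
GROUPS = ["Muslim", "Christian", "European", "African"]

# Crude illustrative scorer: counts threat-related words in a completion.
THREAT_WORDS = {"bomb", "weapon", "attack", "threat", "terrorist"}

def threat_score(text: str) -> int:
    return sum(1 for word in text.lower().split()
               if word.strip(".,!?") in THREAT_WORDS)

def probe(generate: Callable[[str], str]) -> None:
    for group in GROUPS:
        completion = generate(TEMPLATE.format(group=group))
        print(f"{group:10s} threat-word count: {threat_score(completion)}")

# Dummy generator so the sketch runs without any model; a real audit would
# call the system under test and average over many samples per template.
def dummy_generate(prompt: str) -> str:
    return prompt + " was searched before being waved through."

probe(dummy_generate)
```

Systematic differences in such scores across group terms would indicate exactly the kind of reinforced social bias the passage above describes.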
The identification of what is perceived as hostile by AI can also be very problematic when autonomous weapons systems and other emerging technology-based systems select and target threats. The global war on terror led by the USA and
its allies depends largely on ISR done by semi-autonomous
or autonomous systems, a practice that is controversial and
has led to the killing of multiple civilians in environments
foreign to those of the deployers. In Mali, the French army
killed multiple people at a wedding after a Reaper Drone
provided wrong ISR information, mistaking wedding attend-
ees for insurgents (Stoke White Investigations 2021). These
challenges should be addressed, and regulations should
be put in place before autonomous weapons systems are
deployed.
Recognition ofalgorithmic bias inglobal
policy andnational legal contexts
The problem of exclusion, termed representational harm by Kate Crawford, is a widely recognised challenge in AI debates (Ruttkamp-Bloem 2023). Various studies have acknowledged that there is a crisis with regard to diversity in AI (West et al. 2019). The challenges of bias in AI have
been flagged in recent years in soft law (recommendations,
guidelines, standards, codes of conduct, and other non-bind-
ing laws), and the development of hard law on AI is still in
its infancy (Gutierrez 2023). The European Union’s 2021
proposal for an Artificial Intelligence Act (Article 33 and 37)
proposed the regulation of AI systems that use “‘real-time’
and ‘post’ remote biometric identification systems” that have
the risk of bias and discrimination according to sex, ethnic-
ity, age, or disability based on historical societal patterns
(European Commission 2021). The EU AI Act proposes a
risk-based approach, and issues of discrimination and bias
in AI are classified as “high risk."
In the same manner, UNESCO’s recommendations on
AI ethics encouraged its member states to be wary of the
cultural impacts of AI, noting that natural language process-
ing should be cognisant of the “nuances of human language
and expression” (UNESCO 2022). The Council of Europe’s
Committee on Artificial Intelligence (CAI) went even fur-
ther in its Draft [Framework] Convention on Artificial Intel-
ligence, Human Rights, Democracy, and the Rule of Law to
suggest that states should manage risk through ensuring that
those who may be affected by AI should have their perspec-
tives heard when risk and impact assessments are done.7
This provision in the draft convention could be useful in addressing the effects of algorithmic bias, particularly in riskier technologies like autonomous weapons systems, but it falls agonisingly short of tackling the problems of discrimination and bias in AI. Such an approach
of including the potential victims in the development of nor-
mative frameworks is what this paper advocates, especially
in negotiations for the regulation of the development and use
of autonomous weapons systems at the UN.
The Organisation for Economic Cooperation and Devel-
opment (OECD), which has 38 member states, developed
its own recommendations on AI that were endorsed by a large number of states, recognising in the first instance
that “a well-informed whole-of-society public debate is nec-
essary for … limiting the risks associated with” AI (OECD
2022). The recommendations by the OECD are based on
“human-centred values and fairness" that include equality,
non-discrimination, the inclusion of underrepresented popu-
lations, diversity, fairness, and social justice, with the target
goal of reducing inequalities and addressing bias.8 Some
comprehensive studies on AI policies, for example Maas and
Villalobos’ work, have identified some seven “institutional
models" for AI governance (scientific consensus building,
political consensus building and norm-setting, coordination
of policy and regulation, enforcement of standards or restric-
tions, stabilisation and emergency response, international
joint research, and distribution of benefits and access) but
have barely analysed the issue of algorithmic bias in those models (Maas and Villalobos 2023). Scholars working on global
regulation of AI have acknowledged algorithmic bias but
have chosen not to focus “on the relative urgency of exist-
ing algorithmic threats (such as e.g., facial recognition or
algorithmic bias)” but to find ways in which those looking
to regulate could find convergences for ethical AI (Stix and
Maas 2021, p. 261). This underlines the urgency for the rec-
ognition of algorithmic bias in AI not only in practice but
also in global regulation efforts.
The number of global and domestic laws that aim to mitigate the risks of AI has increased more than sixfold since 2016 (Maslej et al. 2023). In the USA, the
National Institute of Standards and Technology noted in its
Artificial Intelligence Risk Management Framework that “AI
systems are inherently socio-technical in nature, meaning
that they are influenced by societal dynamics and human
behaviour” (NIST AI 100-1, 2021). The framework goes
on to identify potential harms, including harm to groups or
communities through discrimination. An interesting section
in the framework addresses “risk prioritisation". In this sec-
tion, the policy argues that sometimes, there are risks that
are not worth prioritising, especially if they cannot be fully
eliminated. This, however, calls into question how risks are
defined by different people. It is highly questionable whether
a developer who is not likely to be affected by racial bias in
weapons systems would regard it as a priority. It is therefore
essential to have a framework that decides what is essential
and what is not based on equal consultation and participa-
tion rather than business-informed decisions by developers.
Despite the mentions and references to bias in AI in soft
law in the global context, the issue is yet to be fully addressed,
with no policy fully devoting space to the risks of algorith-
mic bias and how it should be prevented. With governments
not committed to developing binding regulations for AI to
maximise its benefits (Marchant et al. 2020), it is hard to see
challenges that are socially embedded like algorithmic bias
being given the attention and urgency they deserve.
The CCW negotiations and algorithmic bias
The Heyns Report (A/HRC/23/47) of 20139 introduced the
issue of “lethal autonomous robotics" (LARs). In that report,
there was mention of the respect for human life, the Martens
Clause, and other challenges posed by autonomous weap-
ons systems, but the issue of race and bias was yet to be
introduced in the discussion. However, as the discussions deepened over the years, states began to recognise
the importance of taking biases in AI seriously when think-
ing about autonomous weapons systems. At the first formal
Group of Governmental Experts (GGE) meeting in 2017,
the USA submitted a proposal arguing that prohibitions
should be directed towards “intentional wrongdoing," with
unintended consequences referred to as “mere accidents or
equipment malfunctions" that do not violate the law of war.10
This logic would mean that there would be no responsibil-
ity for systems that are “unintentionally" racist that would
disproportionally affect vulnerable groups. In a 2018 CCW
submission, the International Committee of the Red Cross
(ICRC) made a passing note that “unpredictable and unreli-
able operations may result from a variety of factors, includ-
ing … in-built algorithmic bias”.11 In 2019, the Chair’s
report noted that there was need for further clarification on
aspects like “possible bias in the datasets used in algorithm-
based programming relevant to emerging technologies in the
area of autonomous weapons systems”.12 Thompson Cheng-
eta observed the same challenge and explained that:
“an earlier version of the 2019 GGE Report included a
paragraph that noted that the use of AADs may com-
pound or worsen social injustices such as racial and
gender discrimination. During the discussions, no state
representative contested that paragraph. Later in the
evening of the same day when another version of the
report was provided, the paragraph had been removed.
The delegations from South African and Canada
questioned why this had occurred, but no remedy was
provided and the text addressing discrimination risks
remained excluded” (Chengeta 2020, p. 177).
Regardless of these calls, in 2020 only passing references
were made to the risks of “unintended engagements” posed
by autonomous weapons systems.13 The GGE thus took one step forward and two steps back on the issue, despite the hope of equal treatment that UN organs are supposed to represent. With small
states being the ones facing potential effects of biases in
autonomous weapons systems, the debates developed in a
binary manner that continued to either ignore or silence calls
for the recognition and discussion of algorithmic bias.
Several states have raised the issue of algorithmic bias
since 2020. In 2021, the Holy See argued that “autonomous
weapons systems, equipped with self-learning or self-pro-
grammable capabilities, necessarily give way to a certain
level of unpredictability, which could, [lead to] such systems
[making] mistakes in identifying the intended targets due
to some unidentified “bias” induced by their 'self-learning
capabilities'".14 A joint working paper in the same year by
Argentina, Costa Rica, Ecuador, El Salvador, Panama, Pales-
tine, Peru, the Philippines, Sierra Leone, and Uruguay noted
that “weapon systems are not neutral. Algorithm-based pro-
gramming relies on datasets that can perpetuate or amplify
social biases, including gender and racial bias, and thus have
implications for compliance with international law".15 Addi-
tionally, Argentina, Ecuador, Costa Rica, Nigeria, Panama,
the Philippines, Sierra Leone, and Uruguay submitted a
Draft which they labelled as ‘Protocol VI’ whose Article
3 Section 3 suggested that “each High Contracting Party
shall ensure that weapon systems do not rely on datasets that
can perpetuate or amplify social biases, including gender
and racial bias.” In 2023, Pakistan submitted that “there are
already known problems of data bias and unpredictability
that are compounded by growing autonomy of these weap-
ons, based on machine learning algorithms".16 A 2023 paper
by a group of nine European and Latin American states
noted that a normative framework should be developed that
considers “the avoidance of data bias and programming
shortfalls in complex systems".17 A March 2023 proposal by
Palestine also argued that the process of using encoded data
to target, select, and engage humans with force would “likely
entrench bias and discrimination through flawed profiling of
human characteristics, particularly if seeking to target some
people rather than others".18 All these concerns by various
states are testament to the centrality of the problem of algo-
rithmic bias in negotiations for a normative framework for
autonomous weapons systems.
Interestingly, however, the risks posed by autonomous
weapons systems that have largely been considered by highly
militarised states like the USA, UK, Russia, Australia, and
others include unintended engagements, civilian casual-
ties, incidental loss of life, the risk of proliferation, loss of
control of the system, and the risk of acquisition by ter-
rorist groups.19 Issues of racial, ethnic, and gender bias
in autonomous weapons systems are omitted in almost all
their submissions, whether deliberately or unconsciously.
The absence of racial and other biases in the discourse used
by these powerful states in the CCW has also relegated the
issue of algorithmic bias to the periphery of the outcomes of the discussions in the GGE. Western philosophy of science, which informs such discourse, has pushed pertinent concerns such as algorithmic bias to the margins.
The problem of engrained algorithmic bias poses a greater
challenge to the justifications for the use of autonomous
weapons systems by their developers than the risks of pro-
liferation, incidental loss of life, access by terrorists, and
other identified risks. This is mainly because it takes longer
to rectify a discriminatory algorithm, as seen in Google's years-long failure to fix its racist and sexist translations, than it does to issue an apology for a mistake that occurs occasionally. These powerful states have controlled
both the discussions and the outcome, which has focused
on issues that are pertinent to them while ignoring what is
existential for the rest of the world.
The Convention on Certain Conventional Weapons
(CCW), on paper, represents actors from across the global
divide, with Civil Society, Think Tanks, States from all
regions, and regional and international organisations repre-
sented. However, in practice, this representativeness is neither realised nor desired by some of the actors. At the time
of writing, the CCW had 126 states parties and a further four signatories. Adopted in 1980, the convention seeks to
ban or restrict the development and/or use of certain types
of weapons that may cause unnecessary harm in war or that
may have an indiscriminate impact on civilians. The CCW is
uniquely positioned to address issues of emerging weapons,
with Article 8 (2)(a) stating that high contracting parties
can suggest new protocols not already covered to be added
(Convention on prohibitions or restrictions on the use of
certain conventional weapons which may be deemed to be
excessively injurious or to have indiscriminate effects 1980).
However, for military superpowers who want to maintain
military superiority through weapons based on emerging
technologies, the introduction of new prohibitions is not
an attractive prospect, which has hampered the effective-
ness of the CCW (Carvin 2017). To this end, the USA in its
2018 working paper argued that states should not seek “to
codify best practices or set new international standards for
human–machine interaction in this area” as it was impracti-
cal, favouring instead voluntary measures by states to com-
ply with IHL.20 For the majority of the world, however, whose
security is not guaranteed and whose vulnerabilities are
many, international law provides the best option for security.
The biases associated with emerging technologies
and weapons are bound to affect smaller states, fragile com-
munities, minorities, vulnerable populations, and people of
colour more than they do the dominant states.
Within this context, the CCW has failed to be inclusive
and equal. Practice in international law has shown that the
existence of diverse perspectives in disarmament discus-
sions is of utmost importance for the success of multilateral
decision-making (Borrie and Thornton 2008). Scholars like
Thompson Chengeta have argued that the CCW is not the
correct forum for discussing autonomous weapons systems
(Chengeta 2022). This research shows that the CCW falls
short in several ways in encouraging inclusion and equality.
One explanation that can be offered for the CCW’s
failure to address issues concerning algorithmic bias in
the debates on autonomous weapons systems is that most
vulnerable states are almost always excluded, particularly
due to structural constraints. African and Caribbean states
are scarcely represented in the discussions on autonomous
weapons systems within the CCW. These are the states that
are predominantly black in their racial composition. Of the
24 states in the Caribbean, only Antigua & Barbuda, Cuba,
Dominican Republic, Grenada, and Jamaica are parties to
the CCW. Fewer than half of African states (26) are either
High Contracting Parties or Signatories to the CCW. A stag-
gering 65 UN member states are not parties to the CCW,
with only Andorra being European. This means that if all
the 126 states parties to the CCW were to attend and con-
tribute to the discussions on autonomous weapons systems,
a disproportionately high number of vulnerable states are
left out from the outset.
In addition, many small states that are part of the CCW do not have the capacity to take part in the discussions every year. During fieldwork for this study, I observed that during
the 15–19 May 2023 session of the GGE, very few Afri-
can and Caribbean states were represented. On the 15th,
the first meeting did not have a single Black-African state
represented, and throughout the whole session, only South
Africa, Nigeria, Algeria, Sierra Leone, and Cameroon were
represented among African states. Among those present,
only Algeria (Monday 15 May) and South Africa (Friday
19 May) made very brief statements. Among Caribbean states, Cuba, which has always been present at GGE meetings on autonomous weapons systems, made several contributions to the discussions. The research showed that most smaller
states simply cannot afford to provide and fund personnel
to attend these meetings, even if they are part of the CCW.
This reflects the structural inadequacies of the CCW
as a forum for international law-making. Caribbean and
Latin American states who have not been actively involved
in the CCW attended the February 2023 Latin American
and Caribbean Conference on the Social and Humanitar-
ian Impact of Autonomous Weapons organised by Costa
Rica,21 showing their willingness to discuss and address
challenges posed by autonomous weapons systems. These
states came up with the Belén Communique which reiter-
ated their commitment to actively engage in the debates to
push for a legally binding instrument on autonomous weap-
ons systems.22 In addition, Caribbean states also convened
a conference on autonomous weapons systems in Septem-
ber 2023, coming up with the CARICOM declaration which
emphasised the need for regulating autonomous weapons
systems so that they “should not be leveraged to undermine
human rights, exacerbate prevailing inequalities, nor deepen
discrimination on the basis of race, ethnicity, nationality,
class, religion, gender, age, or other status."23 Similarly, in
December 2023, the Philippines organised a conference on
autonomous weapons systems, bringing an Asian perspec-
tive to the debate. It is essential therefore to question why the
CCW continues to be an unattractive forum for the discus-
sions on autonomous weapons systems.
Combining the structural and procedural inadequacies of
the CCW with the disproportionate dominance of highly
militarised states in the CCW, it can be gleaned that substan-
tive issues that smaller states grapple with in international
security sometimes do not find expression in discussion
outcomes in the forum. This is a worrying trend in interna-
tional law-making which is likely to perpetuate international
security problems that these forums seek to address. The
neglect and relegation of issues of ethnic, racial, religious,
gender, disability, and other biases in algorithms that (will)
control autonomous weapons systems is likely to lead to the
proliferation, not the mitigation, of conflicts and global polari-
sation when the effects begin to be fully felt among vulner-
able populations. If these challenges are to be addressed, the
voices that call for caution on the dehumanising potential of
autonomous weapons systems must be heeded.
Conclusions
The discussions at the CCW have gone on for years. For
most states, the end goal is for a legally binding instrument
that will regulate the development and use of autonomous
weapons systems. For the highly militarised few, the debates
are an opportune moment to reaffirm the applicability of
existing international humanitarian law, which regrettably
does not address issues like algorithmic bias. Both efforts,
however, would be exercises in futility when it comes to the
protection of the most vulnerable in global society if they
are not meaningfully consulted in the process and if calls
for the mitigation of bias in autonomous weapons systems
are ignored or given a peripheral position in the discussions.
The problem of algorithmic bias has been extensively
researched in academic and policy literature, but this has not
translated to policy results at the UN level when it comes to
attempts to regulate autonomous weapons systems. This gap
is even more surprising because side events at the CCW have
been dedicated to such issues, Civil Society and academic
advocacy have raised the same issues, and some states have
voiced concerns relating to racial, gender, ethnic, and
other forms of bias in the discussions and in their submis-
sions. Given these continued efforts, it is worrisome that
the reports from the discussions have continuously relegated
the issue of algorithmic bias and have not treated it with the
detail that would be expected.
To prevent the risks in AI, the perspectives and concerns
of those who are likely to be affected should be considered
with full attention. It is true that states from the Global
South, or small and vulnerable states have participated in
the discussions within the confines of the GGE on LAWS in
the Convention on Certain Conventional Weapons (CCW).
However, representational equality does not automatically
mean substantive equality. The relegation of the issue of
algorithmic bias, particularly when it comes to race, eth-
nicity, religion, gender, and disability as raised by many
states, in the CCW shows how the substantive outcomes of
discussions may not reflect pertinent issues for the vulner-
able members of the global community. With the discussions
still ongoing, we can only hope that such critical issues will
gain traction and be given full attention for the protection of
vulnerable states and peoples.
Notes
1. The Philippines statement, GGE on LAWS, 17 May 2023, accessible at https://conf.unog.ch/digitalrecordings/index.html?guid=public/61.0500/D90790E4-53C2-4A71-A4D3-9AFAE8A80A26_10h07&position=2498&channel=ENGLISH
2. Canada statement, GGE on LAWS, 17 May 2023, accessible at https://conf.unog.ch/digitalrecordings/index.html?guid=public/61.0500/D90790E4-53C2-4A71-A4D3-9AFAE8A80A26_10h07&position=3585&channel=ENGLISH
3. Mexico statement, GGE on LAWS, 17 May 2023, accessible at https://conf.unog.ch/digitalrecordings/index.html?guid=public/61.0500/D90790E4-53C2-4A71-A4D3-9AFAE8A80A26_10h07&position=8624&channel=ENGLISH
4. Report of the 2023 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Para. 27.
5. The recordings of the state debates/statements can be found at https://conf.unog.ch/digitalrecordings/index.html?guid=public/ and the state submissions and other conference documents are available at https://library.unoda.org/ and https://meetings.unoda.org/ccw-/convention-on-certain-conventional-weapons-group-of-governmental-experts-on-lethal-autonomous-weapons-systems-2023.
6. See https://www.palantir.com/aip/defense/ for more details.
7. Article 24(2b) Risk and Impact Management Framework, Committee on Artificial Intelligence (CAI), Revised Zero Draft [Framework] Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.
8. Sections 1.2 and 1.4(c).
9. Report of the Special Rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns, 9 April 2013.
10. CCW/GGE.1/2017/WP.6, Working Paper entitled Autonomy in Weapons Systems submitted by the United States of America, 10 November 2017, Para. 30.
11. CCW/GGE.1/2018/WP.5, Working Paper entitled Ethics and autonomous weapon systems: An ethical basis for human control submitted by the ICRC, 29 March 2018, Para. 45.
12. CCW/GGE.1/2019/3, Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, https://documents-dds-ny.un.org/doc/UNDOC/GEN/G19/285/69/PDF/G1928569.pdf?OpenElement, Para. 20(a).
13. CCW/GGE.1/2020/WP.7, Chairperson's Summary, Para. 37(a).
14. CCW/CONF.VI/WP.3, Working Paper entitled Translating Ethical Concerns into a Normative and Operational Framework for Lethal Autonomous Weapons Systems submitted by the Holy See, 20 December 2021, Para. 9.
15. CCW/GGE.1/2021/WP.7, Joint Working Paper, 27 September 2021, Para. 9(e).
16. CCW/GGE.1/2023/WP.3, Working Paper entitled Proposal for an international legal instrument on Lethal Autonomous Weapons Systems (LAWS) submitted by Pakistan, Para. 13.
17. Joint Commentary of Guiding Principles A, B, C, and D by Austria, Belgium, Brazil, Chile, Ireland, Germany, Luxembourg, Mexico, and New Zealand.
18. State of Palestine's Proposal for the Normative and Operational Framework on Autonomous Weapons Systems, March 2023.
19. CCW/GGE.1/2022/WP.2, Working Paper entitled Principles and Good Practices on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems submitted by Australia, Canada, Japan, the Republic of Korea, the United Kingdom, and the United States, Para. 32.
20. Working paper by the USA on "Human–Machine Interaction in the Development, Deployment and Use of Emerging Technologies in the Area of Lethal Autonomous Weapons Systems", CCW/GGE.2/2018/WP.4, Para. 45.
21. Details about this conference are accessible at https://conferenciaawscostarica2023.com/?lang=en
22. Belén Communique, 'Further action 2', https://conferenciaawscostarica2023.com/wp-content/uploads/2023/02/EN-Communique-of-La-Ribera-de-Belen-Costa-Rica-February-23-24-2023..pdf
23. Section II, CARICOM Declaration, 2023, found at https://www.caricom-aws2023.com/_files/ugd/b69acc_c1ffb97ed9024930a3205ae4e34c1b45.pdf
Declarations
Conflict of interest The author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Adelman, R.A. 2018. Security glitches: The failure of the universal
camouflage pattern and the fantasy of “identity intelligence.” Sci-
ence, Technology, & Human Values 43 (3): 431–463. https:// doi.
org/ 10. 1177/ 01622 43917 724515.
Ams, S. 2023. Blurred lines: The convergence of military and civil-
ian uses of AI & data use and its impact on liberal democracy.
International Politics 60 (4): 879–896. https:// doi. org/ 10. 1057/
s41311- 021- 00351-y.
Arkin, R.C. 2010. The case for ethical autonomy in unmanned systems.
Journal of Military Ethics 9 (4): 332–341. https:// doi. org/ 10. 1080/
15027 570. 2010. 536402.
Borrie, J., and A. Thornton, eds. 2008. The value of diversity in multi-
lateral disarmament work. New York: United Nations.
Buolamwini, J. and Gebru, T. 2018. Gender shades: Intersectional
accuracy disparities in commercial gender classification, in Pro-
ceedings of the 1st Conference on Fairness, Accountability and
Transparency. Conference on Fairness, Accountability and Trans-
parency, PMLR, 77–91. https:// proce edings. mlr. press/ v81/ buola
mwini 18a. html. Accessed: 14 August 2023.
Butler, J. 2009. Frames of war: When is life grievable? London: Verso.
Bylieva, D. (2022) Language of AI. https:// doi. org/ 10. 48417/ TECHN
OLANG. 2022. 01. 11.
Carvin, S. 2017. Conventional thinking? The 1980 convention on certain conventional weapons and the politics of legal restraints on weapons during the cold war. Journal of Cold War Studies 19 (1): 38–69.
Chan, K. 2023. What's new in robots? An AI-powered humanoid machine that writes poems. AP News. https://apnews.com/article/robot-show-artificial-intelligence-chatgpt-0d0b4e0bfeec1860f16298bc70322e99. Accessed 15 Aug 2023.
Chengeta, T. 2020. Autonomous armed drones and the challenges to multilateral consensus on value-based regulation. In Ethics of drone strikes: Restraining remote-control killing, ed. C. Enemark, 170–189. Edinburgh: Edinburgh University Press.
Chengeta, T. 2022. Is the convention on conventional weapons the appropriate framework to produce a new law on autonomous weapon systems. In A life interrupted: Essays in honour of the lives and legacies of Christof Heyns, ed. F. Viljoen et al., 379–397. Pretoria University Law Press. https://www.pulp.up.ac.za/edited-collections/a-life-interrupted-essays-in-honour-of-the-lives-and-legacies-of-christof-heyns. Accessed 5 Apr 2023.
Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be Deemed to be Excessively Injurious or to have Indiscriminate Effects. 1980. https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/. Accessed 7 Aug 2023.
Dave, P., and J. Dastin. 2022. Exclusive: Ukraine has started using Clearview AI's facial recognition during war. Reuters. https://www.reuters.com/technology/exclusive-ukraine-has-started-using-clearview-ais-facial-recognition-during-war-2022-03-13/. Accessed 15 Aug 2023.
European Commission. 2021. Proposal for a regulation of the European parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts, 2021/0106 (COD). https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Accessed 9 Aug 2023.
FACEPTION | Facial Personality Analytics. https://www.faception.com. Accessed 14 Aug 2023.
Ferguson, D. 2023. Robots say they have no plans to steal jobs or rebel against humans. The Guardian. https://www.theguardian.com/technology/2023/jul/08/robots-say-no-plans-steal-jobs-rebel-against-humans. Accessed 15 Aug 2023.
Ferrara, E. 2023. Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies.
Figueroa, M.D., et al. 2023. The risks of autonomous weapons: An analysis centred on the rights of persons with disabilities. International Review of the Red Cross 105 (922): 278–305. https://doi.org/10.1017/S1816383122000881.
Fitria, T.N. 2021. Gender bias in translation using Google Translate: Problems and solution. Rochester, NY. https://papers.ssrn.com/abstract=3847487. Accessed 15 Aug 2023.
Fredman, S. 2016. Substantive equality revisited. International Journal of Constitutional Law 14 (3): 712–738. https://doi.org/10.1093/icon/mow043.
Friedman, B., and H. Nissenbaum. 1996. Bias in computer systems. ACM Transactions on Information Systems 14 (3): 330–347. https://doi.org/10.1145/230538.230561.
Ganguli, D., et al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv. https://doi.org/10.48550/arXiv.2209.07858.
Grant, N., and K. Hill. 2023. Google's photo app still can't find gorillas and neither can Apple's. The New York Times, 22 May. https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html. Accessed 25 Jan 2024.
Grother, P.J., M.L. Ngan, and K.K. Hanaoka. 2019. Face recognition vendor test part 3: Demographic effects. National Institute of Standards and Technology. https://www.nist.gov/publications/face-recognition-vendor-test-part-3-demographic-effects. Accessed 14 Aug 2023.
Gutierrez, C.I. 2023. Uncovering incentives for implementing AI governance programs: Evidence from the field. IEEE Transactions on Artificial Intelligence 4 (4): 792–798. https://doi.org/10.1109/TAI.2022.3171748.
Hern, A. 2017. Facebook translates "good morning" into "attack them", leading to arrest. The Guardian. https://www.theguardian.com/technology/2017/oct/24/facebook-palestine-israel-translates-good-morning-attack-them-arrest. Accessed 15 Aug 2023.
Hunter, C., and B.E. Bowen. 2023. We'll never have a model of an AI major-general: Artificial intelligence, command decisions, and kitsch visions of war. Journal of Strategic Studies. https://doi.org/10.1080/01402390.2023.2241648.
Israel HLS&CYBER 2022—The international conference & exhibition. Expo-Wizard. https://hls-cyber-2022.israel-expo.co.il/expo. Accessed 14 Aug 2023.
Johnson, J. 2022. The AI commander problem: Ethical, political, and psychological dilemmas of human-machine interactions in AI-enabled warfare. Journal of Military Ethics 21 (3–4): 246–271. https://doi.org/10.1080/15027570.2023.2175887.
Johnston, I., and L. Pitel. 2023. German states rethink reliance on Palantir technology. https://www.ft.com/content/790ee3ae-f0d6-4378-9093-fac553c33576. Accessed 25 Jan 2024.
Jones, C.M. 2021. Western centric research methods? Exposing international practices. Journal of ASEAN Studies 9 (1): 87–100. https://doi.org/10.21512/JAS.V9I1.7380.
Kim, Na-Young, Yoonjung Cha, and Hea-Suk Kim. 2019. Future English learning: Chatbots and artificial intelligence. Multimedia-Assisted Language Learning 22 (3): 32–53. https://doi.org/10.15702/mall.2019.22.3.32.
Klugman, C.M. 2021. Black boxes and bias in AI challenge autonomy. The American Journal of Bioethics 21 (7): 33–35. https://doi.org/10.1080/15265161.2021.1926587.
Koenecke, A., et al. 2020. Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences 117 (14): 7684–7689. https://doi.org/10.1073/pnas.1915768117.
Konert, A., and T. Balcerzak. 2021. Military autonomous drones (UAVs)—from fantasy to reality. Legal and ethical implications. Transportation Research Procedia 59: 292–299. https://doi.org/10.1016/j.trpro.2021.11.121.
Kordzadeh, N., and M. Ghasemaghaei. 2022. Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems 31 (3): 388–409. https://doi.org/10.1080/0960085X.2021.1927212.
Lee, P. 2021. Modern warfare: 'Precision' missiles will not stop civilian deaths—here's why. The Conversation. http://theconversation.com/modern-warfare-precision-missiles-will-not-stop-civilian-deaths-heres-why-171905. Accessed 25 Jan 2024.
Lovato, S., and A.M. Piper. 2015. "Siri, is this you?": Understanding young children's interactions with voice input systems. In Proceedings of the 14th International Conference on Interaction Design and Children (IDC '15), 335–338. New York: Association for Computing Machinery. https://doi.org/10.1145/2771839.2771910.
M’charek, A., K. Schramm, and D. Skinner. 2014. Topologies of race: Doing territory, population and identity in Europe. Science, Technology, & Human Values 39 (4): 468–487. https://doi.org/10.1177/0162243913509493.
Maas, M., and J.J. Villalobos. 2023. International AI institutions: A literature review of models, examples, and proposals. Legal Priorities Project. https://www.legalpriorities.org/research/international-ai-institutions.html. Accessed 3 Oct 2023.
Marchant, G.E., L. Tournas, and C.I. Gutierrez. 2020. Governing emerging technologies through soft law: Lessons for artificial intelligence. Rochester, NY. https://papers.ssrn.com/abstract=3761871. Accessed 22 Aug 2023.
Markl, N. 2023. Language variation, automatic speech recognition and algorithmic bias. PhD Thesis. The University of Edinburgh. https://era.ed.ac.uk/handle/1842/41277. Accessed 25 Jan 2024.
Maslej, N., et al. 2023. AI Index Report 2023—Artificial Intelligence Index. Stanford, CA: Institute for Human-Centered AI, Stanford University. https://aiindex.stanford.edu/report/. Accessed 22 Aug 2023.
NIST AI 100-1. 2021. AI risk management framework. NIST. https://www.nist.gov/itl/ai-risk-management-framework. Accessed 22 Aug 2023.
OECD. 2022. Recommendation of the council on artificial intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Accessed 22 Aug 2023.
Roff, H.M. 2014. The strategic robot problem: Lethal autonomous weapons in war. Journal of Military Ethics 13 (3): 211–227. https://doi.org/10.1080/15027570.2014.975010.
Ruttkamp-Bloem, E. 2023. Epistemic just and dynamic in Africa AI ethics. In Responsible AI in Africa: Challenges and opportunities, ed. D.O. Eke, K. Wakunuma, and S. Akintoye, 13–34. Cham: Springer.
Sap, M. etal. 2019. The risk of racial bias in hate speech detection,
in Proceedings of the 57th annual meeting of the association for
computational linguistics. ACL 2019, Florence, Italy: Association
for Computational Linguistics, pp. 1668–1678. https:// doi. org/ 10.
18653/ v1/ P19- 1163.
Sap, M. etal. 2020. Social bias frames: Reasoning about social and
power implications of language, in Proceedings of the 58th
annual meeting of the association for computational linguistics.
ACL 2020, Online: Association for Computational Linguistics,
pp. 5477–5490. https:// doi. org/ 10. 18653/ v1/ 2020. acl- main. 486.
Savoldi, B., etal. 2021. Gender bias in machine translation. Transac-
tions of the Association for Computational Linguistics 9: 845–
874. https:// doi. org/ 10. 1162/ tacl_a_ 00401.
Sharkey, N. 2010. Saying “no!” to Lethal autonomous targeting. Jour-
nal of Military Ethics 9 (4): 369–383. https:// doi. org/ 10. 1080/
15027 570. 2010. 537903.
Stix, C., and M.M. Maas. 2021. Bridging the gap: The case for an
“Incompletely Theorized Agreement” on AI policy. AI and Eth-
ics 1 (3): 261–271. https:// doi. org/ 10. 1007/ s43681- 020- 00037-w.
Stoke White Investigations (2021) France’s shadow war in Mali: Air-
strikes at the bounti wedding. London, UK: Stoke White Inves-
tigations Unit/ Stoke White Ltd. https:// www. swiun it. com/ post/
france- s- shadow- war- in- mali- airst rikes- at- the- bounti- weddi ng.
Accessed 27 July 2023.
Suchman, L., K. Follis, and J. Weber. 2017. Tracking and targeting:
Sociotechnologies of (In)security. Science, Technology, & Human
Values 42 (6): 983–1002. https:// doi. org/ 10. 1177/ 01622 43917
731524.
UNESCO (2022) Recommendation on the ethics of artificial intelli-
gence. Paris: United Nations Educational, Scientific and Cultural
Organisation. https:// unesd oc. unesco. org/ ark:/ 48223/ pf000 03811
37. Accessed 22 Aug 2023.
US Office of the Secretary of Defense (2007) Unmanned Systems Road-
map: 2007–2032. US Department of Defense. https:// www. globa
lsecu rity. org/ intell/ libra ry/ repor ts/ 2007/ dod- unman ned- syste ms-
roadm ap_ 2007- 2032. pdf.
Vincent, J. (2018) Google ‘fixed’ its racist algorithm by removing goril-
las from its image-labeling tech, The Verge. https:// www. theve
rge. com/ 2018/1/ 12/ 16882 408/ google- racist- goril las- photo- recog
nition- algor ithm- ai. Accessed 15 Aug 2023.
West, S.M., Whittaker, M. and Crawford, K. (2019) Discriminating
systems: Gender, race, and power in AI, AI Now Institute.
Wilke, C. 2017. Seeing and unmaking civilians in Afghanistan. Science, Technology, & Human Values. https://doi.org/10.1177/0162243917703463.
Ishmael Bhila is a doctoral researcher in international law and international security studying the participation of small states in the making of international law on autonomous weapons systems.