Unpublished Paper, 16 September 2019
The militarization of artificial intelligence: a wake-up call for the Global South
Eugenio V. Garcia*
ORCID: https://orcid.org/0000-0002-7207-4653
Website: https://eugeniovargasgarcia.academia.edu
Abstract
The militarization of artificial intelligence (AI) is well under way and leading military powers
have been investing heavily in emerging technologies. Calls for AI governance at the
international level are expected to increase and the United Nations is well positioned to offer a
commonly agreed platform for prevention, foresight, and cooperation among states and other
stakeholders to address the impact of new technologies. A telling example for the area of
strategic studies is the work of the Group of Governmental Experts (GGE) on lethal autonomous
weapons systems, under the UN Convention on Certain Conventional Weapons (CCW), the
most important multilateral discussion on AI, peace and security today. The article makes the
case for further engagement from scholars, policymakers, and practitioners of the Global South
in the ongoing debate on AI policy and international relations to mitigate risks and adopt
governance tools, including norms, principles, regimes, institutions, and political commitments.
Keywords
Artificial intelligence; governance of new technologies; international security; lethal
autonomous weapons systems; Global South.
Artificial intelligence (AI) will have a profound impact upon world politics across the
board, as much as upon society generally. Converging technologies that often overlap and affect
each other have become drivers of innovation, as in the case of nanotechnology, biotechnology,
information technology, and cognitive science (NBIC). Usage of AI in business, government,
and everyday life has been developing at breakneck pace, fueled by powerful hardware capacity,
abundant data, and online training of self-learning algorithms. As the discipline of International
Relations (IR) takes on this expanding research field and its systemic implications, a coming AI
race has frequently been portrayed as a perilous trend capable of reshaping the character of
international security.[1]

* Senior Adviser on peace and security at the Office of the President of the United Nations General Assembly, New York. Diplomat, Ministry of Foreign Affairs of Brazil. PhD in History of International Relations, University of Brasilia. Researcher on international security, new technologies, and AI governance. The opinions in this paper are the sole responsibility of the author. Email: eugenio.vargasgarcia@un.org
An overview of state initiatives, strategies, and related governmental investments in AI
reported that at the time of writing fewer than 20 countries had active AI national plans and, not
surprisingly, almost all of them were developed countries.[2]
Indeed, there is no shortage of
analyses focusing upon great-power competition and an epic struggle for AI global supremacy,
featuring prominently the United States and China, followed closely by military powers and
wealthy nations (Russia, G7 countries, Israel, and others), and a few second-tier, aspiring tech
players, which may someday occupy a special place in the AI landscape.[3]
The Global South, however, is clearly underrepresented in this debate, with many areas
of Africa, Asia, Latin America and the Caribbean completely absent in terms of scholars,
politicians, and policymakers engaged in this vital conversation. And yet, when it comes to
assessing global risks in peace and security and identifying where new AI weapons are likely to be
deployed in the first place, both researchers and practitioners in the developing world have
reasons to be concerned. It is no wonder that proponents of a Global IR have been calling for
greater participation from actors of the non-Western world as a means to ‘bring the Rest in’.[4]
This article will take as a starting point the fact that the militarization of AI has already
begun and will pose many challenges beyond the realm of security itself. I argue that AI
governance is also bound to gain traction and the United Nations will eventually get further
involved in providing space for international cooperation and facilitating negotiations on how
to deal with controversies surrounding AI policymaking. In this context, diplomatic
deliberations in Geneva on lethal autonomous weapons systems are a revealing case-study of
opportunities and predicaments encountered along the way, insofar as they highlight inter alia
how critical it is for the Global South to be abreast of developments in this area, prepare in
advance, and act accordingly on distinct fronts.
[1] Christian Brose, War’s sci-fi future: the new revolution in military affairs, Foreign Affairs, vol. 98, n. 3, May-June
2019, p. 122-134; Kenneth Payne, Artificial intelligence: a revolution in strategic affairs? Survival, vol. 60, Issue 5,
2018, p. 7-32; Paige Gasser, et al. Assessing the strategic effects of artificial intelligence, Center for Global Security
Research, Lawrence Livermore National Laboratory, and Technology for Global Security, 2018,
https://www.tech4gs.org/assessing-the-strategic-effects-of-artificial-intelligence.html (access 22 April 2019);
Jeremy Rabkin and John Yoo, Striking power: how cyber, robots, and space weapons change the rules of war, New
York: Encounter Books, 2017.
[2] Campbell noted that ‘there are few states with AI national strategies or plans or significant investments in several
geographic regions across the globe, including: South America, Central America, Eastern Europe, Central Asia,
Southeast Asia, and Africa’. Thomas A. Campbell, Artificial intelligence: an overview of state initiatives, UNICRI
and FutureGrasp, 2019, cf. Executive Summary, http://www.unicri.it/in_focus/files/Report_AI-
An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf (access 29 July 2019).
[3] Kai-Fu Lee, AI superpowers: China, Silicon Valley, and the new world order, Boston: Houghton Mifflin Harcourt,
2018; Daniel Wagner and Keith Furst, AI supremacy: winning in the era of machine learning, Scotts Valley, CA:
CreateSpace, 2018; Michael Horowitz, Artificial intelligence, international competition, and the balance of power,
Texas National Security Review, Austin: University of Texas, 2018, https://tnsr.org/2018/05/artificial-intelligence-
international-competition-and-the-balance-of-power/ (access 21 November 2018); Patrick Tucker et al. The race for
AI: the return of great power competition is spurring the quest to develop artificial intelligence for military purposes,
Defense One ebook, March 2018, https://www.defenseone.com/assets/race-ai/portal/ (access 23 April 2019).
[4] Amitav Acharya and Barry Buzan, The making of global international relations: origins and evolution of IR at its
centenary, Cambridge: Cambridge University Press, 2019, p. 302.
The militarization of AI is well under way
Strategists and military advisers often claim that the militarization of AI is irreversible.[5]
As occurred with previous general-purpose technologies, such as electricity
or the combustion engine, armed forces will seek to incorporate AI-driven capabilities into their
organizational structure, operations, and weaponry. In reality, this is already the case in many
missile and rocket defense systems, anti-personnel sentry weapons, loitering munitions, and
combat air, sea, and ground vehicles.
A SIPRI report, released in 2017, mapped the development of autonomy in weapons
systems and concluded that autonomy is already used in a wide array of tasks,
including many connected to the use of force, such as supporting target identification, tracking,
prioritization, and selection of targets in certain cases.[6]
In 2019, another study corroborated these
findings by surveying current military research and development in autonomous weapons
conducted in seven countries (United States, China, Russia, United Kingdom, France, Israel, and
South Korea), which stand out among states most heavily involved in AI development in the
defense industry.[7]
In a move largely unnoticed by the general public, Russia sent to the International Space
Station, in August 2019, the Skybot F-850 humanoid robot, also called FEDOR (Final
Experimental Demonstration Object Research), which will allegedly undergo tests for missions
in outer space, such as helping to construct bases on the Moon. Remote-controlled by a
human operator wearing an exoskeleton, this is the same robot depicted in 2017 in online videos
as being able to walk, crawl, lift weights, use tools, drive a car, and shoot with two guns
simultaneously.[8]
AI military applications go well beyond gun-shooting androids, though. Their wide-
ranging scope will potentially unsettle all five domains of warfare (land, sea, air, outer space,
and cyberspace) and multiple dimensions pertaining to command, control, communications,
computers, intelligence, surveillance, and reconnaissance (C4ISR). Prompted by promises of
more efficient data analysis, better performance of warfare systems, and reduced costs, great
powers plan to spend vast resources in preparation for this impending future,
trying to pursue a strategic security advantage (and the economic benefits) that can allow them
to outperform their rivals, mainly in range, speed, and lethality.[9]

[5] A typical example is Michael Kolton, The inevitable militarization of artificial intelligence, The Cyber Defense Review, 8 February 2016, https://cyberdefensereview.army.mil (access 10 January 2018).
[6] Vincent Boulanin and Maaike Verbruggen, Mapping the development of autonomy in weapons systems, Stockholm: SIPRI, 2017, p. 115, https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf (access 4 April 2018).
[7] Frank Slijper et al. State of AI: artificial intelligence, the military and increasingly autonomous weapons, Utrecht: PAX, April 2019, https://www.paxvoorvrede.nl/media/files/state-of-artificial-intelligence--pax-report.pdf (access 20 May 2019).
[8] Humanoid robot ‘commands’ Russian rocket test flight, CBS News, 22 August 2019, https://www.cbsnews.com/news/humanoid-robot-passenger-russians-launch-key-rocket-test-flight-skybot-f-850-fedor/ (access 23 August 2019).
By far the world’s biggest power in terms of military expenditure and global firepower,
the United States has been planning and investing to stay ahead of strategic competitors and
secure its dominant place at the top. The Department of Defense launched in 2018 an Artificial
Intelligence Strategy to articulate its approach and methodology, to be implemented by the
newly-created Joint Artificial Intelligence Center (JAIC), for accelerating the adoption of AI-
enabled capabilities ‘to strengthen our military, increase the effectiveness and efficiency of our
operations, and enhance the security of the nation’.[10]
One of the goals of the Pentagon is to forge partnerships with the dynamic private sector,
Silicon Valley companies, and academic institutions leading in technological research, which are
not always keen to be associated in the public eye with military projects. The Joint Enterprise
Defense Infrastructure (JEDI), a multibillion-dollar contract meant to do precisely that, will
amass human, financial, and material resources to build a single cloud computing architecture
across military branches to connect US forces wherever they are in the world. Similarly
ambitious, Project Maven (Algorithmic Warfare Cross-Functional Team) will further develop
software and related systems to automate data collection globally by using machine learning to
scan drone video footage, thus expediting the analytical process to swiftly identify valuable
targets. In fact, Maven’s software has reportedly already been used in as many as six combat
locations in the Middle East and Africa.[11]
In the meantime, the Defense Advanced Research Projects Agency (DARPA) will allocate
up to US$ 2 billion over the next five years in AI weapons research. DARPA has already proved
instrumental in spurring cutting-edge research that produced tangible results, such as the
antisubmarine ship Sea Hunter, which has been tested at sea since 2016. The Sea Hunter is
thought to be the first of a whole new class of ships with trans-oceanic range, designed to
autonomously scan and detect enemy submarines, as well as carry out other military tasks with
no crew on board. For now, the experimental model does not carry armaments.[12]
[9] ‘Moving faster than your adversary enhances offensive mobility and makes you harder to hit. Striking from
further away similarly benefits the element of surprise and minimizes exposure to enemy fire. […] AI makes it
possible to analyze dynamic battlefield conditions in real time and strike quickly and optimally while minimizing
risks to one’s own forces.’ AI and the military: forever altering strategic stability, T4GS Reports, Technology for Global
Security and Center for Global Security Research, 13 February 2019, p. 7, http://www.tech4gs.org/ai-and-human-
decision-making.html (access 22 April 2019).
[10] Summary of the 2018 Department of Defense Artificial Intelligence Strategy: harnessing AI to advance our security and
prosperity, Washington, DC. 2018, p. 7-8, https://media.defense.gov/2019/Feb/12/2002088963/-1/-
1/1/Summary-of-DoD-AI-Strategy.pdf (access 5 August 2019).
[11] Weaponised AI is coming, are algorithmic forever wars our future? The Guardian, 11 October 2018,
https://www.theguardian.com/commentisfree/2018/oct/11/war-jedi-algorithmic-warfare-us-military (access 5
August 2019).
[12] The US Navy also has a demonstration aerial combat vehicle, the Northrop Grumman X-47B, which had its first
flight in 2011 and can autonomously take off and land on aircraft carriers. Following changes in the program’s
concept, Boeing won in 2018 a contract to develop four aerial refueling drones (MQ-25A Stingray) by 2024. Navy
picks Boeing to build MQ-25A Stingray carrier-based drone, USNI News, 30 August 2018,
https://news.usni.org/2018/08/30/navy-picks-boeing-build-mq-25a-stingray-carrier-based-drone (access 12 July
2019).
Although still not on a par with the United States, China is also restructuring its armed
forces and vigorously funding more projects on research and development of AI capabilities,
taking advantage of the existing ‘military-civil fusion’ that in practice makes governmental
agencies and private companies act in close coordination. This concerted effort is in line with
the anticipated ‘revolution in military affairs’, to which Chinese strategists wish to respond by
scaling up resources to prepare the military for the ‘intelligentization’ of warfare. There are plans
to boost autonomy for hazardous expeditionary capabilities, not only in outer space, but also in
deep sea exploration and polar missions both in the Arctic and in Antarctica.[13]
The People’s
Liberation Army ‘will likely leverage AI to enhance its future capabilities, including in
intelligent and autonomous unmanned systems; AI-enabled data fusion, information
processing, and intelligence analysis; war-gaming, simulation, and training; defense, offense,
and command in information warfare; and intelligent support to command decision-making’,
as summarized by Kania.[14]
Chinese leaders have made it clear that they are fully aware of the challenges to overcome
and will mobilize all means deemed necessary to be at the forefront of AI technology. Despite
concerns about a possible arms race on the horizon, they mostly see ‘increased military usage of
AI as inevitable’ and are ‘aggressively pursuing it’. In 2018, China’s Ministry of National
Defense established to that effect two new Beijing-based research organizations focused upon
AI and autonomous systems under the National University of Defense Technology (NUDT).[15]
Russia may not have the same economic leverage the United States and China possess to
finance high-tech ventures and start-ups, but its niche expertise and overall military capabilities
must be duly reckoned with. As two scholars remarked, notwithstanding a ‘smaller private
sector innovation ecosystem’, the Russian Defense Ministry has been committing significant
resources to AI, machine learning, and robotics to help its forces perform military tasks that
range from logistics to combat missions, chiefly in urban terrain, maximizing effectiveness by
facilitating navigation and maneuver, improving situational awareness, and enhancing precise
targeting.[16]
In practical terms, Russia tested in Syria many state-of-the-art weapons using emerging
technologies, some of them for the first time, including small autonomous vehicles for
intelligence, surveillance, and reconnaissance (Scarab and Sphera), in addition to a remote-
controlled mine-clearing vehicle (Uran-6). Other existing projects include a stealth heavy combat
aerial vehicle, the Okhotnik-B (Hunter), a sixth-generation aircraft under development by
Sukhoi, and designs for more advanced main battle tanks. As Sychev noted, ‘the fire control
system of the next-generation Russian T-14 tank, based upon the Armata universal heavy
crawler platform, will be capable of autonomously detecting targets and bombarding them until
they are completely destroyed’.[17]

[13] Elsa Kania, Chinese military innovation in artificial intelligence, Washington, DC: Testimony before the US-China Economic and Security Review Commission, CNAS, June 2019, p. 2-5 and 18-19, https://www.cnas.org/research/technology-and-national-security (access 5 August 2019).
[14] Elsa Kania, Battlefield singularity: artificial intelligence, military revolution, and China’s future military power, Washington, DC: CNAS, November 2017, p. 4, https://www.cnas.org/research/technology-and-national-security (access 6 August 2019).
[15] Gregory C. Allen, Understanding China’s AI strategy: clues to Chinese strategic thinking on artificial intelligence and national security, Washington, DC: CNAS, February 2019, p. 5-7, https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy (access 6 August 2019).
[16] ‘Concurrently, the Russian defense sector is actually working on an unmanned ground vehicle designed specifically to withstand tough urban combat conditions. The project called Shturm (Storm) is based on the T-72 tank chassis and features specific defensive technologies and offensive armaments for a city fight’. Margarita Konaev and Samuel Bendett, Russian AI-enabled combat: coming to a city near you? War on the Rocks, commentary, Texas National Security Review, 31 July 2019, https://warontherocks.com/2019/07/russian-ai-enabled-combat-coming-to-a-city-near-you (access 2 August 2019).
Israel also has autonomous or semi-autonomous weapons systems in operation, notably
the Iron Dome, the anti-missile defense system employed by the Israeli Defense Forces to
intercept short-range projectiles, particularly in areas adjacent to the Gaza Strip. The Protector
and the Silver Marlin are both autonomous surface vehicles manufactured by an Israeli
company for maritime patrol missions. Israel has also been developing the Carmel Program to
upgrade its ground forces and put into operation fast-moving armored vehicles, equipped with
multiple sensors and AI technology to detect threats and improve target engagement, weapon
system management, and automatic maneuvering.[18]
While the list of examples could well be extended, depending on the greater or lesser
level of autonomy for each weapons system, most states in the Global South are far from any
comparable build-up. This is true not only for military applications. In the long run, AI-driven
automation in rich countries of repetitive, labor-intensive jobs can arguably displace the
traditional comparative advantages of developing countries, such as a cheap workforce and raw
materials. The level of AI readiness will have a dramatic effect upon competitiveness and is
likely to be a critical factor in investments and GDP growth. The widening gap in prosperity
and wealth would mostly affect those countries unable to develop digital skills and
infrastructure to reap the rewards of AI opportunities in business performance, productivity,
and innovation.[19] If global inequality was once gauged in terms of have and have-not
countries, a new divide could be emerging between the AI-ready and the not-ready.[20]
AI risk, safety, and security may still seem disconnected from the day-to-day reality of
many developing countries struggling with more urgent problems related to poverty, hunger,
violence, underdevelopment, or environmental degradation, to name just a few. Even so, this
detachment will not insulate them from consequences, unintended or not, of damaging
situations originated elsewhere, as happens to be the case of climate change and vulnerable
small islands facing extreme flooding. In armed conflict, disruptive changes can alter the
correlation of forces very quickly. Even small improvements in speed and accuracy could
purportedly result in disproportionate tactical advantages on the ground. Strategic gains
obtained through AI applications are expected to be unevenly distributed and primarily favor
those countries ahead in research and development. Catching up may prove too hard to achieve
or come too late to be of any serious significance.[21]

[17] Vasily Sychev, The threat of killer robots, The UNESCO Courier, n. 3, Artificial intelligence: the promises and the threats, Paris, 2018, p. 28, https://unesdoc.unesco.org/ark:/48223/pf0000265211 (access 12 June 2019).
[18] Israel unveils new ground forces concept: fast & very high tech, Breaking Defense, 5 August 2019, https://breakingdefense.com/2019/08/israel-unveils-new-ground-forces-concept-fast-very-high-tech (access 7 August 2019).
[19] Cf. Digital economy report 2019, UNCTAD, 2019, https://unctad.org/en (access 6 September 2019); James Manyika and Jacques Bughin, The promise and challenge of the age of artificial intelligence, McKinsey Global Institute, 2018, p. 4, https://www.mckinsey.com/featured-insights/artificial-intelligence (access 4 March 2019).
[20] There are no Latin American or African countries in the top 20 ranking of government AI readiness. Cf. Government artificial intelligence readiness index 2019, Oxford Insights and Canada’s International Development Research Centre, 2019, https://ai4d.ai/wp-content/uploads/2019/05/ai-gov-readiness-report_v08.pdf (access 6 September 2019).
The long road to AI governance in peace and security
Even if we concede that the militarization of AI is here to stay, it is also true, but less
obvious, that AI governance in general cannot be rejected altogether: technical standards,
performance metrics, norms, policies, institutions, and other governance tools will probably be
adopted sooner or later.[22]
One should expect more calls for domestic legislation on civilian and
commercial applications in many countries, in view of the all-encompassing legal, ethical, and
societal implications of these technologies. In international affairs, voluntary, soft law, or
binding regulations can vary from confidence-building measures, gentlemen’s agreements, and
codes of conduct, including no first-use policies, to multilateral political commitments, regimes,
normative mechanisms, and formal international treaties.
Why is AI governance needed? To many, a do-nothing policy is hardly an option. In a
normative vacuum, the practice of states may push for a tacit acceptance of what is considered
‘appropriate’ from an exclusively military point of view, regardless of considerations based
upon law and ethics.[23]
Take, for that matter, a scenario in which nothing is done and AI gets
weaponized anyway. Perceived hostility increases distrust among great powers and even more
investments are channeled to defense budgets. Blaming each other will not help, since
unfettered and armed AI is available in whatever form and shape to friend and foe alike. The
logic of confrontation turns into a self-fulfilling prophecy. If left unchecked, the holy grail of this
arms race may end up becoming a relentless quest for artificial general intelligence (AGI), a
challenging prospect for the future of humanity, which is already raising fears in some quarters
of an existential risk looming large.[24]
Payne argued that changes in the psychological element underpinning deterrence are
among the most striking features of the AI revolution in strategic affairs: ‘Removing emotion
from nuclear strategy was not ultimately possible; artificial intelligence makes it possible, and
therein lies its true radicalism and greatest risk’.[25]
In other words, loosely unleashed, AI has the
power to raise uncertainty to the highest degrees, thickening Clausewitz’s fog of war rather than
dissipating it. The situation would become untenable if a non-biological AGI were ever
deployed for military purposes, virtually unaffected by typically human cognitive heuristics,
perceptions, and biases.

[21] In Payne’s view, this trend would favor ‘existing advanced industrial societies such as the US, Europe and perhaps China. These societies will see their military power enhanced relative to others, well beyond the enhancements already realized through the information revolution in military affairs’. Payne, op. cit., p. 15-16.
[22] A good starting point is Ryan Calo, Artificial intelligence policy: a primer and roadmap, 8 August 2017, https://www.ssrn.com/abstract=3015350 (access 8 September 2019).
[23] Cf. Ingvild Bode and Hendrik Huelss, Autonomous weapons systems and changing norms in international relations, Review of International Studies, vol. 44, part 3, 2018, p. 393-413.
[24] For an overview of ongoing AGI projects, cf. Seth D. Baum, A survey of artificial general intelligence projects for ethics, risk, and policy, Global Catastrophic Risk Institute, Working Paper 17-1, November 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741 (access 11 July 2018).
[25] Payne, op. cit., p. 2.
Compounded with a free-for-all security environment, in such a brave new world the
Global South would be exposed to all sorts of vulnerabilities, lagging behind (again) in economic,
scientific, and technological development, as well as becoming an open ground for data-
predation and cyber-colonization, further exacerbating inequalities among nations,
disempowerment, and marginalization, as Pauwels suggested. Small, tech-taking developing
countries may well turn into data-reservoirs and testbeds for dual-use technologies, precisely
because they lack technical expertise, scale, and scientific knowledge to take effective
countermeasures against tech-leading powers.[26]
Fortunately, these troubling scenarios are not necessarily inescapable and should ideally
be minimized through responsible governance strategies. What would that mean? A broad
definition of AI policymaking strategy has been proposed as ‘a research field that analyzes the
policymaking process and draws implications for policy design, advocacy, organizational
strategy, and AI governance as a whole’.[27]
Specifically on security issues, Maas singled out four
distinct rationales for preventing, channeling, or containing the proliferation, production,
development, or deployment of military technologies: ethics, legality, stability, or safety. From
his analysis of lessons learned from arms control of nuclear weapons, he concluded inter alia
that ‘far from being inevitable, the proliferation of powerful technologies such as military AI
might be slowed or halted through the institutionalization of norms’.[28]
Norms and other approaches to mitigate risks are one of the possible responses to the
negative side of AI technology. A recent study identified several of these unsettling aspects of
AI: increased risk of war or a first strike; disruption in deterrence and strategic parity; flawed
data and computer vision; data manipulation; ineffective crisis management; unexpected results;
failure in human-machine coordination; backlash in public perception; inaccuracy in decision-
making; and public sector-private sector tensions.[29]
The current deficit in explainability on how
neural networks reach a given outcome is likewise raising uneasiness: AI’s black box opacity
could increase the sense of insecurity rather than provide strategic reassurance.
Long-established military doctrines may be discredited by software innovations, hacking,
malware, or cyber-attacks, in such a way that strategic superiority is never fully achieved or
sustained. This uncertainty could prove extremely difficult to cope with and give no guarantee
of security against adversarial or malicious attempts to interfere with defense algorithms.
Searching for predictability by means of norm-setting, therefore, is not just a question of
inducing responsible behavior among states or protecting the weak from the dominance of the
powerful. Rather, it is a matter of building commonly accepted rules for all and minimum
standards to avert strategic uncertainty, undesirable escalations, and unforeseen crises spinning
out of control.

[26] Eleonore Pauwels, The new geopolitics of converging risks: the UN and prevention in the era of AI, Centre for Policy Research, United Nations University, 2 May 2019, p. 35, https://cpr.unu.edu/the-new-geopolitics-of-converging-risks-the-un-and-prevention-in-the-era-of-ai.html (access 2 May 2019).
[27] Brandon Perry and Risto Uuk, AI governance and the policymaking process: key considerations for reducing AI risk, Big Data and Cognitive Computing, vol. 3, issue 2, June 2019, p. 3.
[28] Matthijs M. Maas, How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons, Contemporary Security Policy, vol. 40, n. 3, 2019, p. 287.
[29] Cf. AI and the military: forever altering strategic stability, op. cit., p. 9.
As illustrated by nuclear weapons, retaliation by the enemy usually has a curbing effect
against the actual use of certain arms in war. This self-restraint mechanism is all the more
conspicuous when there is near certainty that reprisals will be devastating. It might happen
that, in order to manage rivalries, defuse and ‘pre-de-escalate’ tensions, some military powers
may embrace limitations in the use of autonomous weapons to outlaw certain practices, protect
civilians and other sensitive targets from attacks, or simply to avoid future vulnerability in the
event of a situation where two-way deterrence renders void the strategic advantage supposed
to accrue from a first strike. Certainly, there is no doomsday clock for autonomous weapons at
the moment, inasmuch as ‘mutual assured destruction’ does not apply to them (yet). This should
not serve as a relief, though. On the contrary, if AI-enhanced weapons come to be seen as tactically
effective for specific missions, the threshold for using them will be considerably lower, thus
posing a direct threat to those countries that lack powerful means to deter aggression.
Facing the danger of unsafe AI systems without proper oversight, demands to set in motion
international cooperation to avoid mutual harm will grow stronger. Danzig stressed this
point when referring to pathogens, AI systems, computer viruses, and radiation released by
accident: ‘Agreed reporting systems, shared controls, common contingency plans, norms, and
treaties must be pursued as means of moderating our numerous mutual risks’.[30]
Smart machines
running wild on a mass scale are a commander’s worst nightmare. Their far-reaching
destabilizing fallout could increase turbulence rather than provide more security to states.
Human control is key. It can be argued that under normal circumstances no country would
willingly relinquish ultimate command and control of its nuclear arsenal to an AI agent, unless
forced to do so.[31]
By the same token, you do not wish your AI weapons to be manipulated by your
enemy to, say, engage your own troops or destroy your own military facilities. Great powers
might have little choice but begin discussions over ‘whether some applications of AI pose
unacceptable risks of escalation or loss of control’ and take measures to improve safety, as
pointed out by Scharre.[32]
Particularly in the case of a dual-use technology broadly available through open source
and other means, preventing large-scale manufacturing of autonomous weapons seems to be a
credible alternative if states join forces to curb proliferation. Take the case of mini-drones loaded
with shaped charges and deployed in their thousands as swarms of deadly flying devices (a
theoretically low-cost weapon requiring almost no infrastructure or bulky logistics). The
proliferation of such machines in the hands of non-state actors and terrorists may create a
challenge to legitimate governments everywhere. Action would be required to ensure that AI
weapons can be traceable for the sake of accountability, i.e. any such weapon should be ‘mapped
to the corresponding legally accountable owner’.[33]

[30] Richard Danzig, Technology roulette: managing loss of control as many militaries pursue technological superiority, Washington, DC: CNAS, June 2018, p. 2, https://www.cnas.org/publications/reports/technology-roulette (access 6 August 2019).
[31] In a desperate, escalate-or-lose situation, for instance, when total defeat is imminent, a decision-maker could perhaps go for it purely for the sake of survival. See also Mark Fitzpatrick, Artificial intelligence and nuclear command and control, Survival, vol. 61, n. 3, 2019, p. 81-92.
[32] Paul Scharre, Killer apps: the real dangers of an AI arms race, Foreign Affairs, vol. 98, n. 3, May-June 2019, p. 135-145. See also, from the same author, Army of none: autonomous weapons and the future of war. New York: W. W. Norton & Company, 2018.
Enforcing traceability, however, will compel
states to negotiate and implement governance measures collectively.
Sometimes finding partial solutions to specific problems (e.g. traceable AI weapons) may
help pave the way to gradually lock in progress in other areas. As discussed in a conference
organized by the Foresight Institute to examine AGI and great-power coordination, ‘general
skepticism prevails about the chances of success for any effort to engage national actors in a
conversation about decreased application of AI in the military. Strong incentives for
militarization of AI are inevitable in the face of perceptions about potential AI militarization by
other nations’. Against this background, the participants agreed, it would be wise to ‘encourage
early cooperation on concrete issues with lower stakes to create precedent and infrastructure for
later cooperation’.[34]
Would the United Nations be able to come to the rescue?
What role for the United Nations?
In his address to the General Assembly in September 2018, UN Secretary-General
António Guterres warned that ‘multilateralism is under fire precisely when we need it most’.
But among the many pressing problems that call for a multilateral approach to be properly
addressed, he highlighted two ‘epochal challenges’ in particular: climate change, which is of
course of great concern but not covered in this article, and risks associated with advances in
technology. Indeed, ‘rapidly developing fields such as artificial intelligence, blockchain and
biotechnology have the potential to turbocharge progress towards the Sustainable Development
Goals’, Guterres said. On the other hand, new technologies also present serious perils along the
way, from mass economic unemployment to cybercrime and malicious use of digital tools.[35]
Guterres cautioned against the weaponization of AI and the possibility of a dangerous
arms race in this domain, including the development of lethal autonomous weapons systems,
going so far as to contend that ‘the prospect of machines with the discretion and power to take
human life is morally repugnant’. Less oversight over these weapons could severely
compromise efforts to contain threats, prevent escalation, and ensure compliance with
international humanitarian and human rights law. He urged Member States to use the United
Nations as a platform to draw global attention to these crucial matters and to ‘nurture a digital
future that is safe and beneficial for all’.[36]
[33] Kesavan Athimoolam, Solving the artificial intelligence race: mitigating the problems associated with the AI race,
Paper finalist in the ‘Solving the AI Race’ challenge, GoodAI, Prague, 2018, p. 21,
https://mirror.goodai.com/judges/Solving_the_AI_race_KA.pdf (access 25 May 2019).
[34] Allison Duettmann et al. Artificial general intelligence: coordination & great powers, San Francisco, CA: Foresight
Institute, White paper, 2018, p. 6, https://foresight.org/wp-content/uploads/2018/11/AGI-Coordination-Great-
Powers-Report.pdf (access 18 March 2019).
[35] Address of the UN Secretary-General to the 73rd General Assembly, 25 September 2018,
https://www.un.org/sg/en/content/sg/speeches/2018-09-25/address-73rd-general-assembly (access 27
September 2018).
[36] See also Current developments in science and technology and their potential impact on international security and
disarmament efforts, Report of the UN Secretary-General, General Assembly, A/73/177, 17 July 2018,
https://www.un.org/disarmament/publications/library/73-ga-sg-report (access 10 February 2019).
To lead by example and engage the UN Secretariat in this enterprise, the Secretary-
General launched his own Strategy on New Technologies, with the objective of defining how
the UN system will support the use of these technologies to accelerate the achievement of the
2030 Sustainable Development Agenda and to facilitate their alignment with the values
enshrined in the UN Charter, the Universal Declaration of Human Rights, and the norms and
standards of international law. Among the pledges and commitments to pursue this strategy
were deepening the UN’s internal capacities and exposure to emerging technologies; increasing
understanding, advocacy, and dialogue; supporting dialogue on normative and cooperation
frameworks; and enhancing UN system support to government capacity development.[37]
In the same vein, the High-Level Panel on Digital Cooperation was established by the
Secretary-General in July 2018 to promote dialogue and look into proposals to build trust and
cooperation between countries and other stakeholders, the private sector, research centers, civil
society, and academia. In its report, released in June 2019, the Panel envisaged several potential
roles for the UN to add value in the digital transformation: as a convener; providing a space for
debating values and norms; standard-setting; holding multi-stakeholder or bilateral
initiatives on specific issues; developing the capacity of Member States; ranking, mapping, and
measuring cybersecurity; and making available arbitration and dispute-resolution mechanisms.
The report put forward various recommendations, such as adopting by 2020 a ‘Global
Commitment for Digital Cooperation’ to consolidate in a single political document shared
values, principles, understandings, and objectives regarding the governance of cyberspace. Also,
it did not shy away from making principled statements of ethical significance, the stipulation
that ‘life and death decisions should not be delegated to machines’ being a case in point.[38]
Most of the UN initiatives on new technologies are focused upon the implementation of
the Sustainable Development Goals, such as ECOSOC’s annual Science, Technology, and
Innovation Forum, which had its fourth edition in May 2019. On the AI front, the flagship UN
platform for global dialogue with the wider public is the AI for Good Global Summit, hosted
every year in Geneva by the International Telecommunication Union (ITU) in partnership with
other UN agencies, the XPrize Foundation (an organization offering incentive prize competitions),
and the Association for Computing Machinery (ACM).[39]
This interdisciplinary event brings
together speakers from governments, industry, academia, media, and the research community
to discuss how AI can be utilized to achieve inter alia results in ending poverty, alleviating
hunger, promoting health, and identifying development solutions.
The UN Interregional Crime and Justice Research Institute (UNICRI) established in 2017
a Center for Artificial Intelligence and Robotics in The Hague, with the aim of disseminating
information, undertaking training activities, and promoting public awareness. The Center has
notably been active in cybercrime, law enforcement, criminal justice, counter-terrorism, and
malicious use of AI.[40]

[37] UN Secretary-General’s strategy on new technologies, New York, September 2018, p. 3-5, https://www.un.org/en/newtechnologies/images/pdf/SGs-Strategy-on-New-Technologies.pdf (access 5 November 2018).
[38] The age of digital interdependence: report of the UN Secretary-General’s High-Level Panel on Digital Cooperation, New York, 10 June 2019, p. 4-5, https://digitalcooperation.org/report (access 10 June 2019).
[39] The fourth Summit was held on 28-31 May 2019. The report of the 2018 AI for Good Global Summit - Accelerating Progress towards the SDGs (Geneva, 15-17 May 2018) is available at: https://www.itu.int/en/ITU-T/AI/2018/Pages/default.aspx (access 14 March 2019).
Also important is UNESCO’s humanistic approach to ethics, policy, and
capacity building in response to emerging challenges related to AI technologies, including
philosophical reflections on what it means to be human in the face of disruptive technologies
from the ethical perspective.[41]
The New York-based Centre for Policy Research of the United Nations University (UNU)
has been conducting foresight analyses, developing collaborative research projects with external
partners, and convening policy seminars and events. In 2018, the Centre launched an online
platform on AI and Global Governance to foster cross-disciplinary insights and ‘inform existing
debates from the lens of multilateralism’, as a tool for Member States, multilateral agencies,
funds, programs, and other stakeholders.[42]
The UN Institute for Disarmament Research (UNIDIR) has run a research project on the
weaponization of increasingly autonomous technologies since 2013, anchored in the idea that
mitigating or reducing potential harms caused by AI will be crucial to harness ‘AI for Good’.[43]
More broadly, UNIDIR’s work on the relationship between AI and international
security aims at further clarifying functional concerns in the development and future deployment
of autonomous weapons systems, including the risk of accidents; manipulation and other
vulnerabilities in weaponization; changes in the nature of conflict and their impact upon
strategic stability; commercial development of AI-related technologies and its links with the
defense industry; access to sensitive technologies and their use to conduct asymmetric warfare;
and the structural effects of the AI revolution on geopolitics and power dynamics in the
international system.[44]
In turn, the UN Office for Disarmament Affairs (UNODA), both in New York and in
Geneva, has been working to implement the Secretary-General’s Agenda for Disarmament
(Securing Our Common Future), which includes the consideration of possible challenges posed
by new weapon technologies. One of the action points of the Agenda seeks to support efforts by
Member States to elaborate new measures, including through political or legally binding
arrangements, to ‘ensure that humans remain at all times in control over the use of force’.[45]
Other
initiatives include the UN Innovation Network, connecting a collaborative community of
innovators within the UN System, as well as a myriad of projects developed by UN agencies to
apply AI tools in their daily practice in the field.
[40] Center for Artificial Intelligence and Robotics, UNICRI,
http://www.unicri.it/in_focus/on/UNICRI_Centre_Artificial_Robotics (access 19 August 2018).
[41] Playing a key role in this regard, UNESCO’s World Commission on the Ethics of Scientific Knowledge and
Technology (COMEST) is an advisory body established in 1998 to formulate ethical principles that could provide
policy advice for decision-makers on ethical issues related to science and technology. See Report of COMEST on
robotics ethics, Paris, 2017, https://unesdoc.unesco.org/ark:/48223/pf0000253952 (access 12 March 2019).
[42] AI & Global Governance, Centre for Policy Research, UNU, https://cpr.unu.edu/tag/artificial-intelligence
(access 4 June 2019).
[43] UNIDIR publications for this project are available at: http://www.unidir.org/programmes/security-and-
technology/the-weaponization-of-increasingly-autonomous-technologies-phase-iii (access 3 May 2019).
[44] United Nations activities on artificial intelligence (AI), Geneva: International Telecommunication Union, 2018, p. 40-
41, http://www.itu.int/pub/S-GEN-UNACT-2018-1 (access 5 April 2019).
[45] Id., p. 48.
Despite the relative lack of clarity on the way forward, the future role of the UN will be
inextricably linked to the global governance of AI. Research and policy proposals on this topic
are beginning to shed light upon the likelihood of international cooperation on transformative
AI-related issues, incentives needed for the parties to reach meaningful agreements, proper
conditions for compliance, and costs of defection or unilateral, non-cooperative measures.[46]
Proposals range from informal mechanisms in narrowly-focused domains to much more
ambitious, institutionalized forums, such as the establishment of an international regulatory
agency, to be named, for example, the International Artificial Intelligence Organization (IAIO),
aimed at setting standards and benchmarks across numerous areas to be regulated. However,
even the proponents of such a broad-spectrum organization concede that reaching a workable
international consensus on this idea seems a rather remote possibility in the short term.[47]
Another proposition to consider is creating a new body modeled on the UN
Intergovernmental Panel on Climate Change (IPCC) to provide policymakers with technical,
neutral assessments, subject to review by states, underlining the opportunities, implications, and
potential risks of AI, always through evidence-based research by the tech and scientific
communities.[48] A case can be made for an ‘Intergovernmental Panel on Artificial Intelligence’
(IPAI), as Miailhe did, to gather a large interdisciplinary group of experts with a mandate to
collect, organize, and analyze credible and up-to-date information on AI challenges.[49]
As
technologies become more and more sophisticated and indecipherable to the layman, sound
technical advice will be in high demand if intergovernmental negotiations are to be launched to
tackle these thorny issues, separating myth, hype, and misinformation from what is in tune with
the body of evidence and human knowledge available so far. Politicians obviously have no need
to know technicalities about reinforcement learning, multilayer perceptrons, convolutional
neural networks, or Boltzmann machines. But as AI puzzles and their impact upon society
become increasingly relevant in the real world, AI researchers cannot deny that suitable
communication with the political leadership will be of paramount importance to guide decision-
making in the right direction.
One key challenge is to keep the major players engaged, so that AI governance can be
instrumental in providing stability to safeguard the system in everyone’s interest. Even though
some proposals may require more time for maturation, others could be implemented in a more
straightforward manner, without the need for a fully-fledged prior commitment from
governments on the parameters and preconditions deemed appropriate for establishing new
criteria at the international level. Advisory and non-binding settings involving states, the private
sector, and civil society could be a first step if focused upon information gathering, independent
analyses, and recommendations geared at prevention rather than regulation per se.

[46] Allan Dafoe, AI governance: a research agenda, Future of Humanity Institute, University of Oxford, 2017, p. 46, https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf (access 2 December 2018).
[47] Olivia J. Erdelyi and Judy Goldsmith, Regulating artificial intelligence: proposal for a global solution, Paper presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, New Orleans, 13 February 2018, p. 6, http://www.aies-conference.com/2018/accepted-papers (access 24 May 2019).
[48] In December 2018, Canada and France jointly announced their intention to create an ‘International Panel on AI’, a bilateral initiative to ‘support and guide the responsible adoption of AI that is human-centric and grounded in human rights, inclusion, diversity, innovation and economic growth’. How this G2AI initiative will attract more countries (other G7 or European Union members?) and connect in the future with a truly global Intergovernmental Panel remains to be seen. Cf. Mandate for the International Panel on Artificial Intelligence, https://pm.gc.ca/en/news/backgrounders/2018/12/06/mandate-international-panel-artificial-intelligence (access 2 May 2019).
[49] Nicolas Miailhe, AI & global governance: why we need an intergovernmental panel for artificial intelligence, Articles & insights, Centre for Policy Research, UNU, 20 December 2018, https://cpr.unu.edu/ai-global-governance-why-we-need-an-intergovernmental-panel-for-artificial-intelligence.html (access 11 April 2019).
Traditional, institutionally-based intergovernmental diplomacy seems too slow if
compared with the staggering pace of technological innovation. Informal, ad-hoc, plurilateral
initiatives spurred by some like-minded countries (‘coalitions of the willing’) may at times
promote added value in governance, but they usually lack universal appeal and raise suspicions
about their agendas in the eyes of states left outside these groups. A better solution to escape short-
term paralysis is for the UN, with the authority and legitimacy conferred to it by Member States,
to take the lead and offer a collective space, open to all, strictly voluntary, to encourage
cooperation and work on prevention, capacity-building, and normative recommendations.
A good example along these lines would be the creation of a ‘Global Foresight
Observatory’ on the convergence of AI with other emerging technologies, under the auspices of
the UN, as envisaged in the report of UNU’s Centre for Policy Research, i.e. a multi-stakeholder,
inclusive platform to foster cooperation in technological and political preparedness for
responsive innovation.[50]
This sort of initiative fits well with what the UN can do even in hard-
knock situations. Trying to be too ambitious from the very beginning may backfire and lead
nowhere. It is essential not to let resistance harden into lethargy and to continue exploring
alternatives to take concrete steps in AI governance in a robust and comprehensive manner.
Multilateralism takes up lethal autonomous weapons systems
Lethal autonomous weapons systems (LAWS) are a particular case-study in the larger
field of AI applications in warfare. On the international agenda for a few years now, they
have also attracted media attention and the involvement of civil society in campaigns to ban ‘killer
robots’. Rather than providing a full account of the state of affairs in LAWS, my goal is to briefly
highlight key points on where we might be heading and what it means for the Global South.
Many studies scrutinize great-power competition and the possible implications of AI for
deterrence and early warning involving nuclear powers, particularly the United States, Russia,
and China.[51]
The most immediate risk, nonetheless, does not lie in future high-intensity, all-out
wars. Taking as benchmarks current patterns of international conflict, LAWS are more likely to
be deployed in low-intensity, small-scale conflicts, proxy wars, intrastate conflagrations, urban
warfare, or asymmetrical combat situations. And, that being the case, people, groups, or
organizations in developing countries and poor regions may be among the first targets when
weaponized, self-learning AI systems make their appearance on the battlefield.
As a matter of fact, albeit not fully autonomous, armed robots have already been sent to
fight alongside troops, such as in Iraq, where the US Army experimented with a tele-operated
mini-tank called SWORDS (Special Weapons Observation Reconnaissance Detection System) to
patrol streets armed with an M249 light machine gun.[52]

[50] Pauwels, op. cit., p. 53.
[51] Vincent Boulanin (ed.), The impact of artificial intelligence on strategic stability and nuclear risk, vol. I, Euro-Atlantic perspectives, Stockholm: SIPRI, 2019, https://www.sipri.org/sites/default/files/2019-05/sipri1905-ai-strategic-stability-nuclear-risk.pdf (access 6 May 2019).
In a similar fashion, the SMSS (Squad
Mission Support System), a ground vehicle manufactured by Lockheed Martin, was deployed
in Afghanistan to provide transport and logistic support to US soldiers.[53]
Also worth recalling are the countless drone strikes in Pakistan, Yemen, Somalia, and
other countries during the ‘war on terror’ campaign. As in the case of drones, LAWS may
become the weapon of choice to conduct killings from afar in future conflicts or in counter-
insurgency and counter-terrorism operations, under dubious allegations of ‘anticipatory self-
defense’. In hybrid wars and gray zone conflicts, replacing ‘boots on the ground’, robots may be
deployed in covert or semi-covert operations to conceal the identity of the sponsor, allowing for
plausible denial, including leadership targeting (‘decapitation’), extrajudicial executions, and
intimidation of groups or entire populations. Haas and Fischer warned that ‘machine autonomy
will contribute to a further expansion of targeted killings and to their spread beyond military
counter-terrorism operations in the non-state context’.[54]
Some contend that LAWS will be more precise in the use of force, thus preventing civilian
fatalities and collateral damage normally associated with human error. It is not clear, however,
to what extent AI systems will be able to correctly distinguish between combatants and non-
combatants in volatile environments or how they will react when facing adversarial strategies
to hack or fool the machine and induce it to make fatal mistakes (e.g. striking the wrong targets).
How to avoid knock-on effects in the face of inaccurate predictions or unanticipated outcomes?
We are still far from building any AI flexible enough to understand the broader context in real-
life situations and reliably adapt its behavior under changing circumstances, in order to make
on-the-spot decisions that put human lives at risk. Gains in speed can drastically accelerate
decision-making in war and be detrimental to much-needed room for diplomacy and time-out
to prevent minor skirmishes from escalating precipitously. Again, dehumanizing warfare will
increase the likelihood that political leaders will authorize resort to force to settle disputes, since
their troops would be ‘safe’.
There is no agreed definition of LAWS from the point of view of their operation,
lethality, and applications. This category could include any armed system designed to be
potentially lethal and capable of carrying out the process of target selection and ‘pulling the
trigger’ without human supervision. In military jargon, deploying a fully autonomous weapon
means that battle management, command, control, communications, and intelligence (BMC3I)
will not always be in the hands of commanders during all stages of the decision-making loop.
The long-term risk we face is the loss of human control over the use of force. As Roff put it, ‘the
52
SWORDS was effectively grounded after a series of incidents in which it began to behave unpredictably,
swinging its gun in chaotic directions, according to an account of one of the incidents. Patrick Tucker, US Army
now holding drills with ground robots that shoot, Defense One, 8 February 2018,
https://www.defenseone.com/technology/2018/02/us-army-now-holding-drills-ground-robots-shoot/145854/
(access 13 May 2019).
53
Sometimes called a ‘robotic mule’, but with proved capacity to operate in autonomy mode, the SMSS was selected
by the US Army in 2011 for a first-of-its-kind military assessment in Afghanistan,
https://www.armyrecognition.com/us_army_wheeled_and_armoured_vehicle_uk (access 13 August 2019).
54
Michael Carl Haas and Sophie-Charlotte Fischer, The evolution of targeted killing practices: autonomous
weapons, future conflict, and the international order, Contemporary Security Policy, vol. 38, n. 2, 2017, p. 300.
16
ability to create targeting lists using military doctrine and targeting processes is inherently
strategic, and handing this capability over to a machine undermines existing command and
control structures and renders the use for humans redundant’.
55
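The 'loop' language above can be made concrete with a small, purely hypothetical sketch: the entire difference between a supervised weapon and a fully autonomous one can be reduced to a single authorization gate between target selection and engagement. None of the names or thresholds below come from a real system.

```python
# Hypothetical targeting loop with one human authorization gate. Class and
# function names are illustrative; no real BMC3I architecture is modeled.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    classified_as: str   # e.g. "combatant", "civilian", "unknown"
    confidence: float    # classifier confidence in [0, 1]

def human_authorizes(track: Track) -> bool:
    """Stand-in for the commander's judgment (context, IHL, proportionality,
    doubt). This is the step a fully autonomous weapon removes."""
    answer = input(f"Engage {track.track_id} "
                   f"({track.classified_as}, p={track.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_loop(tracks, fully_autonomous: bool) -> None:
    for track in tracks:
        machine_recommends = (track.classified_as == "combatant"
                              and track.confidence > 0.9)
        if not machine_recommends:
            continue
        # With fully_autonomous=True the loop closes with no human decision
        # point between target selection and 'pulling the trigger'.
        if fully_autonomous or human_authorizes(track):
            print(f"ENGAGE {track.track_id}")
        else:
            print(f"ABORT  {track.track_id}")

engagement_loop([Track("T-01", "combatant", 0.97),
                 Track("T-02", "combatant", 0.92)],
                fully_autonomous=True)
```

Nothing in this toy loop captures the hard part, which is the quality and meaningfulness of the judgment inside the authorization gate; the point is only that full autonomy deletes the gate altogether.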
States can no longer merely elude or circumvent the matter. Today's most pivotal multilateral discussion on AI, peace and security has been taking place in Geneva through the open-ended Group of Governmental Experts (GGE) on emerging technologies in the area of LAWS, formally created in 2017 under the UN Convention on Certain Conventional Weapons (CCW). As decisions must be made by consensus, diverging views hamper further progress. Apart from a few more active delegations, the number of experts from the Global South taking the floor and making proposals is remarkably low. Meetings revolve around lengthy debates on methodology, definitions, and whether states should negotiate principles or regulations (if any) on LAWS. Military powers actively pursuing ways to mobilize AI capabilities to their advantage have been opposing the introduction of restrictions on this technology. In 2018, a proposal was put on the table for the GGE to start negotiating a legally binding instrument to ensure meaningful human control over critical functions in LAWS.56 Yet no agreement was reached, and the question was deferred on the grounds that embarking upon the negotiation of a legal instrument would be 'premature'.57
In 2019, the GGE had only seven days in total to cover all items on its agenda: a) the potential challenges posed by LAWS to international humanitarian law (IHL); b) characterization of the systems under consideration in order to promote a common understanding on concepts; c) further consideration of the human element in the use of lethal force, and aspects of human-machine interaction in the development, deployment, and use of emerging technologies in the area of LAWS; d) review of potential military applications of related technologies in the context of the GGE's work; and e) possible options for addressing humanitarian and international security concerns 'without prejudging policy outcomes and taking into account past, present and future proposals'.58
These topics were further detailed in a program of work, with guiding questions to help states navigate the process, which was chaired by North Macedonia. Compliance with the principles of IHL (humanity, distinction, proportionality, necessity, precaution) and with the Martens Clause on the 'dictates of the public conscience' was emphasized in the discussions, in particular whether human supervision (the ability to intervene and abort) would be deemed sufficient for such compliance during the operation of a weapon that can autonomously select and attack targets. Also highly contentious was finding clarity on what sort of outcome should arise from the GGE: a legally binding instrument, a political declaration, guidelines, principles or codes of conduct, or improved implementation of existing legal requirements, including legal reviews of weapons (in conformity with article 36 of Additional Protocol I to the Geneva Conventions, states have the obligation to determine by means of a legal review whether the employment of any new weapon, means, or method of warfare would be prohibited by international law). It was accepted from the outset, though, that these options were not necessarily mutually exclusive in view of 'the common goal of ensuring compliance with IHL and maintaining human responsibility for the use of force'.59
55 Heather M. Roff, The strategic robot problem: lethal autonomous weapons in war, Journal of Military Ethics, vol. 13, n. 3, 2014, p. 211-227, Abstract.
56 Working paper submitted by Austria, Brazil, and Chile, Geneva, 30 August 2018, CCW/GGE.2/2018/WP.7, https://www.unog.ch (access 15 November 2018).
57 Critics of the proposal claimed that it was not 'realistic'. Absent notable progress on the concepts behind autonomy and weapons systems, they argued, any preemptive ban on LAWS would be 'impractical'. Slijper et al., op. cit., passim.
58 Provisional agenda submitted by the Chairperson, Geneva, 25 March 2019, CCW/GGE.1/2019/1, https://www.unog.ch (access 18 June 2019).
The GGE has slowly been building upon areas of convergence, some of them captured in the 'possible guiding principles' agreed in 2018, which included a few noteworthy recommendations: IHL continues to apply fully to all weapons systems, including the potential development and use of LAWS; human responsibility for decisions on the use of weapons systems must be retained, since 'accountability cannot be transferred to machines'; accountability should be ensured in accordance with applicable international law, within a responsible chain of human command and control; 'legal reviews continue to be necessary' to safeguard respect for international law; physical security, appropriate non-physical safeguards (including cybersecurity against hacking or data spoofing), the risk of acquisition by terrorist groups, and the risk of proliferation should be considered; risk assessments and mitigation measures should be part of the design, development, testing, and deployment cycle of emerging technologies in any weapons system; consideration should be given to the use of LAWS in upholding compliance with IHL and other applicable international legal obligations; in crafting potential policy measures, 'LAWS should not be anthropomorphized'; CCW discussions should not hamper progress in, or access to, peaceful uses of intelligent autonomous technologies; and, finally, the CCW offers an appropriate framework for dealing with the issue, aiming at 'striking a balance between military necessity and humanitarian considerations'.60 These commonalities are welcome, but they are non-binding principles and as such remain on shaky ground.
In August 2019, the GGE held its last meeting of the year and, following long back-and-forth informal consultations, failed to recommend a mandate to negotiate a legally binding instrument (e.g. a Protocol VI to the CCW). Rather, a substantially diluted report was agreed late at night as a compromise for further deliberations until the next CCW Review Conference.61
References in previous drafts to 'human control', international human rights law, and international criminal law were deleted in the final version. The GGE shall continue to meet in Geneva, for a number of days yet to be defined, over the two-year period 2020-2021, in order to 'explore and agree on possible recommendations'. Meetings are to be clustered into three work streams (legal, technological, and military), taking into account the interaction between them and 'bearing in mind ethical considerations'. The GGE recommended that the CCW endorse the guiding principles affirmed in 2018, but conclusions arising out of its work will be used 'as a basis for the clarification, consideration [and development] of aspects of the normative and operational framework' on LAWS only by 2021.62
59 Provisional programme of work submitted by the Chairperson, Geneva, 19 March 2019, CCW/GGE.1/2019/2, https://www.unog.ch (access 18 June 2019).
60 Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 23 October 2018, CCW/GGE.1/2018/3, p. 4, https://www.unog.ch (access 3 April 2019).
61 The GGE is mandated to make recommendations to the Meeting of the High Contracting Parties to the CCW, held annually by November. Every five years, a CCW Review Conference takes stock and reviews the status of the Convention and all five of its Protocols. The last one (Fifth Review Conference) took place in 2016, in Geneva. The Sixth Review Conference is scheduled for 2021.
This outcome frustrated those who expected more from the GGE at that juncture. What comes next defies easy prediction. Admittedly, such diplomatic processes are sensitive to political susceptibilities and need reassurance to prevent pushback. States have a universal obligation to respect IHL, however innovative and revolutionary any technology may be. More than that, as previously noted, LAWS can have non-negligible destabilizing effects if or when deployed, including worrying scenarios of unpredictability and lack of control over what they could do absent human intervention to deactivate them in case of a higher political decision or a fundamental change of circumstances. If anything, strategic prudence would advise that runaway AIs on a futuristic battleground are in no country's interest, much less so in densely populated, urban warfare situations.
Realistically speaking, amid growing polarization and friction over several political hotspots, the international security environment does not currently seem conducive to sweeping global agreements in a very short time.63
Unilateralism and skepticism regarding arms control render political commitments more troublesome and undermine attempts at negotiating multilateral solutions. But if no guardrails are put in place and disregard for IHL remains widespread, what are the expected consequences? All in all, despite the difficulties, the need persists for states to come to terms and deliver results.
Take the example of cybersecurity, on the UN agenda at least since 1998. The Group of Governmental Experts on developments in the field of information and telecommunications in the context of international security reached a deadlock in 2017. As a result, the regulation of cyberspace was left in a state of limbo, including the question of which norms of international law apply to cyberwarfare and how.64
Still, the compound risks of systemic disruption and structural instability brought about by lawlessness in cyberspace could not be ignored for long. States resorted again to the UN to provide the locus for addressing digital governance. In December 2018, the General Assembly established two distinct processes: following the adoption of a Russian draft resolution in the First Committee, it constituted an Open-Ended Working Group, comprising the entire UN membership, to convene for the first time in September 2019; in addition, a US proposal was approved to establish another GGE, of 25 members, with its first formal meeting scheduled for December 2019.65
The analogies to be drawn with similar AI dilemmas are no coincidence. Some of the recommendations on confidence-building and cooperative measures in cyberspace can be applied to the AI domain, for instance inviting states to create consultative frameworks on a voluntary basis, strengthening mechanisms to address security incidents and emergency responses, or establishing 'regular institutional dialogue with broad participation under the auspices of the United Nations'.66
62 Facing last-minute disagreements, the GGE was unable to reach consensus on some key points, such as the number of meeting days the Group will hold (20, 25, or 30) and the expression 'and development', left in brackets on purpose. Cf. Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 21 August 2019, CCW/GGE.1/2019/CRP.1/Rev.2, https://www.unog.ch (access 24 August 2019).
63 Richard Gowan, Muddling through to 2030: the long decline of international security cooperation, Articles & insights, Centre for Policy Research, UNU, 24 October 2018, https://cpr.unu.edu/muddling-through-to-2030-the-long-decline-of-international-security-cooperation.html (access 5 March 2019).
64 Anders Henriksen, The end of the road for the UN GGE process: the future regulation of cyberspace, Journal of Cybersecurity, vol. 5, issue 1, 2019, p. 1-9.
65 UNODA, https://www.un.org/disarmament/ict-security (access 20 June 2019).
Interestingly enough, one of the key takeaways from a recent Cyber Stability Conference held by UNIDIR was the almost unanimous call for more engagement from states that do not normally take an active part in the deliberations.67 Again, these all-important issues concern each and every relevant actor, and their consideration should not be confined to a few major players.
Conclusion
From a long-term perspective, human civilization might plausibly be living through a 'technological transformation trajectory', in which radical breakthroughs in science and technology take us in a direction yet to be clearly grasped.68 AI governance, on closer inspection, is not just about imposing restrictions but about encouraging prevention and foresight as well. And if we hope to be in a position to start taking consequential measures on AI policy at the international level anytime soon, it is advisable to acknowledge that, before risks can be mitigated, 'they must first be understood'.69
Applying AI as a general-purpose technology in the military is not the same as weaponizing it or handing over control to machines. Where we draw the line of what is 'inevitable' remains a choice humans must make, but one does not need to wait for ideal, Goldilocks conditions to frame the problem in search of a desirable outcome within a reasonable timespan. This crucial task must not be left solely to leading tech countries, and not only because new voices may (or may not) bring insights that others have failed to consider. When the stakes are this high and likely to entail worldwide externalities, all should get involved and partake in discussions concerning our shared future, championing inclusiveness and more diverse representation in a plurality of settings, including both geographical and gender balance. The more people join the conversation the better, possibly with UN support to bridge the gap.
Global IR can also help to bring more diversity of views to the table and expand the opportunities for scholarly dialogue. The relationship between AI and arms control remains understudied in the IR discipline, insofar as real-world AI applications are still being shaped by a rapidly evolving technology landscape. But if international security is to be transformed in so many ways, Global South leaders and scholars cannot afford to stand idle while others make the decisions and allow the militarization of AI to go deeper, unimpeded, and ever more deadly.
66 Cf. summary of GGE recommendations prepared by UNIDIR's security and technology program, http://www.unidir.org/files/medias/pdfs/gge-recommendations-confidence-building-and-cooperative-measures-eng-0-836.pdf (access 25 June 2019).
67 UNIDIR's Annual Cyber Stability Conference, New York, 6 June 2019, http://www.unidir.org/programmes/security-and-technology/2019-cyber-stability-conference (access 25 June 2019).
68 Seth D. Baum et al., Long-term trajectories of human civilization, Foresight, vol. 21, n. 1, 2019, p. 53-83.
69 Allan Dafoe and Remco Zwetsloot, Thinking about risks from AI: accidents, misuse, and structure, Lawfare, 11 February 2019, p. 1, https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure (access 14 June 2019).
Developing countries should not be relegated to the role of spectators, technology-takers, or
(even worse) victims. It is incumbent upon all states to step forward and find rational pathways
to promote safe, beneficial, and friendly AI, rather than dangerous, inimical, and bellicose AI.
* * *