Article

On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making

Author: Peter Asaro

Abstract

This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such a ban might be problematic for various reasons. It argues in favour of a theoretical foundation for such a ban based on human rights and humanitarian principles that are not only moral, but also legal ones. In particular, an implicit requirement for human judgement can be found in international humanitarian law governing armed conflict. Indeed, this requirement is implicit in the principles of distinction, proportionality, and military necessity that are found in international treaties, such as the 1949 Geneva Conventions, and firmly established in international customary law. Similar principles are also implicit in international human rights law, which ensures certain human rights for all people, regardless of national origins or local laws, at all times. I argue that the human rights to life and due process, and the limited conditions under which they can be overridden, imply a specific duty with respect to a broad range of automated and autonomous technologies. In particular, there is a duty upon individuals and states in peacetime, as well as combatants, military organizations, and states in armed conflict situations, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy in each and every case. I argue that it would be beneficial to establish this duty as an international norm, and express this with a treaty, before a broad range of automated and autonomous weapons systems that are likely to pose grave threats to the basic rights of individuals begins to appear.


... "A legitimate lethal decision process must also meet requirements that the human decision-maker involved in verifying legitimate targets and initiating lethal force against them be allowed sufficient time to be deliberative, be suitably trained and well informed, and be held accountable and responsible" (Asaro, 2012) Autonomous systems are attractive for militaries in part because of their capacity to increase mass and operational effectiveness in accordance with human intent and values. Autonomous systems can operate in real time, but may be particularly effective when operating in different time scales to human cognition, such as in high tempo and in slow tempo operations. ...
... If T1 is true, and real-time decisions in war are speeding up while human cognition remains at the same speed, then surely the inevitable outcome of ever-increasing speeds is a loss of human awareness, understanding, agency, and autonomy over machines, and hence a loss of moral and legal responsibility. This point is one of many made by those concerned by the introduction of AI and autonomy on the battlefield (Asaro, 2012; Sharkey, 2012, p. 116). ...
Preprint
Full-text available
21st-century war is increasing in speed, with conventional forces combined with the massed use of autonomous systems and human-machine integration. However, a significant challenge is how humans can ensure moral and legal responsibility for systems operating outside of normal temporal parameters. This chapter considers whether humans can stand outside of real time and authorise actions for autonomous systems through the prior establishment of a contract, for actions to occur in a future context, particularly in faster-than-real-time or very slow operations where human consciousness and concentration could not remain well informed. The medico-legal precedent found in 'advance care directives' suggests how the time-consuming, deliberative process required for accountability and responsibility over weapons systems may be achievable outside real time, captured in an 'advance control directive' (ACD). The chapter proposes 'autonomy command', scaffolded and legitimised through the construction of ACDs ahead of the deployment of autonomous systems.
... We argue, however, that explanatory efforts only provide a limited value for achieving ethical AI. In cases when an algorithmic decision has already led to detrimental effects, e.g., a mistaken diagnosis, an unfair decision concerning parole release, or even more radically, a large-scale destruction caused by autonomous systems in warfare (Sparrow 2007; Asaro 2012; Apps 2021), it is important to be able to assign responsibility and legal liability, just as in human decision-making. The explanatory efforts concerning ML, however, do not automatically point to the locus of responsibility for the detrimental decisions. ...
... It may be asked whether it is unfair to use autonomous systems if no one can take moral responsibility for harmful decisions. Some authors have taken the failure to assign moral responsibility as an argument in favor of completely abandoning the use of autonomous systems (Sparrow 2007; Asaro 2012). ...
Article
Full-text available
Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling, have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is important to have a principled way of assigning responsibility for such errors. The integral characteristics of ML, however, pose serious difficulties in defining responsibility and regulating ML decision-making. First, we elaborate on these characteristics and their epistemic and ethical implications. We then analyze possible general strategies for resolving the assignment of moral responsibility and show that, due to the specific way in which ML functions, each potential solution is problematic, whether we assign responsibility to humans, machines, or using hybrid models. Then, we shift focus to an alternative approach that bypasses moral responsibility and attempts to define legal liability independently through solutions such as informed consent and the no-fault compensation system. Both of these solutions prove unsatisfactory because they leave too much room for potential abuses of ML decision-making. We conclude that both ethical and legal solutions are fraught with serious difficulties. These difficulties prompt us to re-weigh the costs and benefits of using ML for high-stake decisions.
... The CEO of the system's manufacturer STM maintains that the use of autonomy in the Kargu-2 […] a clear interest in developing AWS, including weapon systems operating with the support of artificial intelligence (AI) technologies. While civilian applications of AI have raised calls for international regulation, weaponised AI in the form of AWS is a particular source of concern due to the various challenges its development and use create for international security, international humanitarian law (IHL), ethics, as well as the norms of warfare (Altmann & Sauer, 2017; Amoroso & Tamburrini, 2021; Asaro, 2012; Bode & Huelss, 2022; Haas & Fischer, 2017; Heyns, 2016). This article argues that whilst a global regulatory framework for AWS has so far proven challenging to achieve, it is not an impossible endeavour. ...
... First, there are debates about what these trends mean for the application of IHL principles, such as distinction between lawful and unlawful targets, responsibility for violations of international law, or precautions in attack (Bo et al., 2022; Boutin, 2022; Brehm, 2017; Crootof, 2015). Second, experts have been debating the ethical implications of AWS making critical decisions on the use of force, especially in relation to human dignity (Asaro, 2012; Rosert & Sauer, 2019; Sharkey, 2019). Third, there is a variety of security threats associated with potential technical malfunctions and overtrust in automated processes, including in the nuclear sphere (Alwardt & Schörnig, 2022; Johnson, 2019; Sharikov, 2018). ...
Article
Full-text available
Technological developments in the sphere of artificial intelligence (AI) inspire debates about the implications of autonomous weapon systems (AWS), which can select and engage targets without human intervention. While more and more systems that could qualify as AWS, such as loitering munitions, are reportedly used in armed conflicts, the global discussion about a system of governance and international legal norms on AWS at the United Nations Convention on Certain Conventional Weapons (UN CCW) has stalled. In this article we argue for the necessity of adopting legal norms on the use and development of AWS. Without a framework for global regulation, state practices in using weapon systems with AI-based and autonomous features will continue to shape the norms of warfare and affect the level and quality of human control in the use of force. By examining the practices of China, Russia, and the United States in their pursuit of AWS-related technologies and participation at the UN CCW debate, we acknowledge that their differing approaches make it challenging for states parties to reach an agreement on regulation, especially in a forum based on consensus. Nevertheless, we argue that global governance on AWS is not impossible. It will depend on the extent to which an actor or group of actors would be ready to take the lead on an alternative process outside of the CCW, inspired by the direction of travel given by previous arms control and weapons ban initiatives.
... Analogously, the automation of AI in morally loaded decision-making may lead to a decrease in our moral abilities (Vallor, 2015). For example, in the context of war, the automation of weapons systems may lead to the dehumanization of victims of war (Asaro, 2012; Heyns, 2017). Similarly, care robots in elderly-, child-, or healthcare settings may reduce our ability to care for one another (van …). However, forecasting is inherently difficult, and although automation by AI may lead to short-term job losses, the concept of technological unemployment has been described as a "temporary phase of maladjustment" (Keynes, 2010). ...
Preprint
Full-text available
This article appears as chapter 21 of Prince (2023, Understanding Deep Learning); a complete draft of the textbook is available here: http://udlbook.com. This chapter considers potential harms arising from the design and use of AI systems. These include algorithmic bias, lack of explainability, data privacy violations, militarization, fraud, and environmental concerns. The aim is not to provide advice on being more ethical. Instead, the goal is to express ideas and start conversations in key areas that have received attention in philosophy, political science, and the broader social sciences.
... Insofar as the use of autonomous LAWS would sever this relation, the question emerges as to whether the use of these systems undermines the dignity of those whom they target (and possibly also those who use them) and leads to a form of morally problematic killing. Finally, questions arise with respect to the impact of LAWS on international stability. On the one side, LAWS may reduce the time span of the hostilities in which states may engage and thus contribute to fostering stability. ...
... Much of the debate around Artificial Intelligence (AI) and autonomous systems in military contexts has been on autonomous weapons and targeting systems. There are repeated concerns about whether systems that use AI for targeting and lethal force are consistent with International Humanitarian Law (IHL) (Future of Life Institute, 2021; Russell et al., 2021; Asaro, 2019, 2012), while also noting the value of AI and machine learning (ML) systems for rapid discrimination of valid military targets (DoD, 2022a). In contrast, we focus here on uses of AI technologies for military Intelligence, Surveillance, and Reconnaissance (ISR) operations, particularly as they inform human decision-makers. ...
Article
Full-text available
Artificial Intelligence (AI) offers numerous opportunities to improve military Intelligence, Surveillance, and Reconnaissance (ISR) operations, and modern militaries recognize the strategic value of reducing civilian harm. Grounded in these two assertions, we focus on the transformative potential that AI ISR systems have for improving the respect for and protection of humanitarian relief operations. Specifically, we propose that establishing an interface for humanitarian organizations to military AI ISR systems can improve the current state of ad-hoc humanitarian notification systems, which are notoriously unreliable and ineffective for both parties to conflict and humanitarian organizations. We argue that such an interface can improve military awareness and understanding while also ensuring that states better satisfy their international humanitarian law obligations to respect and protect humanitarian relief personnel.
... This is especially true of the jus in bello requirements of distinction, proportionality, and necessity. Critics of the use of LAWS fear that the systems will be indiscriminate with regard to combatants and non-combatants and that such systems are unable to adequately weigh the military advantage of an attack against the damage because these evaluations are to a large extent context-dependent and thus difficult to determine numerically (Asaro, 2012; Dremliuga, 2020; Egeland, 2016; Van Severen & Vander Maelen, 2021). ...
Article
Full-text available
AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature around the concept of responsibility gaps, and different solutions have been devised to close or bridge these gaps. In order to move forward in the research around LAWS and the problem of responsibility, it is important to increase our understanding of the different perspectives and discussions in this debate. This paper attempts to do so by disentangling the various arguments and providing a critical overview. After giving a brief outline of the state of the technology of LAWS, I will review the debates over responsibility gaps using three differentiators: those who believe in the existence of responsibility gaps versus those who do not, those who hold that responsibility gaps constitute a new moral problem versus those who argue they do not, and those who claim that solutions can be successful as opposed to those who believe that it is an unsolvable problem.
... These actions could potentially involve the deliberate killing of non-combatants or the use of disproportionate force in a way that is unjust and immoral (Walzer, 2000). As a consequence of this chaotic and unpredictable autonomy and the corresponding likelihood of just war violations, LAWS will almost certainly be involved in "responsibility gaps" (Sparrow, 2007; Asaro, 2012; Santoni and van den Hoven, 2018) where the system does something immoral and yet no person can be held accountable. Thus, LAWS might problematically "off-shore" potential responsibility by having the LAWS "make decisions" where it is genuinely unclear if anyone is truly responsible for the violation. ...
Article
Full-text available
This paper offers a novel understanding of collective responsibility for AI outcomes that can help resolve the “problem of many hands” and “responsibility gaps” when it comes to AI failure, especially in the context of lethal autonomous weapon systems.
... Some have theorized a so-called responsibility gap (Matthias, 2004), while others have opposed this view (Tigard, 2020). According to Tigard, the recent literature is polarized between "techno-optimists" (e.g., Santoro et al., 2008; Hanson, 2009; Rahwan, 2018; Nyholm, 2018) and "techno-pessimists" (e.g., Sharkey, 2010; Asaro, 2012; Char et al., 2018; Danaher, 2016). Some authors believe that the current legal system is adequate to regulate machine learning liability (Amidei, 2019). ...
Article
Full-text available
In recent years, the need for regulation of robots and Artificial Intelligence, together with the urgency of reshaping the civil liability framework, has become apparent in Europe. Although the matter of civil liability has been the subject of many studies and resolutions, multiple attempts to harmonize EU tort law have been unsuccessful so far, and only the liability of producers for defective products has been harmonized. In 2021, by publishing the AI Act proposal, the European Commission reached the goal of regulating AI at the European level, classifying smart robots as "high-risk systems". This new piece of legislation, albeit tackling important issues, does not focus on liability rules. However, regulating the responsibility of developers and manufacturers of robots and AI systems, in order to avoid a fragmented legal framework across the EU and an uneven application of liability rules in each Member State, is still an important issue that raises many concerns in the industry sector. In particular, deep learning techniques need to be carefully regulated, as they challenge the traditional liability paradigm: it is often not possible to know the reason behind the output given by those models, and neither the programmer nor the manufacturer is able to predict the AI's behavior. For this reason, some authors have argued that we need to take liability away from producers and programmers when robots are capable of acting autonomously from their original design, while others have proposed a strict liability regime. This article explores liability issues around AI and robots with regard to users, producers, and programmers, especially when the use of machine learning techniques is involved, and suggests some regulatory solutions for European lawmakers.
... The most fundamental principled objection to the development and use of LAWS is that removing human control from the process of targeting and killing human beings would in some way show disrespect for humankind, or would be a failure to recognize and acknowledge human dignity (Asaro, 2012). The objection turns on a claim that respect for human dignity requires some kind of active recognition of the humanity of the target, a recognition of which machines are inherently incapable. ...
Article
Full-text available
The potential for the use of artificial intelligence in developing lethal autonomous weapons systems (LAWS) has received a good deal of attention from ethicists. Lines of argument in favor of and against developing and deploying LAWS have already become hardened. In this paper, I examine one strategy for skirting these familiar positions, namely to base an anti-LAWS argument not on claims that LAWS inevitably fail to respect human dignity, but on a different kind of respect, namely respect for public opinion and conventional attitudes (which Robert Sparrow claims are strongly anti-LAWS). My conclusion is that this sort of respect for conventional attitudes does provide some reason for actions and policies, but that it is actually a fairly weak form of respect, one that is often overridden by more direct concerns about respect for humanity or dignity. By doing this, I explain the intuitive force of the claim that one should not disregard public attitudes, but also justify assigning such respect a relatively weak role when other kinds of respect are involved.
... This would imply that machines wouldn't be responsible for their deeds, and even if they were, "there is clearly no point in putting a robot in jail" (Heyns 2016, 12). Furthermore, some experts argue that international law implicitly requires humans to make the decisions (Asaro 2013; Boulanin 2016; Heyns 2016), which has never been necessary to make explicit, as the decision over the (lethal) use of force was always made in human-to-human interactions (Heyns 2016). This latent argument supports the need for a notion of 'human control' to ensure compliance with international (humanitarian) law. ...
Article
Over the last decade, autonomous weapon systems (AWS), also known as 'killer robots', have been the subject of widespread debate. These systems raise various ethical, legal, and societal concerns, with arguments both in favor of and opposed to the weaponry. Consequently, an international policy debate arose out of an urge to ban these systems. AWS are widely discussed at the Human Rights Council debate, the United Nations General Assembly First Committee on Disarmament and International Security, and at gatherings of the Convention on Certain Conventional Weapons (CCW), in particular the Expert Meetings on Lethal Autonomous Weapons Systems (LAWS). Early skepticism towards the use of AWS brought a potential ban to the forefront of policy-making decisions with the support of a campaign to 'Stop Killer Robots' launched by Human Rights Watch (HRW) in 2013. The movement is supported by Amnesty International, Pax Christi International, and the International Peace Bureau, among others. This campaign has catalyzed an international regulation process at the level of the United Nations (UN). Both a new protocol to the CCW and a new international treaty have been considered. However, a lack of consensus stalls the process and, as such, leaves AWS in a regulatory gray zone.
... Similar to the initiatives taken to protect its Digital Market and EU citizens, and looking at the relevant implementation of AI in the military sector, the EU has presented a common position on human control over AI-enabled systems at the UN debate on LAWS (Boulanin et al. 2020). Following the increased global concern related to the adoption of AI in the defence industry and the consequent use of such weapons (Asaro 2013), in 2017 the UN established a Group of Governmental Experts (UNGGE) on Emerging Technologies in the Area of LAWS. With the goal of identifying principles and norms formalised in the context of the UN, and due also to the proactive role of EU member states, the UNGGE has identified humanitarian principles in the use of LAWS (Cath et al. 2018). ...
Article
Full-text available
EU Digital Sovereignty has emerged as a priority for the EU Cyber Agenda to build free and safe, yet resilient cyberspace. In a traditional regulatory fashion, the EU has therefore sought to gain more control over third country-based digital intermediaries through legislative solutions regulating its internal market. Although potentially effective in shielding EU citizens from data exploitation by internet giants, this protectionist strategy tells us little about the EU’s ability to develop Digital Sovereignty, beyond its capacity to react to the external tech industry. Given the growing hybridisation of warfare, building on the increasing integration of artificial intelligence (AI) in the security domain, leadership in advancing AI-related technology has a significant impact on countries’ defence capacity. By framing AI as the intrinsic functioning of algorithms, data mining and computational capacity, we question what tools the EU could rely on to gain sovereignty in each of these dimensions of AI. By focusing on AI from an EU Foreign Policy perspective, we conclude that contrary to the growing narrative, given the absence of a leading AI industry and a coherent defence strategy, the EU has few tools to become a global leader in advancing standards of AI beyond its regulatory capacity.
... Peter Asaro provides an admirably clear statement of this view when he writes: "As a matter of the preservation of human morality, dignity, justice, and law we cannot accept an automated system making the decision to take a human life. And we should respect this by prohibiting autonomous weapon systems" (Asaro, 2012). ...
Article
Full-text available
Much of the literature concerning the ethics of lethal autonomous weapons systems (LAWS) has focused on the idea of human dignity. The lion's share of that literature has been devoted to arguing that the use of LAWS is inconsistent with human dignity, so their use should be prohibited. Call this position “Prohibitionism.” Prohibitionists face several major obstacles. First, the concept of human dignity is itself a source of contention and difficult to operationalize. Second, Prohibitionists have struggled to form a consensus about a property P such that (i) all and only instances of LAWS have P and (ii) P is always inconsistent with human dignity. Third, an absolute ban on the use of LAWS seems implausible when they can be used on a limited basis for a good cause. Nevertheless, my main purpose here is to outline an alternative to Prohibitionism and recognize some of its advantages. This alternative, which I will call “Restrictionism,” recognizes the basic intuition at the heart of Prohibitionism - namely, that the use of LAWS raises a concern about human dignity. Moreover, it understands this concern to be rooted in the idea that LAWS can make determinations without human involvement about whom to target for lethal action. However, Restrictionism differs from Prohibitionism in several ways. First, it stipulates a basic standard for respecting human dignity. This basic standard is met by an action in a just war if and only if the action conforms with the following requirements: (i) the action is militarily necessary, (ii) the action involves a distinction between combatants and non-combatants, (iii) non-combatants are not targeted for harm, and (iv) any and all incidental harm to non-combatants is minimized. In short, the use of LAWS meets the standard of basic respect for human dignity if and only if the system acts in a way that is functionally isomorphic with how a responsible combatant would act. This approach leaves open the question of whether and under what conditions LAWS can meet the standard of basic respect for human dignity.
... Sharkey and Suchman [18] state that the values of accountability and responsibility are important to consider in the design of robotic systems for military operations. Asaro [2] refers to the principles of proportionality and discrimination, which, next to the principles of precaution, humanity, and military necessity, are captured in International Humanitarian Law. Previous work on values related to AWS [21, 24] studied people's perceptions of blame, trust, harm, human dignity, confidence, expectations, support, fairness, and anxiety by comparing a scenario of the deployment of human-operated drones to that of AWS. ...
Article
Full-text available
Ethical concerns on autonomous weapon systems (AWS) call for a process of human oversight to ensure accountability over targeting decisions and the use of force. To align the behavior of autonomous systems with human values and norms, the Design for Values approach can be used to consciously embody values in the deployment of AWS. One instrument for the elicitation of values during design is participative deliberation. In this paper, we describe a participative deliberation method and the results of a value elicitation by means of the value deliberation process, for which we organized two panels, each consisting of a mixture of experts in the field of AWS working in military operations, foreign policy, NGOs, and industry. The results of our qualitative study indicate not only that value discussion leads to changes in the perception of the acceptability of alternatives, or options, in a scenario of AWS deployment; they also give insight into which values are deemed important and highlight that trust in the decision-making of an AWS is crucial.
... One of the controversies most intensely debated by technology ethicists remains the military application of AI, which includes (but is not limited to) autonomous weapon systems (AWS), i.e., machines designed to independently search for, select, and engage targets. A universal ban on these machines has been advocated by those who believe that the conditions necessary to use AWS ethically either are impossible to devise or cannot realistically be met in practice. Articulating an alternative proposal, we argue that the conditions for ethically using military applications of AI can be conceptually specified as clearly as those relevant to similar nonmilitary technologies; that the decisional processes (including the public discussion) and the research efforts (including the transfer from civilian to military industry) necessary to practically meet such conditions would be hindered by a pre-emptive ban on AWS; and that any such unconditional prohibition would solicit the very same deregulation and uncontrolled proliferation that it was supposed to prevent. To prevent the prophecy from fulfilling itself, we recommend that each instance of design, development, and deployment of AWS should be internationally regulated by legal and ethical standards. ...
Article
Full-text available
Both corporate leaders and military commanders turn to ethical principle sets when they search for guidance concerning moral decision making and best practice. In this article, after reviewing several such sets intended to guide the responsible development and deployment of artificial intelligence and autonomous systems in the civilian domain, we propose a series of 11 positive ethical principles to be embedded in the design of autonomous and intelligent technologies used by armed forces. In addition to guiding their research and development, these principles can enhance the capability of the armed forces to make ethical decisions in conflict and operations. We examine the general limitations of principle sets, refuting the charge that such ethical theorizing is a misguided enterprise and critically addressing the proposed ban on military applications of artificial intelligence and autonomous weapon systems.
... In [59], Peter Asaro, philosopher of technology and co-founder and vice-chair of ICRAC, states that we should respect human morality, dignity, justice, and law, and prohibit AWS. In choosing the weapons and tactics with which we engage in armed conflict, we are also making a moral choice, in the context of ethics and morality, about the world we live in. ...
Thesis
Full-text available
The rise of online devices, online shopping, online gaming, online users, and online teaching has ultimately given rise to online attacks and online crimes. As the cases of COVID-19 seem to increase day by day, so do online crimes and attacks (as many sectors and organizations have now gone 100% online). Current technological advancements and the ongoing cyber war have already raised ethical issues, as have the growth in internet users and the sudden need for ethical cyber defense. This was the problem on one end; on the other, nation states (some secretly, some openly) are investing in robot weapons and autonomous weapons systems. New technologies have combined with countries' security worries to give rise to a new arms race. Because a nation can enter the automated weapons space in a way that is impractical for nuclear weapons, nations are trying to make their presence known on both the offline and online battlefields. My thesis is that it is possible to frame an ethical security model based on the increasing online crimes, robot weapons, and online attacks. The main contribution of this dissertation is to show that there are multiple cyber defense principles, countermeasures, and ethical actions to slow down these ongoing threats (which is the first and foremost need in this current online era). Most importantly, the countermeasures and security strategies developed (based on increasing online attacks and the rise of AWS) can save billions of dollars (invested in developing autonomous weapons, firewalls, and robotics industries for the arms race between nation states) and work towards global peace and security.
Chapter
In this article we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, like ICRC and NATO. The analysis highlights that the definitions draw focus on different aspects of AWS and hence lead to different approaches to address the ethical and legal problems of these weapon systems. This approach is detrimental both in terms of fostering an understanding of AWS and in facilitating agreement around conditions of deployment and regulations of their use and, indeed, whether AWS are to be used at all. We build on the comparative analysis to identify essential aspects of AWS and then offer a definition that provides a value-neutral ground to address the relevant ethical and legal problems. In particular, we identify four key aspects – autonomy; adapting capabilities of AWS; human control; and purpose of use – as the essential factors to define AWS and which are key when considering the related ethical and legal implications.

Keywords: Adapting capabilities; Autonomy; Autonomous artificial agents; Autonomous weapons systems; Artificial intelligence; Definition; Human control; Lethal autonomous weapons systems
Article
One major area of concern in relation to the use of autonomous weapon systems is that it involves humans giving some, if not all, control over a weapon system to a form of computer. This idea relates to the concern that a computer's ability to autonomously operate weapon systems puts the control of these systems beyond the bounds of the armed forces. This article examines the role that the concept of meaningful human control plays in the ongoing discourse, describes current perspectives on what meaningful human control entails, and reviews its value in the context of the analysis of AWS presented in this article. Within this article, as is the case in the wider debate, the term meaningful human control is used to describe a quality that is perceived to be essential for a given attack to be considered compliant with international humanitarian law rules. It does not denote a specific class of weapon systems that permit or require a minimum level of human control; rather, it implies that a weapon that is used in an attack that is legally compliant with international humanitarian law rules would essentially incorporate a meaningful level of human control.
Article
Full-text available
Weapons that have self-controlling capacity and are equipped with the technology to independently choose and destroy a target are called autonomous weapons. Presently, autonomous weapon technology is being developed to contribute to the defensive and offensive capacities of states and to restructure their armies. However, there is a common concern that the capacity of autonomous weapons to make decisions in the international arena independently of humans may cause a global security problem. In this respect, the United Nations (UN) supports disarmament by holding meetings and issuing reports to ensure that these weapons are controlled while under development. The present article intends to clarify the activities of the UN which aim to control autonomous weapon technology. The first part of the article defines autonomous weapons in detail and then evaluates their possible benefits and threats. Later on, the article provides an outline of disarmament endeavours with regard to autonomous weapons. The final part, on the other hand, discusses the disarmament activities of the UN as to autonomous weapons. In this sense, official UN documents were selected as primary sources in revealing the global aspect of the disarmament struggles concerning autonomous weapons; the article therefore uses the document analysis method. In consequence of the document analysis, it was concluded that more data were required to establish a consensus on the performance of wider disarmament activities under the UN regarding autonomous weapons.
Article
The Geneva Conventions and Additional Protocols that regulate the law of armed conflict are insufficient for interpreting autonomous weapon systems, which are among the modern weapon technologies that will be actively used by armies in the near future. This article focuses on autonomous weapon systems, which are not yet subject to regulation under international law and whose prohibition is still under debate in the global arena, and examines, by way of legal transplants, whether the regulations previously prepared on landmines, incendiary weapons, and cluster munitions can serve as a model for autonomous weapon systems. Based on the conclusions reached in the article, recommendations are shared on the content and legal status of the international humanitarian law manuals that should be prepared for autonomous weapon systems.
Article
Under the rules of international humanitarian law, there is a review obligation that applies to every stage of designing, developing, producing, and procuring a weapon system. This review must examine whether the normal and expected use of the weapon in question would violate the rules of international law. Such a review is a legal requirement for autonomous weapon systems as well. However, because of the distinctive characteristics of these systems, whether the review can promise success must be addressed separately. This study is devoted to that question. Its central proposition is that the lack or absence of qualities such as predictability and reliability makes it difficult to conduct this review at the desired level.
Article
Full-text available
This article focuses on the application of autonomous weapons (AWs) in defensive systems and, consequently, assesses the conditions of the legality of employing such weapons from the perspective of the right to self-defence. How far may humans exert control over AWs? Are there any legal constraints in using AWs for the purpose of self-defence? How does their use fit into the traditional criteria of self-defence? The article claims that there are no legal grounds to exclude AWs in advance from being employed to exercise the right to self-defence. In general, the legality of their use depends on how they were pre-programmed by humans and whether they were activated under proper circumstances. The article is divided into three parts. The first discusses how human control over AWs affects the legality of their use. Secondly, the article analyses the criteria of necessity and proportionality during the exercise of the right to self-defence in the context of the employment of AWs. Finally, the use of AWs for anticipatory, pre-emptive or preventive self-defence is investigated.
Article
Full-text available
The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of “excessiveness”. Some possible approaches to Proportionality compliance are then presented, such as restricting AWS operations to environments lacking civilian presence, using AWS in targeted strikes in which proportionality judgments are pre-made by human commanders, and a ‘price tag’ approach of pre-assigning acceptable collateral damage values to military hardware in conventional attritional warfare. The article argues that application of these three compliance methods would result in AWS achieving acceptable Proportionality compliance levels in many combat environments and scenarios, allowing AWS to perform most key tasks in conventional warfare.
Article
Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that they would necessarily violate the requirements of jus in bello, and should be prohibited by international law. I argue this view is mistaken and that our moral understanding of war is sufficient to determine blame for any wrongful killing done by autonomous weapons. Analyzing moral responsibility for autonomous weapons starts by recognizing that although they are capable of causing moral consequences, they are neither praiseworthy nor blameworthy in the moral sense. As such, their military role is that of a tool, albeit a rather sophisticated one, and responsibility for their use is roughly analogous to that of existing “smart” weapons. There will likely be some difficulty in managing these systems as they become more intelligent and more prone to unpredicted behavior, but the moral notion of shared responsibility and the legal notion of command responsibility are sufficient to locate responsibility for their use.
Article
Full-text available
This article reflects on securitization efforts with respect to ‘killer robots’, known more impartially as autonomous weapons systems (AWS). Our contribution focuses, theoretically and empirically, on the Campaign to Stop Killer Robots, a transnational advocacy network vigorously pushing for a pre-emptive ban on AWS. Although the Campaign marks exactly a decade of activity, there is still no international regime formally banning, or even purposefully regulating, AWS. Our objective is to understand why the Campaign has not been able to advance its disarmament agenda thus far, despite all the resources, means and support at its disposal. For achieving this objective, we challenge the popular assumption that strong stigmatization is the universally best strategy towards humanitarian disarmament. We investigate the consequences of two specifics present in AWS, which set them apart from processes and successes of the campaigns to ban anti-personnel landmines, cluster munitions, and laser-blinding weapons: the complexity of AWS as a distinct weapons category, and the subsequent circumvention of its complexity through the utilization of pop-culture, namely science fiction imagery. We particularly focus on two mechanisms through which such distortion has occurred: hybridization and grafting. These provide the conceptual basis and heuristic tools to unpack the paradox of over-securitization: success in broadening the stakeholder base in relation to the first mechanism and deepening the sense of insecurity in relation to the second one does not necessarily lead to the achievement of the desired prohibitory norm. In conclusion, we ask whether it is not time for a more epistemically-oriented expert debate with a less ambitious, lowest common denominator strategy as the preferred model of arms control for such a complex weapons category.
Article
Autonomous weapon systems, also known as killer robots, are advanced-technology weapons that draw on artificial intelligence and robotics to select and attack targets without the need for any human intervention. Autonomous weapon systems offer many advantages on the battlefield. Used pre-emptively in dangerous missions, they prevent civilian and military casualties. By acting entirely free of the weaknesses soldiers experience during armed conflict, such as frustration, revenge, anger, and fatigue, they radically change the course of war. Yet autonomous weapon systems that act without meaningful human control create problems for determining legal and criminal responsibility, and the making of life-and-death decisions by a machine calls the inviolability of human dignity into question. This study examines the problems that autonomous weapon systems create in international humanitarian law, with examples from today's armed conflicts. To this end, the international humanitarian law principles of distinction, proportionality, and precaution are discussed in detail. Whether autonomous weapon systems should be banned in line with the Martens Clause is examined in light of the debates in the international arena.
Article
Full-text available
Efforts to ban Autonomous Weapon Systems have so far been both unsuccessful and controversial. Meanwhile, the need to address the detrimental aspects of AWS development and proliferation continues to grow in scope and urgency. The article presents several regulatory solutions capable of addressing the issue while simultaneously respecting the requirements of military necessity and so attracting a broad consensus. Two much stricter solutions – regional AWS bans and the adoption of a no-first-use policy – are also presented as fallback strategies in case achieving AWS compliance with the Laws of Armed Conflict proves elusive. Together, the solutions presented form the outline of a flexible regulatory strategy able to adjust to different technological outcomes and provide a sensible compromise to resolve the current deadlock on the AWS issue.
Article
The aim of this study is to analyze how developing and transforming war technologies are reflected in the law of war. With the development of technology, artificial intelligence products and autonomous systems are now frequently used in every field. One of these fields is the military and defence domain. How the use of autonomous systems in these areas will affect the law of war has become a subject of debate. From this perspective, the questions of how the use of artificial intelligence technologies in military areas or active conflict zones affects the law of war, and whether it changes the law of war or attains the power to change it, need to be underlined. In light of this, the study discusses the effects of autonomous systems on the law of war within the framework of the relevant literature. Although the international law of armed conflict addresses conflict within a defined set of rules, the changing conjuncture creates uncertainty and concern among groups opposed to autonomous systems. Since there is no consensus on what autonomous systems are and what their scope is, shortcomings arise in evaluating and addressing these systems within the framework of the rules of the international law of armed conflict. For example, under the law of armed conflict the parties do not have full autonomy in choosing their methods of fighting; however, it is not clear whether autonomous systems will be counted among the prohibited weapons and methods of conflict.
Article
Full-text available
Though war is never a good thing, all things considered, there are times when it is arguably justified. Most obviously, providing direct military assistance to a victim of unjust aggression would constitute a rather clear case for military intervention. However, the providing of direct military assistance may in some cases be a prospect fraught with risks and dangers, rendering it politically (and possibly even morally) difficult for states to adequately justify such action. In this article I argue that autonomous weapons systems present a way past this dilemma, providing a method for delivering direct military assistance, but doing so in a way that is less politically overt and hostile than sending one's own combat units to aid a beleaguered state. Thus, sending autonomous weapon systems (AWS) presents an additional forceful measure short of war which states may employ, adding to the political options available for combating unjust aggression, and allowing one to provide direct assistance to victim states without necessarily bringing one's own state into the conflict. In making this argument I draw on the current Russian invasion of Ukraine as a running example.
Article
This paper aims, first, to describe and attempt a possible definition of the concept of autonomous weapons in international humanitarian law (IHL), in order to reveal the existing incompatibilities between these weapons and the principles of IHL. Second, it seeks to demonstrate the incompatibilities between autonomous weapons and the principles and norms that make up IHL. It also sets out the responsibility that falls on the various actors for violations of the principles and norms applicable to this type of autonomous weapon. Finally, it proposes decreeing their prohibition in future armed conflicts, both international and non-international. The hypothesis of this paper is that the use of autonomous weapons may trigger a greater number of conflicts around the world and, more worryingly, a greater number of victims and violations of IHL. Accordingly, and to meet the stated objectives, the doctrine, case law, and conventional and customary norms of IHL are analyzed in light of the use of autonomous weapons during armed conflicts.
Article
The article questions the compliance of autonomous weapons systems with international humanitarian law (IHL). It seeks to answer this question by analysing the application of the core principles of international humanitarian law to the use of autonomous weapons systems. As part of the discussion on compliance, the article also considers the implications of riskless warfare where non-human agents are used. The article presupposes that it is actually possible for AWS to comply with IHL in very broad and general terms. However, there is a need for discussion, acceptance, and institutionalization of the interpretation for classification of AWS, as well as expansion of the legal framework to cater to the advanced technology. This interpretation will also include a system for allocating and attributing responsibility for their use. The article's results demonstrate the legal consequences of developing and employing weapon systems capable of autonomously performing important functions like target selection and engagement, and the role of IHL and IHRL in regulating the use of these weapons, particularly human control over individual attacks.
Article
Autonomous weapon systems are artificial intelligence-based, modern weapon systems that can identify and destroy targets without meaningful human intervention. This article examines the human rights violations that may occur should autonomous weapon systems come into widespread use in law enforcement operations in the near future, and determines the positive obligations of states. States' positive human rights obligations, in line with the United Nations Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, can be listed as weapon selection and the duty of precaution, the official training of law enforcement officers, the procedural obligation, the right to explanation, and the right not to be subject to fully automated decisions. The research results of this article suggest that, in line with the existing case law of human rights courts, autonomous weapon systems cannot comply with the positive obligations attached to the right to life.
Chapter
Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This chapter provides one such framework. It identifies five principles – justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; reliable AI systems – and related recommendations to foster ethically sound uses of AI for national defence purposes.

Keywords: Artificial intelligence; Control; Defence; Digital ethics; Ethical principles; Fairness; Just war theory; Responsibility; Reliability
Article
Recently, a military robot autonomously took the decision to attack enemy troops in the Libyan war without waiting for any human intervention. Russia is also suspected of using such robots in the current war against Ukraine. This news has highlighted the possibility of radical changes in war scenarios. Using a Catholic perspective, we will analyze these new challenges, indicating the anthropological and ethical bases that must lead to the prohibition of autonomous “killer robots” and, more generally, to the overcoming of the just war theory. We will also point out the importance of Francis of Assisi, whom the encyclical Fratelli tutti has proposed again as a model for advancing towards a fraternal and pacified humanity.
Article
A window into the history of international humanitarian law scholarship, the ICRC Library's collections capture over 150 years of debates and developments related to the branch of international law that protects those who do not, or no longer, take part in hostilities. Yet, among the 41,000 references in the Library's collections, only a handful of recent publications focus on how this protection applies to persons with disabilities. In this article, two ICRC reference librarians take stock of this gap in their collections and consider its implications.
Chapter
Computational methods such as machine learning and especially artificial intelligence will lend weapon systems a new quality compared with existing ones with automated/autonomous functions. To regulate weapon systems with autonomous functions using the tools of arms control, a new approach is necessary: it must be focused on the human role in decision-making processes. Despite this focus, the enabling technologies involve some specific challenges regarding the scope and verification of regulation. While technology will not solve problems created by the use of technology, it may be able to offer some remedies.
Chapter
This chapter introduces artificial intelligence (AI) and machine learning as major enablers of military innovation, especially regarding autonomy in weapons systems. It discusses the potential of AI where sensing, decision-making and acting are concerned. It also sheds light on the risks involved and questions claims about the effectiveness, reliability and trustworthiness of AI in military settings.
Article
Full-text available
In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, like ICRC and NATO. The analysis highlights that the definitions draw focus on different aspects of AWS and hence lead to different approaches to address the ethical and legal problems of these weapons systems. This approach is detrimental both in terms of fostering an understanding of AWS and in facilitating agreement around conditions of deployment and regulations of their use and, indeed, whether AWS are to be used at all. We draw from the comparative analysis to identify essential aspects of AWS and then offer a definition that provides a value-neutral ground to address the relevant ethical and legal problems. In particular, we identify four key aspects—autonomy; adapting capabilities of AWS; human control; and purpose of use—as the essential factors to define AWS and which are key when considering the related ethical and legal implications.
Article
Blockchain technology has applications that could revolutionize political and economic governance. Although most of the academic literature on blockchain has focused on Bitcoin, there is a need to examine the feasibility of new humanitarian applications. This study proceeds in two steps. First, it surveys current theoretical and practical work on how blockchain can be used to help protect the human rights of migrants and refugees, primarily through the creation of digital identities. It then critically examines two major cases: the Building Blocks initiative of the World Food Programme in Jordan and the Rohingya Project. We find that blockchain can be useful in empowering vulnerable individuals, but that the empowerment of organizations creates potential human rights risks, such as infringements of privacy and discrimination. Adequate safeguards should therefore be in place to ensure that blockchain initiatives serve their true purpose of protecting the most vulnerable groups.
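To make the privacy safeguard concrete, the following is a minimal, hypothetical Python sketch, not drawn from either case study: only a hash of an identity credential is anchored on a ledger, so verifiers can check a credential's integrity while the personal data itself stays off-chain. All names and fields here are invented for illustration.

    import hashlib
    import json

    def credential_digest(credential: dict) -> str:
        """Hash a credential so that only its digest, not the
        personal data itself, is anchored on a public ledger."""
        canonical = json.dumps(credential, sort_keys=True).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest()

    # Off-chain: the individual (or a trusted NGO) holds the raw credential.
    credential = {"holder": "did:example:1234", "attestation": "vaccination-record"}

    # On-chain: only the digest is anchored; the ledger never sees the data.
    anchored = credential_digest(credential)

    def verify(presented: dict, anchored_digest: str) -> bool:
        """A verifier recomputes the digest of a presented credential
        and compares it against the anchored value."""
        return credential_digest(presented) == anchored_digest

    assert verify(credential, anchored)

The design choice this illustrates is the one the authors' privacy concern points toward: integrity checking without putting identity attributes on an immutable, potentially public record.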
Article
Full-text available
DEHUMANIZATION AND DEPOLITICIZATION. The paper takes up the issue of "depoliticization" through dehumanization. Its starting point is the observation that the phenomena of "politicization" and "the political" are relatively well recognized in the scientific literature, whereas the problem of depoliticization has not yet been adequately explored. The concept of depoliticization refers to the conditions, criteria, and mechanisms that are key to reducing or depriving a given phenomenon of its political status. Depoliticization does not mean (or at least does not have to mean) the effective removal of a phenomenon from the political sphere, but rather circumstances or actions whose political impact is not obvious. The article focuses on depoliticization through dehumanization and, more specifically, on how the denial of groups' full humanness makes it possible to reduce their status as political subjects, and thus to treat their claims or interests as improper or inadequate for political debate. The relations between the processes of humanization and political subjectification, and between dehumanization and political objectification, are also discussed.
Chapter
This chapter provides an introduction to this book (Law and Artificial Intelligence: Regulating AI and Applying it in Legal Practice) and an overview of all its chapters. The book deals with the intersection of law and Artificial Intelligence (AI). Law and AI interact in two different ways, both covered in this book: law can regulate AI, and AI can be applied in legal practice. AI is a new generation of technologies, mainly characterized by being self-learning and autonomous. This means that AI technologies can continuously improve without (much) human intervention and can make decisions that are not pre-programmed. Artificial Intelligence can mimic human intelligence, but does not necessarily do so. Similarly, when AI is implemented in physical technologies, such as robots, it can mimic human beings (e.g., socially assistive robots acting like nurses), but it can also look completely different if it has a more functional shape (e.g., an industrial arm that picks boxes in a factory). AI without a physical component can be barely visible to end users, yet evident to those who create and manage the system. In all its different shapes and sizes, AI is rapidly and radically changing the world around us, which may call for regulation in different areas of law. Relevant areas in public law include non-discrimination law, labour law, humanitarian law, constitutional law, immigration law, criminal law and tax law. Relevant areas in private law include liability law, intellectual property law, corporate law, competition law and consumer law. At the same time, AI can be applied in legal practice. In this book, the focus is mostly on legal technologies, such as the use of AI in legal teams, law-making, and legal scholarship. This introductory chapter concludes with an overview of the structure of the book, which contains introductory chapters on what AI is, chapters on how AI is (or could be) regulated in different areas of both public and private law, chapters on applying AI in legal practice, and chapters on the future of AI and what these developments may entail from a legal perspective.
Article
Full-text available
Written as a comment on Brendan Gogarty's and Meredith Hagger's 2008 article entitled The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air, this brief article explores the international humanitarian law implications of the growing trend toward the deployment of autonomous weapon systems. It argues that while technological development has been impressive and continues to advance at a rapid pace, qualitative determinations remain structurally difficult, if not impossible, for computer technology. In light of this, the deployment of fully autonomous weapon systems is illegal, quite apart from the ethical and political challenges that this development presents.
Book
Expounding on the results of the author's work with the US Army Research Office, DARPA, the Office of Naval Research, and various defense industry contractors, Governing Lethal Behavior in Autonomous Robots explores how to produce an "artificial conscience" in a new class of robots, "humane-oids": robots that could potentially perform more ethically than humans on the battlefield. The author examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement. The book presents robot architectural design recommendations for: post facto suppression of unethical behavior; behavioral design that incorporates ethical constraints from the outset; the use of affective functions as an adaptive component in the event of unethical action; and a mechanism that identifies and advises operators regarding their ultimate responsibility for the deployment of autonomous systems. It also examines why soldiers fail in battle regarding ethical decisions; discusses the opinions of the public, researchers, policymakers, and military personnel on the use of lethality by autonomous systems; provides examples that illustrate autonomous systems' ethical use of force; and includes the relevant Laws of War. Aiming to help ensure that warfare is conducted justly with the advent of autonomous robots, the book argues that the first steps toward creating robots that not only conform to international law but outperform human soldiers in their ethical capacity are within reach. It supplies the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios needed to design and construct an autonomous robotic system capable of using lethal force ethically. Ron Arkin was quoted in a November 2010 New York Times article about robots in the military.
Article
Unmanned aircraft (UA) have evolved from simple reconnaissance assets into capable and persistent strike platforms in a short period of time. Looking ahead to the year 2025, what technologies will help the U.S. military reduce the time it takes to find, track, and neutralize a target with UA? The United States can have the greatest impact in accelerating the kill chain by investing in research that advances autonomous UA operations and enables a Mobile Ad-hoc Network (MANET) using UA as communications nodes. This MANET should interface with the Internet to provide maximum warfighter access and relay information via a combination of radio frequency, laser communication, and satellite communication links. As warfighters, we tend to focus on kinetic effects, such as improving munitions, rather than on unglamorous but critical tasks such as gathering, analyzing, and distributing vital information to the right person for action. Autonomous UA operations will reduce manpower and bandwidth requirements, while an improved airborne communications network will increase situational awareness for warfighters and decrease reliance on satellites. The military often seeks to "revolutionize" warfighting via cutting-edge technologies, but it can often gain more, with less risk, by selectively improving existing technologies to promote autonomy and interoperability. Ironically, accelerating the kill chain with capable sensor-shooters may be delayed more by political, cultural, and service-doctrine biases than by technological barriers. UA airspace integration, deconfliction methods, and inter-service command and control still warrant attention. By overcoming both technical and cultural barriers, the United States can accelerate the kill chain and anticipate enemy actions instead of reacting to attacks.
Article
A variety of ethical objections have been raised against the military employment of uninhabited aerial vehicles (UAVs, drones). Some of these objections are technological concerns over UAVs' ability to function on par with their inhabited counterparts. This paper sets such concerns aside and instead focuses on supposed objections to the use of UAVs in principle. I examine several such objections currently on offer and show them all to be wanting. Indeed, I argue that we have a duty to protect an agent engaged in a justified act from harm to the greatest extent possible, so long as that protection does not interfere with the agent's ability to act justly. UAVs afford precisely such protection. Therefore, we are obligated to employ UAV weapon systems if it can be shown that their use does not significantly reduce a warfighter's operational capability. Of course, if a given military action is unjustified to begin with, then carrying out that act via UAVs is wrong, just as it would be with any weapon. But the point of this paper is to show that there is nothing wrong in principle with using a UAV and that, other things being equal, using such technology is in fact obligatory.
Article
Plans to automate killing by using robots armed with lethal weapons have been a prominent feature of most US military forces' roadmaps since 2004. The idea is a staged move from 'man-in-the-loop' to 'man-on-the-loop' to full autonomy. While this may yield considerable military advantages, the policy raises ethical concerns regarding potential breaches of International Humanitarian Law, including the Principle of Distinction and the Principle of Proportionality. Current applications of remotely piloted robot planes, or drones, offer lessons in how automated weapons platforms could be misused by extending the range of legally questionable targeted killings by security and intelligence forces. Moreover, the alleged moral disengagement of remote pilots will only be exacerbated by the use of autonomous robots. Leaders in the international community need to address these difficult legal and moral issues now, before the current mass proliferation of development reaches fruition.
Article
While modern states may never cease to wage war against one another, they have recognized moral restrictions on how they conduct those wars. These "rules of war" serve several important functions in regulating the organization and behavior of military forces, and shape political debates, negotiations, and public perception. While the world has become somewhat accustomed to the increasing technological sophistication of warfare, it now stands at the verge of a new kind of escalating technology, autonomous robotic soldiers, and with them come new pressures to revise the rules of war. This paper considers the fundamental issues of justice involved in the application of autonomous and semi-autonomous robots in warfare. It begins with a review of just war theory, as articulated by Michael Walzer [1], and considers how robots might fit into the general framework it provides. In so doing, it considers how robots, "smart" bombs, and other autonomous technologies might challenge the principles of just war theory, and how international law might be designed to regulate them. I conclude that deep contradictions arise in the principles intended to govern warfare and in our intuitions regarding the application of autonomous technologies to war fighting.
Article
The United States Army's Future Combat Systems Project, which aims to manufacture a 'robot army' ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of the decision to send artificially intelligent robots into war by asking whom we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, the machine itself. I argue that in fact none of these is ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would be unethical to deploy such systems in warfare.
Article
Concern over and interest in the ethical design and regulation of autonomous lethal robotics is currently growing. In addition, the design of any tele-operated weapons system has significant implications for the ethical decision-making of its users. Three approaches to designing lethal tele-operated systems are considered, along with their potential application to such systems. The first two approaches have been described previously. The third is a new User-Centered Design (UCD) approach that endorses modeling the users of these systems and then using these models to motivate design decisions; it is a design strategy based on empirical observation of how users perform tasks, and on using a task model in designing interfaces and systems.
Article
The use of unmanned aerial vehicles (UAVs) in the conflict zones of Iraq and Afghanistan for both intelligence gathering and "decapitation" attacks has been heralded as an unprecedented success by U.S. military forces. There is a demand for substantially increased production of Predator MQ-1 and Reaper MQ-9 drones, and funding has been boosted to enable the training of many more operators. But perhaps there is a danger of over-trusting and overreaching the technology, particularly with respect to protecting innocents in war zones. There are ethical issues and pitfalls. It is time to reassess the meanings of discrimination and proportionality in the deployment of UAVs in 21st century warfare.
Article
Today, computer systems terminate Medicaid benefits, remove voters from the rolls, exclude travelers from flying on commercial airlines, label (and often mislabel) individuals as deadbeat parents, and flag people as possible terrorists based on their email and telephone records. But when an automated system rules against an individual, that person often has no way of knowing whether a defective algorithm, erroneous facts, or some combination of the two produced the decision. Research showing strong psychological tendencies to defer to automated systems suggests that a hearing officer's check on computer decisions will have limited value. At the same time, automation impairs participatory rulemaking, the traditional stand-in for individualized due process. Computer programmers routinely alter policy when translating it from human language into computer code. An automated system's opacity compounds this problem by preventing individuals and courts from ascertaining the degree to which the code departs from established rules. Programmers are thus delegated vast and effectively unreviewable discretion in formulating policy. Professor Citron discusses a concept of technological due process that can vindicate the norms underlying last century's procedural protections. A carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. Her proposal provides a framework of mechanisms capable of enhancing the accuracy of rules embedded in automated decision-making systems.
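The claim that policy drifts when translated into code is easy to illustrate. Below is a minimal, hypothetical Python sketch; the benefit, threshold, and rule are invented for illustration and do not come from the paper. A statute grants benefits to claimants with income "at or below" a limit, while the coded version silently substitutes a strict inequality, an opaque divergence of exactly the kind technological due process is meant to surface.

    INCOME_THRESHOLD = 1500  # assumed statutory monthly limit (illustrative)

    def eligible_as_written(income: float) -> bool:
        # Faithful translation of the statute: "at or below the threshold".
        return income <= INCOME_THRESHOLD

    def eligible_as_coded(income: float) -> bool:
        # Subtle drift introduced in translation: strict inequality.
        return income < INCOME_THRESHOLD

    # A claimant at exactly the threshold is treated differently by the
    # two versions, and nothing visible to the claimant reveals why.
    print(eligible_as_written(1500))  # True
    print(eligible_as_coded(1500))    # False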
Article
This paper, the third in a series, provides representational and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement. It is based upon extensions to existing deliberative/reactive autonomous robotic architectures, and includes recommendations for (1) post facto suppression of unethical behavior, (2) behavioral design that incorporates ethical constraints from the onset, (3) the use of affective functions as an adaptive component in the event of unethical action, and (4) a mechanism in support of identifying and advising operators regarding the ultimate responsibility for the deployment of such a system.
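As a concrete, deliberately simplified illustration of recommendation (1), the following Python sketch shows what a post facto governor might look like: a component that inspects an action already proposed by the robot's tactical controller and suppresses it unless every encoded constraint passes. The constraint functions, fields, and class names are hypothetical stand-ins, not the paper's actual formalism.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ProposedAction:
        target_is_combatant: bool   # stand-in for a distinction check
        expected_collateral: float  # stand-in estimate, 0..1
        military_advantage: float   # stand-in estimate, 0..1

    Constraint = Callable[[ProposedAction], bool]

    def distinction(a: ProposedAction) -> bool:
        # Only combatant targets may be engaged.
        return a.target_is_combatant

    def proportionality(a: ProposedAction) -> bool:
        # Expected collateral harm must not exceed the anticipated advantage.
        return a.expected_collateral <= a.military_advantage

    class EthicalGovernor:
        """Vets actions after they are proposed, vetoing any that fail
        an encoded constraint; a real system would also log each
        decision for the operator-responsibility mechanism in (4)."""

        def __init__(self, constraints: List[Constraint]):
            self.constraints = constraints

        def permit(self, action: ProposedAction) -> bool:
            return all(check(action) for check in self.constraints)

    governor = EthicalGovernor([distinction, proportionality])
    print(governor.permit(ProposedAction(True, 0.1, 0.9)))   # permitted
    print(governor.permit(ProposedAction(False, 0.1, 0.9)))  # suppressed

Note that such a veto layer can only be as sound as the estimates fed into it, which is precisely where the article's argument about human judgement applies.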
Mission Statement of the International Committee for Robot Arms Control
  • Jürgen Altmann
  • Peter Asaro
  • Noel Sharkey
  • Robert Sparrow