Chapter

Autonomous weapons systems: Living a dignified life and dying a dignified death

Authors:
Christof Heyns

Abstract

The ever-increasing power of computers is arguably one of the defining characteristics of our time. Computers affect almost all aspects of our lives and have become an integral part not only of our world but also of our very identity as human beings. They offer major advantages and pose serious threats. One of the main challenges of our era is how to respond to this development: to ensure that computers enhance, rather than undermine, human objectives. The imposition of force by one individual against another has always been an intensely personal affair – a human being was physically present at the point of the release of force and took the decision to release it. The use of force is inherently controversial because it intrudes on people's bodies and even their lives. Ethical and legal norms have developed over the millennia to determine when one human may use force against another, in peace and in war, and have assigned responsibility for violations of these norms. Perhaps the most dramatic manifestation of the rise of computer power is that we are on the brink of an era when decisions on the use of force against human beings – in the context of armed conflict as well as during law enforcement, lethal and non-lethal – could soon be taken by robots. Unmanned or human-replacing weapons systems first took the form of armed drones and other remote-controlled devices, which allowed human beings to be physically absent from the battlefield. Decisions to release force, however, were still taken by human operatives, albeit from a distance. The increased autonomy in weapons release now points to an era where humans will be able to be not only physically absent from the battlefield but also psychologically absent, in the sense that computers will determine when and against whom force is released. The depersonalization of the use of force brought about by remote-controlled systems is thus taken to the next level through the introduction of the autonomous release of force.


... Autonomous weapons systems, vehicles and robotics lead current discourse on safety, legal and ethical issues in AI and autonomy due to the high-consequence nature of these systems. The literature on autonomous weapons systems is replete with moral and ethical dilemmas and with debate on whether machines should have the capacity – or authority – to make meaningful decisions on the use or release of force, and whether these systems are compliant with International Humanitarian Law (IHL) (Heyns, 2016; Sharkey, 2016). Autonomous vehicles and robotics tend to lead discourse on safety, given the increased drive for commercial development of driverless cars and robotics technologies. ...
... Autonomous use or release of force by machines is subject to extensive moral, ethical and human rights considerations, guided by questions as to whether machines should have the power to determine life and death, or to inflict injury or harm on humans. The legality of the autonomous release of force is also contested, highlighting the limitations of machines in exercising judgement, applying meaningful control and complying with rules of engagement, International Humanitarian Law (IHL) and principles of war such as proportionality (Heyns, 2016; Sharkey, 2016). The capacity for humans to remain 'in the loop' so as to override the system is also critically important – both conceptually and operationally – and has a profound impact on the development of autonomous response technologies. ...
Technical Report
The purpose of this project was three-fold: first, to provide a thorough articulation of what is meant by the broadly used term 'Artificial Intelligence', including terminology, technical dissection, and progressive computational techniques along the spectrum of AI; second, to gain an understanding of how AI is embedded in current security technologies; and third, to articulate the opportunities for technology enhancements using AI developments and the potential risks associated with such development and adoption within security technologies.
... The increased autonomy in weapons has brought warfare to a point where humans are able to be not only physically absent from the battlefield but also psychologically absent, in the sense that computers determine when and against whom to strike on the battlefield (Heyns, 2016). ...
... Finally, it is argued that even if the correct target is hit and the force used is not excessive, it remains inherently wrong for a machine to make the determination that such force be used against a human being, i.e. to delegate to machines arbitrary control over the deprivation of the right to life (Heyns, 2016). ...
Preprint
In the course of history, nations have continuously made advancements in the technology of warfare, each more lethal and efficient than the last. From bows and arrows through rifles and machine guns to nuclear weapons, military technology has become more advanced and deadlier than ever before. Warfare has evolved from man without machines to man with machines, and now to machines without man. The strongest nations of the world have entered a new arms race with the development and deployment of autonomous weapons systems built on robotic engineering and algorithmic technology. Using a techno-ethical approach (an interdisciplinary research method that integrates perspectives from communication, the social sciences, information studies, technology studies, history, political science, legal studies, applied ethics and philosophy to provide insights into technology's effects on society), this paper highlights the ethical dilemmas involved in the use of autonomous weapons systems. It argues that as long as moral agency lies with human operators, and as long as machines lack the capability to form intent or account for the outcomes of their deployment, the continued development and use of autonomous weapons will create ethical, legal, social, economic and political challenges that the world needs to address in order to avoid catastrophic results.
... As we argue in the rest of this article, the outcome of AWS is predictable only to an extent. Thus, it is conceivable that an AWS used for non-lethal purposes may produce lethal effects (Coleman, 2015; Enemark, 2008a, 2008b; Kaurin, 2010, 2015; Heyns, 2016a, 2016b). This is because there is "a potential disconnect between the intention behind the use of a weapon and the consequences thereof" (Enemark, 2008b, 201). ...
Article
Full-text available
In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility that is attributed to individuals in a justified and fair way and that is accepted by them as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful moral responsibility can only be discharged by human agents who are willing to take a moral gambit: they decide to design, develop or deploy AWS despite the uncertainty about the effects an AWS may produce, hoping that unintended, unwanted or unforeseen outcomes never occur, but also accepting that they will be held responsible should such outcomes occur. We argue that, while a moral gambit is permissible for the use of non-lethal AWS, this is not the case for the actions of lethal autonomous weapon systems.
... The influence of The Handmaid's Tale on the women's movement is a well-known example (Armstrong, 2018). ... (Heyns, 2016; Sharkey, 2016; Suchman, 2020) ...
Article
Full-text available
The acceleration of automation due to advances in artificial intelligence (AI) and algorithms is anticipated to reduce the number of jobs available to humans. Beneath this outlook coexist dystopian concerns that humans will be deprived of their opportunities to work and utopian expectations that humans will be liberated from work. Although both positions assume AI to be an autonomous entity, recent studies criticize this belief as a mere myth, underlining the interdependent relationship between humans and machines. Discussions in this vein focus on illuminating the labor problems facing human workers and finding solutions to them, but they tend to rely on top-down solutions from the outside rather than taking issue with the algorithmic governmentality that AI exercises over humans. To overcome these limitations, diagnose the present state of automation and labor, and predict their future, we need a comprehensive understanding of current labor conditions and future forms of work, attentive to the trends in job change brought about by AI and automation. In response to this demand, this paper analyzes, through the near-future SF film Sleep Dealer and in connection with current issues, how human labor is degraded to a cog in the machine and itself automated. Through the film's drone pilot and other labor cases, I critically examine how AI algorithmic governmentality operates to efficiently control and monitor human workers, hidden under the guise of corporate mythical narrative strategies and automation. In the conclusion, as an alternative, I explore the possibility of resistance that combines struggle in public arenas, such as union formation and street protests, with networked resistance that converts governing algorithms into algorithms of resistance.
... From this point of view, autonomous targeting is unacceptable because it "objectifies" human beings, reducing them to algorithmically processed "data points", thereby systematically denying their inherent value as human beings (Moyes 2019, 6). Taking human life is ethically and legally justifiable only if the decision rests on human judgement, for only human decision-making offers a guarantee that the values at stake (human life, physical integrity and so on) can be fully appreciated (Heyns 2016). ...
Article
Full-text available
The ‘weaponisation’ of artificial intelligence and robotics, especially their convergence in autonomous weapons systems (AWS), is a matter of international concern. Debates on AWS have revolved around (i) the identification of the hallmarks of AWS with respect to other weapons; (ii) what it is that makes the destructive force of AWS especially troublesome from a normative standpoint; and (iii) steps the international community can take to allay these concerns. Of particular concern is the need to preserve the ‘human element’ in the use of force. A differentiated approach to this latter issue, at once principled and prudential, may pave the way to a legally binding instrument to regulate AWS by establishing meaningful human control over all weapons systems.
... In its more complex forms, the argument holds that there is a fundamental human right not to be killed by a machine. From this perspective, human dignity, which is even more fundamental than the right to life, demands that a decision to take human life requires consideration of the circumstances by a human being (Arkin et al. 2012; Heyns 2016). A related claim is that meaningful human control of an autonomous weapon requires that a human must approve the target and be engaged at the moment of combat. ...
... In the wake of the industrial revolution of the eighteenth and nineteenth centuries, human rights entailed mobilisations aimed at securing certain social minimums, as well as at reducing obstacles to the guarantee and enjoyment of people's rights (Heyns, C. 2016). ...
Article
Full-text available
It is considered pertinent to examine the elements that cut across ICTs, since they reveal overlaps and correlations and contribute to visualising the potential conflicts that could arise for the protection of personal data and, through this right, for the dignity of persons. Focusing on the person and their rights, the article reflects on the various technological advances that have a direct impact on the lives of human beings, advocates a redefinition of human dignity in its everyday coexistence with technology, and recognises that such an exercise corresponds to an Ethics of Dignity. This ethics must be assumed by States, and by all individuals and social sectors, in keeping with the values universally recognised in the Charter of the United Nations.
... Given current technology, it seems to me that an autonomous system such as Aegis can adequately express the will of the ship's commander as described in [10]. If such systems are to be classified as automated rather than autonomous, then it should be recognized that some people want to ban offensive "automated" weapons systems as well as offensive "autonomous" ones on moral grounds, such as the "dignitarian" argument presented in [5]. That argument claims that delegating lethal decisions to machinery violates a fundamental human right to dignity even more basic than the right to life. ...
Conference Paper
This short paper provides two partial drafts for a Protocol VI that might be added to the existing five Protocols of the Convention on Certain Conventional Weapons (CCW) to regulate "lethal autonomous weapons systems" (LAWS). Draft A sets the line of tolerance at a "human in the loop" between the critical functions of select and engage. Draft B sets the line of tolerance at a human in the "wider loop" that includes the critical function of defining target classes as well as select and engage. Draft A represents an interpretation of what NGOs such as the Campaign to Stop Killer Robots are seeking to get enacted. Draft B is a more cautious draft based on the Dutch concept of "meaningful human control in the wider loop" that does not seek to ban any system that currently exists. Such a draft may be more likely to achieve the consensus required by the UN CCW process. A list of weapons banned by both drafts is provided along with the rationale for each draft. The drafts are intended to stimulate debate on the precise form a binding instrument on LAWS would take and on what LAWS (if any) should be banned and why.
... The political and academic debates on AWS focus predominantly on how AWS challenge international law (Asaro, 2012; Grut, 2013; Kastan, 2013; Noone and Noone, 2015; Sehrawat, 2017) as well as ethics (Heyns, 2016; Johnson and Axinn, 2013; Leveringhaus, 2016; Sharkey, 2008). While both dimensions are interrelated, and arguments for why AWS are legally problematic in terms of International Humanitarian Law (IHL) are also motivated by ethical concerns, such as human dignity and the question of whether machines should ultimately have the decision-making power to end human life, the current debate clearly takes place in the margins of international law. ...
Article
This article considers the role of norms in the debate on autonomous weapons systems (AWS). It argues that the academic and political discussion is largely dominated by considerations of how AWS relate to norms institutionalised in international law. While this debate on AWS has produced insights on legal and ethical norms and explored options for a possible regulation or ban, it neglects to investigate how complex human-machine interactions in weapons systems can set standards of appropriate use of force, which are politically and normatively relevant but take place outside of formal, deliberative law-setting. While such procedural norms are already emerging in the practice of contemporary warfare, the increasing technological complexity of AI-driven weapons will add to their political-normative relevance. I argue that public deliberation about, and political oversight and accountability of, the use of force is at risk of being consumed and normalised by functional procedures and perceptions. This can have a profound impact on the future of remote warfare and security policy. The more control shifts from humans to machines in the form of algorithms or machine learning, the less the definition of what the use of force ought to be in practice remains subject to legal-political accountability and authority.
... Christof Heyns articulates the human dignity argument: 'Death by algorithm means that people are treated simply as targets and not as complete and unique human beings' (Heyns 2016). The human beings physically in the vehicle have been characterised in the literature primarily as the owners. ...
Article
Full-text available
With their prospect for causing both novel and known forms of damage, harm and injury, the issue of responsibility has been a recurring theme in the debate concerning autonomous vehicles. Yet the discussion of responsibility has obscured the finer distinctions both between the underlying concepts of responsibility and in their application to the interaction between human beings and artificial decision-making entities. By developing meaningful distinctions and examining their ramifications, this article contributes to this debate by refining the underlying concepts that together inform the idea of responsibility. Two different approaches are offered to the question of responsibility and autonomous vehicles: targeting and risk distribution. The article then introduces a thought experiment which situates autonomous vehicles within the context of crash-optimisation impulses and coordinated or networked decision-making. It argues that guiding ethical frameworks overlook compound or aggregated effects which may arise and which can lead to subtle forms of structural discrimination. Insofar as such effects remain unrecognised by the legal systems relied upon to remedy them, the potential for societal inequalities is increased and entrenched, and situations of injustice and impunity may be unwittingly maintained. This second set of concerns may represent a hitherto overlooked type of responsibility gap arising from inadequate accountability processes capable of challenging systemic risk displacement.
Article
Full-text available
Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI "rational" efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining "meaningful" human control over the war machine. This Panglossian assumption neglects the psychological features of human-machine interactions, the pace at which future AI-enabled conflict will be fought, and the complex and chaotic nature of modern war. The article expounds key psychological insights into human-machine interactions to elucidate how AI shapes our capacity to think about future warfare's political and ethical dilemmas. It argues that, through the psychological process of human-machine integration, AI will not merely force-multiply existing advanced weaponry but will become a de facto strategic actor in warfare: the "AI commander problem."
Article
Full-text available
In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlooking this principle means discounting the obligations that combatants have towards one another in times of war. We argue that the requirement of minimal force is an important requirement for considering ethical uses of force. In particular, we distinguish between lethal and non-lethal purposes of use of force and introduce the prospect of non-lethal AWS before reviewing a number of challenges which AWS pose with respect to their non-lethal use. The challenges arise where AWS generate unpredictable outcomes impinging upon the situational awareness required of combatants to ensure that their actions meet the requirement of minimal force. We conclude with a call for further research on the ethical implications of non-lethal uses of AWS as a necessary contribution for assessing the moral permissibility of AWS.
Article
Full-text available
Though war is never a good thing, all things considered, there are times when it is arguably justified. Most obviously, providing direct military assistance to a victim of unjust aggression would constitute a rather clear case for military intervention. However, providing direct military assistance may in some cases be a prospect fraught with risks and dangers, rendering it politically (and possibly even morally) difficult for states to adequately justify such action. In this article I argue that autonomous weapon systems (AWS) present a way past this dilemma, providing a method for delivering direct military assistance, but doing so in a way that is less politically overt and hostile than sending one's own combat units to aid a beleaguered state. Thus, sending AWS presents an additional forceful measure short of war which states may employ, adding to the political options available for combating unjust aggression and allowing one to provide direct assistance to victim states without necessarily bringing one's own state into the conflict. In making this argument I draw on the current Russian invasion of Ukraine as a running example.
Article
Over the last decade, autonomous weapon systems (AWS), also known as 'killer robots', have been the subject of widespread debate. These systems raise various ethical, legal, and societal concerns, with arguments both in favor of and opposed to the weaponry. Consequently, an international policy debate arose out of an urge to ban these systems. AWS are widely discussed at the Human Rights Council, at the United Nations General Assembly First Committee on Disarmament and International Security, and at gatherings of the Convention on Certain Conventional Weapons (CCW), in particular the Expert Meetings on Lethal Autonomous Weapons Systems (LAWS). Early skepticism towards the use of AWS brought a potential ban to the forefront of policy-making decisions with the support of the Campaign to Stop Killer Robots, launched by Human Rights Watch (HRW) in 2013. The movement is supported by Amnesty International, Pax Christi International, and the International Peace Bureau, among others. This campaign has catalyzed an international regulation process at the level of the United Nations (UN). Both a new protocol to the CCW and a new international treaty have been considered. However, a lack of consensus stalls the process and, as such, leaves AWS in a regulatory gray zone.
Article
Full-text available
The need for normative change is rarely self-evident but requires the sustained efforts of actors to create a demand for action. With emerging technologies such as autonomous weapons systems (AWS), the challenge is even greater given the early stages of development and use of these systems. This places unusual demands on actors to present evidence for the nature, scale and severity of a problem. Suggesting that the epistemic bases of norm-building are poorly understood, the article introduces a practice-theoretical approach to cast light on how international organisations cope with the uncertainty surrounding AWS. The key claim is that the emergence of anticipatory norms depends upon forward-looking epistemic practices that produce knowledge about future governance objects and create a demand for preventive action. Analysing the role of the United Nations Institute for Disarmament Research (UNIDIR), I argue that attempts to de-science-fictionalise the issue, rather than futuristic scenarios, may prove integral to propelling the emergence of anticipatory norms.
Article
The development of new technologies has always found its first application in warfare, from the invention of the bow and arrow, through the discovery of gunpowder, to the use of unmanned aerial vehicles in the "War on Terror." The "successful" use of drones in the targeted killings of "terrorists" gave additional impetus to the development of new types of autonomous weapons that completely replace flesh-and-blood soldiers on the battlefield. Currently, there is significant controversy over weapons that are fully autonomous in carrying out military operations. They can autonomously decide on the use of deadly force against "enemy" human beings. This kind of autonomy causes numerous controversies, not only legal but also ethical. Moreover, it calls into question the very essence of man, i.e., whether the "killer robot" is the next evolutionary stage in the development of the human species or a technological return to barbarism. This paper will analyze some of the above legal and ethical dilemmas that await us in the near future.
Conference Paper
The development and use of Artificial Intelligence (AI) within security technologies presents a number of opportunities and risks for security professionals. The study investigated the use of AI in security technologies within the functional categories of Observing, Detecting, Controlling and Responding technologies, exploring the risks associated with enhanced AI across these technologies. In exploring these risks, the study developed a novel scale to define the level of intelligent autonomy a technology may possess during operation, as well as the degree of human involvement retained at each level across the various stages of the AI operational cycle. The Security Technology Intelligent Autonomy Scale may be used to consider the risks which may emerge at each level, with a weighting towards the consequences of outcomes.
Article
Full-text available
The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with an approaching dramatic change in the nature of warfare. What becomes particularly important, and ever more intensely contested, is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While there is no global regime banning this technology, regulatory attempts at establishing a ban have intensified, along with acts of resistance and blocking coalitions. This article aims to reflect on the prospects and limitations, as well as the ethical and legal intensity, of the emerging regulatory framework. To allow for such an investigation, a power-analytical approach to studying international security regimes is utilized.
Article
Full-text available
Arguments from human dignity feature prominently in the Lethal Autonomous Weapons Systems (LAWS) moral feasibility debate, even though there exists considerable controversy over their role and soundness, and the notion of dignity remains under-defined. Drawing on the work of Dieter Birnbacher, I fix the sub-discourse as referring to the essential value of human persons in general, and to postulated moral rights of combatants not covered within the existing paradigm of International Humanitarian Law in particular. I then review and critique dignity-based arguments against LAWS: the argument from the faulty targeting process, the argument from objectification, the argument from underappreciation of the value of human life and the argument from the absence of mercy. I conclude that the argument from the absence of mercy is the only dignity-based argument that is both valid and irreducible to another class of arguments within the debate, and that it offers insufficient justification for a global ban on LAWS.
Article
The emergence of autonomous weapons systems (AWS) is increasingly in the academic and public focus. Research largely focuses on the legal and ethical implications of AWS as a new weapons category set to revolutionize the use of force. However, the debate on AWS neglects the question of what introducing these weapons systems could mean for how decisions are made. Pursuing this from a theoretical-conceptual perspective, the article critically analyzes what impact AWS can have on norms as standards of appropriate action. The article draws on the Foucauldian “apparatus of security” to develop a concept that accommodates the role of security technologies for the conceptualization of norms guiding the use of force. It discusses to what extent a technologically mediated construction of a normal reality emerges in the interplay of machinic and human agency and how this leads to the development of norms. The article argues that AWS provide a specific construction of reality in their operation and thereby define procedural norms that tend to replace the deliberative, normative-political decision on when, how, and why to use force. The article is a theoretical-conceptual contribution to the question of why AWS matter and why we should further consider the implications of new arrangements of human-machine interactions in IR.
Chapter
Full-text available
When the then Federal Minister of Defence Thomas de Maizière announced in 2012 that the purchase of armed drones for the Bundeswehr would be examined, peace-movement groups in the churches and in society at large were more alarmed than they had been by other armament projects. The use of armed drones was associated, on the one hand, with the practice of actually or supposedly illegal targeted killings of terrorists in Afghanistan and Pakistan by the United States of America; on the other hand, there were intuitive defensive reactions against an instrument of combat that sneaks up through the airspace, spies on people and attacks them with lethal force. Such feelings often express a morally sensitive core. Morality itself, however, can be problematic if it is not in turn reflected upon in ethics.
Technical Report
Full-text available
The present ICRAC Report contributes to moving forward the debate on Meaningful Human Control (MHC) of Autonomous Weapons Systems (AWS) (i) by filling the MHC placeholder with more precise content, and (ii) by identifying on this basis some key aspects of any legal instrument enshrining the MHC requirement (such as, e.g., a Protocol VI to the CCW).
Article
Full-text available
One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro in Int Rev Red Cross 94(886):687–709, 2012; Docherty in Shaking the foundations: the human rights implications of killer robots, Human Rights Watch, New York, 2014; Heyns in S Afr J Hum Rights 33(1):46–71, 2017; Ulgen in Human dignity in an age of autonomous weapons: are we in danger of losing an ‘elementary consideration of humanity’?, 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher in Autonomous weapons systems: law, ethics, policy, Cambridge University Press, Cambridge, 2016; Pop in Autonomous weapons systems: a threat to human dignity?, 2018; Saxton in (Un)dignified killer robots? The problem with the human dignity argument, 2016). This paper critically examines the relationship between human dignity and AWS. Three main types of objection to AWS are identified: (i) arguments based on technology and the ability of AWS to conform to international humanitarian law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; (iii) consequentialist arguments about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect. There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity.
Article
Full-text available
Autonomous weapons systems (AWS) are emerging as key technologies of future warfare. So far, academic debate has concentrated on the legal-ethical implications of AWS, but this debate does not capture how AWS may shape norms by defining diverging standards of appropriateness in practice. In discussing AWS, the article formulates two critiques of constructivist models of norm emergence: first, constructivist approaches privilege the deliberative over the practical emergence of norms; and second, they overemphasise fundamental norms rather than also accounting for procedural norms, which we introduce in this article. Elaborating on these critiques allows us to respond to a significant gap in research: we examine how standards of procedural appropriateness emerging in the development and usage of AWS often contradict fundamental norms and public legitimacy expectations. Normative content may therefore be shaped procedurally, challenging conventional understandings of how norms are constructed and considered as relevant in International Relations. In this, we outline the contours of a research programme on the relationship between norms and AWS, arguing that AWS can have fundamental normative consequences by setting novel standards of appropriate action in international security policy.
Article
Full-text available
Autonomous weapons are weapons that, once activated, can select and engage targets without further human intervention. This raises the possibility that computers will determine whether people live or die. The possible use of autonomous weapons against humans in armed conflict clearly has potential right to life implications. This contribution argues that the right to dignity angle must also be brought into play. The first concern raised by autonomous weapons is 'Can they do it?': can autonomous targeting conform to the requirements of international humanitarian law, in particular the rules of distinction and proportionality? If machines cannot do proper targeting, such use of force will be 'arbitrary' and thus violate the right to life. Moreover, the right to life requires accountability, but it is not clear who is to be held responsible when robots get it wrong. The second concern is 'Should they do it?': should robots have the power of life and death over humans? This may violate the right to life as well as the right to dignity. The question of whether there is 'meaningful human control' over the release of force is emerging as a helpful tool to distinguish between acceptable and unacceptable autonomous targeting, and I argue that it also makes sense from a human rights perspective. The question that will haunt the debate in the future is: what if technology develops to the point where it is clear that fully autonomous weapons surpass human targeting, and can potentially save many lives? Would human rights considerations in such a case not militate for the use of autonomous weapons, instead of against them? I argue that the rights to life and dignity demand that even under such circumstances, full autonomy in force delivery should not be allowed. The article emphasises the importance placed on the concept of a 'dignified life' in the African human rights system.
Article
Full-text available
In this briefing report, we introduce a new concept (war algorithms) that elevates algorithmically-derived choices and decisions to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding autonomous weapon systems. We define war algorithm as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a decision or choice of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. We focus largely on international law because it is the only normative regime that purports, in key respects but with important caveats, to be both universal and uniform. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation, and how those algorithms might already fit within the existing regulatory system established by international law.