An autonomous weapon system (AWS), known colloquially as a ‘killer robot’, is a robotic weapon that, once activated, can decide for itself when, and against whom, to use lethal force. This dissertation addresses the issues posed by AWS, focusing on systems that lack ‘meaningful human control’, in both peacetime and armed conflict. Unless otherwise stated, all AWS discussed in this dissertation are those without meaningful human control.
AWS offer numerous benefits. For example, the technology has the potential to save the lives of soldiers charged with menial or dangerous tasks. Moreover, an AWS does not tire, become angry or frustrated, and so on, so its use may also save civilian lives. Additionally, AWS leave a digital footprint that can be used to reconstruct events and bring perpetrators to justice, and an AWS cannot wilfully commit a crime itself.
Nonetheless, AWS may make going to war far too easy, and they pose severe risks to human rights, including the rights to life and dignity and a victim’s right to a remedy. The use of force is a key concern: do AWS comply with the international rules governing the use of force? Is a machine with the power of life and death over human beings compatible with the right to dignity? AWS that lack meaningful human control may, in particular, create an accountability gap, undermining victims’ ability to seek the protection of international law.
This dissertation investigates the legal duty of states under Article 36 of Additional Protocol I to the Geneva Conventions to review new weapons, both to identify a suitable legal response to AWS and to assess the extent to which AWS align with recognised standards. Article 36 requires that new weapons be assessed against several standards, including the human rights regime and the prohibition on causing unnecessary suffering. This dissertation first argues that AWS that are fully autonomous, or that lack meaningful human control, are not strictly weapons at all; these so-called ‘robot combatants’ should be treated with caution by the international community. Once the elements of Article 36 have been examined in detail, it is proposed that it is not appropriate to accept AWS that lack meaningful human control.
AWS are also examined against the rules of International Humanitarian Law (IHL), including the rules of distinction, proportionality and precaution. Because these rules were written to apply to humans, and machines by their very nature cannot exercise human judgement, machines will typically fail to satisfy them. Moreover, the limits of present-day technology and the deliberately open-textured definitions of IHL terms mean that those definitions cannot be translated into computer code.
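To illustrate why this translation fails, consider a minimal sketch (in Python, with hypothetical names and values introduced purely for illustration) of a naive attempt to encode the proportionality rule. The rule forbids attacks expected to cause civilian harm that is ‘excessive’ relative to the anticipated military advantage, yet any executable version must invent both a shared numeric scale and a fixed threshold, neither of which the law supplies:

```python
def proportionality_check(expected_civilian_harm: float,
                          anticipated_military_advantage: float) -> bool:
    """Naive attempt to encode the proportionality rule (AP I, Art. 51(5)(b)).

    The comparison presupposes that 'harm' and 'advantage' are commensurable
    quantities on a shared numeric scale, which no sensor or database supplies.
    """
    # Hypothetical constant: IHL leaves 'excessive' undefined, so any value
    # placed here encodes a moral and legal judgement, not a fact.
    EXCESSIVENESS_THRESHOLD = 1.0
    return expected_civilian_harm <= (EXCESSIVENESS_THRESHOLD
                                      * anticipated_military_advantage)
```

The constant on which the comparison turns is the crux: it hard-codes a value judgement that IHL deliberately reserves for human commanders acting in context.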
The responsibility gap created by AWS also threatens victims’ right to a remedy, because it is unclear who should be held accountable for an AWS’s actions. The forms of accountability recognised in international law, including individual, command, corporate and state responsibility, are reviewed in light of the difficulties AWS pose. Current proposals for resolving those difficulties, including the concept of split responsibility and the argument that command responsibility can be extended to AWS, are examined and found to be impracticable and defective.
This dissertation supports the view of scholars who argue that meaningful human control can resolve the difficulties associated with AWS. International law, however, offers no definition of the term, so jurisprudence on ‘control’ as a basis for attributing accountability, including the strict control and effective control tests, is used to construct a definition, drawing on the ideas of ‘dependence’ and ‘control’ that are central to accountability. The dissertation concludes that meaningful human control over a weapon system exists only where a human being is responsible for the system’s critical functions: the selection of a target and the decision to use lethal force. In other words, human input is required to complete the most important functions of a weapon system; absent that input, the system should be incapable of carrying them out.
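This closing criterion can also be expressed in architectural terms. The following sketch (again in Python, with hypothetical class and field names chosen purely for illustration) shows one way a weapon system could be structured so that its critical functions are literally inoperable without a human decision bound to a specific target:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class HumanAuthorisation:
    """A human decision, bound to one operator and one specific target."""
    operator_id: str  # the accountable human decision-maker
    target_id: str    # the single target this authorisation covers


class WeaponSystem:
    def engage(self, target_id: str,
               authorisation: Optional[HumanAuthorisation]) -> None:
        # The critical function is gated on human input: without a valid,
        # target-specific authorisation, the system cannot fire at all.
        if authorisation is None or authorisation.target_id != target_id:
            raise PermissionError("engagement requires meaningful human control")
        self._fire(target_id)

    def _fire(self, target_id: str) -> None:
        print(f"engaging {target_id} under human authorisation")
```

The design choice is that authorisation is tied to one target and one accountable operator; a blanket or standing authorisation would not satisfy the definition of meaningful human control defended here.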