UK Armed Forces Personnel and the Legal Framework For Future Operations

Further written evidence from Air Commodore (Retd) Bill Boothby, Doktor Iuris, former Deputy Director of Legal Services (RAF)

Author: Weapons and the Law of Armed Conflict (2009); The Law of Targeting (2012); Conflict Law: The Influence of New Weapons Technology, Human Rights and Emerging Actors (forthcoming 2014).

I have been asked to provide some further written evidence to the House of Commons Defence Select Committee, addressing the core rules of the law of armed conflict relating, respectively, to targeting and to weaponry; explaining how the law of armed conflict caters for remotely piloted aircraft; explaining what legal difference it makes when autonomous and automated weapons are concerned; and addressing certain core legal issues concerning the use of cyber capabilities in armed conflicts. I will take each topic in turn and will keep the discussion as succinct as possible. It must be understood that important matters of detail will inevitably be sacrificed in order to achieve appropriate brevity.

Central to the law of targeting is the notion that not all methods of attacking the enemy are lawful. There are limits to what is permissible. [1] Certain core principles and rules lie at the heart of the law of targeting. First among these is the principle that requires the parties to an armed conflict to maintain a distinction at all times between persons and objects that may lawfully be made the object of attack and those that must be respected and protected [2]. To put the matter in the briefest of terms, in armed conflicts attacks may lawfully be directed at members of the armed forces or, in the case of armed conflicts internal to a state, at fighters. Attacks may also be directed against objects that contribute effectively to the enemy’s military action and whose attack therefore offers the attacker a military advantage. Civilians, and objects that do not come within the description in the previous sentence and which are therefore known as civilian objects, are generally protected from attack unless, in the case of civilians, they take a direct part in the hostilities. In armed conflicts between states, there are rules addressing cases of doubt as to such status, the detail of which lies beyond the scope of the current submission. [3]

Some persons and objects are granted specific, sometimes enhanced, protection, for example medical and religious personnel, medical facilities, medical transports, cultural property, certain installations containing dangerous forces and journalists. Targeting law recognises that civilians and civilian objects may be injured, killed or damaged as a result of lawfully directed attacks, and that fact will not per se render the lawfully directed attack unlawful. However, attacks that are of a nature to strike civilians and civilian objects, and military objectives and combatants, without distinction, and that are thus indiscriminate, are prohibited [4]. One example of such an indiscriminate attack is an attack which is expected to cause injury or damage to civilians or civilian objects that is excessive in relation to the anticipated military advantage [5]. The treaty law addressing armed conflicts internal to a state is less well developed than that relating to international armed conflicts, but the legally binding custom of states recognises many of these principles and rules as applicable in such intra-state conflicts, and such customary rules bind all states irrespective of their participation in particular treaties. [6] Having set these rules, targeting law then requires attackers to take all practically possible precautions to fulfil them, while also requiring that the parties to the conflict take precautions against the effects of attacks. [7]

Weapons law, by contrast, addresses the weapons that it is lawful for a party to an armed conflict to possess and use, providing that some are prohibited while the use of others is restricted. States are legally obliged to review new weapons that they study, develop or acquire before fielding them, in order to determine whether their use would be prohibited in some or all circumstances [8]. That review must apply the law by which the relevant state is bound. All states are prohibited from using weapons that are of a nature to cause superfluous injury or unnecessary suffering [9], i.e. weapons that, when used for their designed or intended purpose, inevitably cause injury or suffering which exceeds that required to achieve the generic military advantage the weapon is designed to afford. Equally, all states are prohibited from using weapons that are indiscriminate by nature, for example because the weapon does not permit its damaging effects to be controlled or directed [10]. States that are party to Additional Protocol I, such as the UK, must not use weapons that are intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment. In addition, there are ad hoc rules prohibiting or restricting the use of particular weapon types. So, for example, poisons and poisoned weapons [11], explosive anti-personnel bullets [12], bullets that expand or flatten easily in the human body [13], asphyxiating gases [14], chemical [15] and biological weapons [16], anti-personnel landmines [17] and cluster munitions [18] are prohibited, and the use of incendiary weapons is restricted [19].

Remotely piloted aircraft, or drones as they are colloquially called, are subject to the same body of targeting and weapons law as other weapon systems, such as manned attack aircraft. The lawfulness of drone attacks during armed conflicts is determined by applying the same rules, very briefly summarised above. Conflicts that amount to neither an international nor an intra-state armed conflict, for example because, in the latter case, the required degree of sustained violence is not maintained, are regulated by domestic and human rights law. This means that, for example for a state that is party to the European Convention on Human Rights, lethal force may only be used in the very limited circumstances that are consistent with respect for the right to life as interpreted pursuant to that treaty.

So, when drone strikes are undertaken in places where an armed conflict is under way, and where lawful targets are engaged in a discriminating way with all required precautions being complied with, the attack will prima facie be lawful. Where no armed conflict is taking place, domestic and human rights law must be complied with. The use of lethal force against an individual coming within the jurisdiction of a state, such as the UK, that is party to the European Convention will only be permissible if the procedural and other safeguards associated with the right to life are complied with, including that the use of force is absolutely necessary, that it is strictly proportionate, that the operation is carefully planned and that the circumstances are appropriately investigated. It is the responsibility of those who plan, order and undertake drone strike operations to ensure that the relevant legal rules are complied with.

Autonomy and automation of attack decisions are the subject of significant current research. [20] It is evident, however, that there is currently no internationally agreed interpretation of, for example, the precise meaning of autonomy. My current view is that autonomy can most sensibly be seen as something of an absolute, in which it is the machine that, by understanding higher-level intent and by perceiving its environment, itself decides on appropriate action without human oversight or control. Its individual actions may not be predictable. [21] This interpretation of autonomy is not universally shared. I consider that reaching an internationally agreed interpretation of the terminology is a necessary precursor to a sensible international discussion of the acceptability of such technologies. [22] For the time being, however, it would seem sensible to regard autonomy as an absolute state in which the weapon system learns its own lessons, modifies its behaviour accordingly and is not constrained in its behaviour by human involvement. All lesser forms of mechanical decision-making would then be classed as automation, so there will be ‘degrees of automation’ but not ‘degrees of autonomy’.

A weapon review must determine whether the new weapon system would breach existing law in some or all circumstances. Existing law is therefore the criterion against which new autonomous or automated systems must be judged. The weapons law rules mentioned earlier in this paper must be applied. In addition, and because a person is not involved in the relevant decisions, it will be necessary to determine whether the weapon is capable of undertaking the precautions that the law requires [23]. While object recognition technology may enable such systems to verify that an object to be attacked is a lawful target, it is likely to be considerably more difficult to verify that a person is a lawful target as opposed, for example, to a civilian, or to a person, whether combatant or directly participating civilian, who has been rendered hors de combat and who is therefore protected from attack. The precautions rules also require attackers to do everything practically possible to verify that the planned attack will not breach the discrimination rule. Accordingly, attackers must, for example, satisfy themselves that the expected civilian injuries and damage will not be excessive in relation to the anticipated military advantage. While there is, in my opinion, no legal requirement that human beings take particular decisions, evaluative decisions of this sort are, for the foreseeable future, likely to prove challenging for automated or autonomous weapon systems. It should be understood that this is only one example of the kinds of legal issue that such technologies may be expected to raise. The important point is that it is the technology that must be made to comply with existing legal obligations; absent new ad hoc treaty arrangements, it is not for the law to be re-interpreted to accommodate the peculiarities of new, emerging technology.

Following cyber operations involving Estonia in 2007 and Georgia in 2008, the NATO Cooperative Cyber Defence Centre of Excellence in Tallinn, Estonia, initiated a process that led to the preparation of a Manual addressing the law applicable to cyber warfare. Experts from NATO states worked for between three and four years to produce the Manual, which was published earlier this year. The Manual does not have the status of law as such. It is, rather, the best assessment of the Experts as to what the law is. The lawfulness of the resort to the use of cyber force and the lawfulness of particular kinds of cyber operation undertaken during an armed conflict are both addressed in the Manual. While it is appreciated that certain states, notably Russia and China, do not necessarily agree with all of the Manual’s conclusions, the document represents, it is suggested, a useful first step in seeking to clarify the law that applies to military operations in this new, man-made environment of cyberspace.

Briefly, the Tallinn Experts concluded that the notion of a cyber attack makes legal sense. ‘Attack’ is defined in law in terms of a use of violence against an adversary, whether in offence or defence. [24] The Tallinn Experts concluded that the ‘violence’ requirement is met by acts that have violent consequences, that is, acts that occasion death, injury, damage or destruction. [25] If, as I believe, this conclusion is correct, it follows that the bulk of targeting law, very briefly referred to earlier, applies to cyber operations having these consequences, and the Manual explains how the relevant rules can be so applied.

Similarly, for the Tallinn Experts, cyber weapons are cyber means of warfare that are, by design, use or intended use, capable of causing injury, death, damage or destruction. By the same token, therefore, if this notion of cyber weapons is accepted, it follows that the law of weaponry is capable of being applied to cyber capabilities designed, intended or used to have those consequences, and the Manual explains how the relevant rules can sensibly be so applied.

You asked me about the inter-relationship between the law of armed conflict and human rights law. I explained that international courts, notably the International Court of Justice (ICJ) in its Nuclear Weapons Advisory Opinion and in the Palestinian Wall case, and the European Court of Human Rights have addressed the issue. I pointed out the ICJ’s determination that human rights law applies throughout armed conflict, but that whether the right to life is breached must sometimes be determined by reference to law of armed conflict norms, sometimes by reference to human rights law norms and sometimes by reference to both sets of norms. In my view, the difficulty lies in enabling commanders and personnel to know in advance which norms will apply to which activity. I drew attention to the European Court cases of Al-Jedda and Al-Skeini and commented that there are aspects of these judgments that seem hard to reconcile with the practical needs of military operations. I expressed the view that the UK and other states might wish to consider issuing appropriately worded statements asserting national positions when national interpretations of the law diverge from the judgments of such courts as the European Court of Human Rights.

This is of necessity a very abbreviated discussion of complex issues, but it is hoped that it is nevertheless useful.

December 2013


[1] Additional Protocol I, 1977 (API) article 35(1).

[2] API, article 48.

[3] API, articles 50(1) and 52(3).

[4] API, article 51(4).

[5] API, article 51(5)(b).

[6] See ICRC Customary Humanitarian Law Study, 2005.

[7] API, articles 57 and 58 respectively.

[8] API, article 36.

[9] API, article 35(2).

[10] API, article 51(4)(b) and (c).

[11] Hague Regulations 1907, article 23(a).

[12] See UK Manual on the Law of Armed Conflict, para. 6.10.

[13] Hague Declaration 3, 1899; the prohibition applies in armed conflicts between states and, except in limited circumstances, the same rule applies as customary law to intra-state armed conflicts.

[14] Geneva Gas Protocol, 1925.

[15] Chemical Weapons Convention, 1993.

[16] Biological Weapons Convention, 1972.

[17] Ottawa Convention, 1997.

[18] Cluster Munitions Convention, 2008.

[19] Protocol III to Conventional Weapons Convention, article 2.

[20] See for example R C Arkin, Governing Lethal Behaviour in Autonomous Robots (2009).

[21] For a full definition see UK MOD Joint Doctrine Note 2/11, The UK Approach to Unmanned Aircraft Systems (March 2011), paragraph 205.

[22] In this regard, the weapons that Human Rights Watch refers to in its call for a global ban would seem to me to include both autonomous and automated weapons as referred to in the present paper.

[23] The relevant precautions are set out in API, article 57.

[24] API, article 49(1) and Tallinn Manual, rule 30.

[25] Tallinn Manual, paragraph 3 of commentary accompanying rule 30.
