Should AWS be designed to comply with ROE?
There is international consensus that Artificial Intelligence (AI)-enabled and autonomous weapon systems (AWS), including AI decision support systems (AI DSS), must be designed to be capable of use in compliance with International Humanitarian Law (IHL) and international law more broadly. This is clear from:
(a) the Guiding Principles concluded by the 2019 United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems, which commence:
‘The potential use of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems must be conducted in accordance with applicable international law, in particular IHL and its requirements and principles, including inter alia distinction, proportionality and precautions in attack;’
(b) the Blueprint for Action agreed by 60 States at the 2024 Responsible AI in the Military Domain (REAIM) Summit in South Korea, which states:
‘AI capabilities in the military domain must be applied in accordance with applicable national and international law.’
(c) and, more recently, UN General Assembly Resolution 79/62, adopted with the support of 166 States, affirming that:
‘… international law, including the Charter of the United Nations, international humanitarian law, international human rights law and international criminal law, applies in relation to autonomous weapons systems,’
In practice, an AWS must be capable of complying not only with international law but also with national policy and operational factors expressed in the form of Rules of Engagement (ROE).
What are ROE?
The 2022 Newport Rules of Engagement Handbook describes ROE as ‘rules, issued by competent authorities, to military forces and associated groups and forces, and to other organized armed groups, that regulate the use of force and other activities that may be considered to be provocative’.
Irrespective of its specific role, if an AWS enables or performs a function associated with the use of force, it will need to operate within a national or multinational ROE designed for the specific military operation in which it is deployed.
ROE typically reflect a number of considerations, including international and national legal requirements, national policy, diplomacy and operational requirements. In many cases, they place restrictions on the use of force beyond what IHL requires. This often reflects the nature of the military operation, which may range from humanitarian assistance and disaster relief (HADR), to peacekeeping, to armed conflict.
Should an AWS be designed to operate in compliance with ROE?
To enable it to operate in a military operation, an AWS should be designed to allow the military to alter its autonomous functionality to reflect the ROE for the operation in which it is deployed. For example, an AWS drone may be used in a disarmed mode to assist in the search for survivors in a HADR mission. The ability to adjust AWS functionality may also be required when the nature of an operation changes; for example, an armed conflict may shift to a peacekeeping mission following an armistice.
In this way, the AWS should be capable of accepting data input reflecting the specific ROE rules, including self-defence and the defence of others, the offensive use of force, and the application of criteria such as hostile act and hostile intent, in addition to the IHL principles of distinction, proportionality and precautions in attack.
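The kind of machine-readable ROE input described above can be sketched, purely for illustration, as a configuration object. The profile fields, class name and example values below are hypothetical assumptions for the sake of the sketch, not drawn from any actual ROE:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoeProfile:
    """Illustrative, machine-readable subset of an operation-specific ROE."""
    operation: str
    offensive_force_authorised: bool           # use of force in attack
    defence_of_others_authorised: bool         # self-defence of others
    hostile_act_criteria: tuple[str, ...]      # observable indicators of a hostile act
    hostile_intent_criteria: tuple[str, ...]   # indicators of demonstrated hostile intent

# A hypothetical peacekeeping profile restricting force beyond IHL minimums.
peacekeeping = RoeProfile(
    operation="PEACEKEEPING-EXAMPLE",
    offensive_force_authorised=False,
    defence_of_others_authorised=True,
    hostile_act_criteria=("direct fire on protected persons",),
    hostile_intent_criteria=("weapon aimed at friendly forces",),
)
```

A profile of this kind could be loaded into the AWS for one deployment and replaced when the operation, and hence the governing ROE, changes.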
In addition, an ROE often sets out requirements relating to the escalation of force, including the use of warning shots, positive identification and other escalation-of-force measures. These measures relate to the use of force in both offence and self-defence, which an AWS may need to be capable of performing in compliance with the ROE.
Warning shots are fired in the vicinity of a person, vessel or aircraft as a signal to immediately cease an activity or to comply with other instructions, but are not intended to cause damage or injury. The use of warning shots varies among States and across domains (e.g. on land versus in the maritime environment). Some States prohibit warning shots to avoid unintended harm to civilians and damage to civilian property. An AWS should therefore be capable of either performing warning shots or being prevented from doing so, depending on the ROE applied by the military force operating it.
Positive identification (PID) is a reasonable certainty that an object of attack is a legitimate military target in accordance with the law of armed conflict (e.g. Article 57 of Additional Protocol I). In many cases, an ROE will require PID to be established by specific means, such as intelligence sources (human and electronic information), and in specific combinations, e.g. a minimum number of independent sources. An AWS must be capable of being programmed to comply with ROE PID requirements, or must require direct human control to meet them.
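As a purely illustrative sketch, a multi-source corroboration rule of this kind might be encoded as a simple check. The function name, source labels and two-source minimum below are hypothetical assumptions, not an actual PID standard:

```python
def pid_established(sources: set[str], minimum_independent: int = 2) -> bool:
    """Return True only when PID is corroborated by the ROE-mandated
    minimum number of independent intelligence sources (illustrative rule)."""
    return len(sources) >= minimum_independent

# Two independent sources (e.g. human intelligence plus signals intelligence)
# satisfy a hypothetical two-source ROE requirement; a single source does not.
two_sources = pid_established({"HUMINT", "SIGINT"})
one_source = pid_established({"SIGINT"})
```

In practice the corroboration logic would be far richer, but the design point is the same: the ROE's PID threshold must be expressible as a parameter the military can set per operation.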
Escalation-of-force measures are often required by an ROE in the use of defensive force. This is particularly the case in a non-international armed conflict or peacekeeping mission, where threatening persons or objects are not easily identified. Escalation-of-force measures often include visual or verbal warnings, physical barriers or restraints, and warning shots (where authorised) before lethal force is applied. If an AWS is designed to perform a sentry function, an ROE may require it to perform escalation-of-force measures, or to operate within a human-machine team capable of doing so.
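The graduated sequence just described can be sketched as an ordered list of steps, with warning shots included only where the ROE authorises them. This is an illustrative sketch of the sequencing logic, not an actual control implementation; the step names simply echo the measures listed above:

```python
def escalation_sequence(warning_shots_authorised: bool) -> list[str]:
    """Ordered escalation-of-force steps preceding lethal force (illustrative).
    Warning shots appear only where the governing ROE authorises them."""
    steps = ["visual or verbal warning", "physical barrier or restraint"]
    if warning_shots_authorised:
        steps.append("warning shot")
    steps.append("lethal force")
    return steps
```

The single boolean parameter mirrors the point made earlier about warning shots: the same AWS must be configurable to perform or omit that step depending on the State and operation.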
Other operational documents
An ROE often forms part of an Operation Order (OPORD), which will include other operational documents specific to the mission, such as a targeting directive and a detention policy. These documents are consistent with the ROE and provide further fidelity on how, and against whom, the military may use the force authorised by the ROE.
Where an ROE approves the use of force in offence, i.e. an attack, a targeting directive may include measures to avoid the unlawful use of force, including no-strike lists and restricted-strike lists. Such lists reflect objects protected under IHL or objects designated as protected by the ROE for policy or operational reasons. A targeting directive may also require that certain assessed risks of civilian casualties, although assessed as proportionate to the military advantage, be approved at certain levels of authority. For example, a formation commander may be required to approve any strike anticipated to cause more than two civilian casualties. These targeting control measures must be capable of input into an AWS operating system.
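These targeting controls can be sketched as a simple decision rule. The two-casualty threshold mirrors the hypothetical example above; the function name, target labels and return strings are illustrative assumptions only:

```python
def strike_decision(target: str,
                    anticipated_civilian_casualties: int,
                    no_strike: set[str],
                    approval_threshold: int = 2) -> str:
    """Apply illustrative targeting-directive controls: a no-strike list
    and a civilian-casualty approval threshold (hypothetical values)."""
    if target in no_strike:
        return "prohibited: no-strike list"
    if anticipated_civilian_casualties > approval_threshold:
        return "refer to formation commander for approval"
    return "within delegated authority"
```

The point of the sketch is that both the list contents and the approval threshold are operation-specific inputs, so an AWS operating system must accept them as data rather than hard-code them.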
If an ROE authorises the deprivation of liberty of civilians or the detention of prisoners of war (PW), an OPORD will generally contain an annex describing the requirements for these activities. This may include the level of force authorised to effect a detention and to prevent the escape of a PW. An AWS may be designed to perform a security overwatch role, which will require it to operate within the ROE during detention operations or while maintaining overwatch of detainees or PWs.
Concluding remarks
The use of AI-enabled and autonomous weapon systems, including AI DSS, must comply with IHL and the constraints imposed by national or multinational ROE. ROE are usually specific to a military operation, and so an AWS should be capable of accepting input of ROE rules and restrictions to ensure it operates in accordance with the law, national policy and operational requirements. This will require careful consideration in the design and development of such systems to ensure their autonomous functions can be programmed for a specific operational deployment.
The ability to input ROE into an AWS operating system will be considered by a national legal review process. However, as the Lawful by Design initiative encourages, the legal review should not be the first time IHL and other operational requirements, including ROE compliance, are considered; they should be addressed from the earliest stages of research and development.