United States, Use of Autonomous Weapons
The use of autonomous weapons in armed conflicts raises many questions under international humanitarian law (IHL). Over the years, States have begun developing policies on this matter to ensure compliance with IHL. This case deals with the United States’ newly established directive on the use of autonomous weapons. The directive sets out the definitions applied to the different categories of lethal autonomous weapon systems (LAWS) and specifies how the use of these weapons will comply with IHL.
Acknowledgments
Case prepared by Cyrielle Danzin, Master’s student at Université Paris-Panthéon-Assas, under the supervision of Professor Julia Grignon (Head of the Assas International Law Clinic and Visiting Professor at Laval University).
A. UNITED STATES OF AMERICA DEPARTMENT OF DEFENSE DIRECTIVE ON AUTONOMY IN WEAPON SYSTEMS
1.1. APPLICABILITY.
a. This directive applies to:
(1) OSD, the Military Departments, the Office of the Chairman of the Joint Chiefs of Staff (CJCS) and the Joint Staff, the Combatant Commands, the Office of Inspector General of the Department of Defense, the Defense Agencies, the DoD Field Activities, and all other organizational entities within the DoD.
(2) The design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that are capable of automated target selection.
(3) The application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.
b. This directive does not apply to:
(1) Autonomous or semi-autonomous cyberspace capabilities.
(2) Unarmed platforms, whether remotely operated or operated by onboard personnel, and whether autonomous or semi-autonomous.
(3) Unguided munitions.
(4) Munitions manually guided by the operator (e.g., laser- or wire-guided munitions).
(5) Mines.
(6) Unexploded explosive ordnance.
(7) Autonomous or semi-autonomous systems that are not weapon systems.
1.2. POLICY.
a. Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.
(1) Systems will go through rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E) in accordance with Section 3. Training, doctrine, and tactics, techniques, and procedures (TTPs) applicable to the system in question will be established. These measures will provide sufficient confidence that autonomous and semi-autonomous weapon systems:
(a) Function as anticipated in realistic operational environments against adaptive adversaries taking realistic and practicable countermeasures.
(b) Complete engagements within a timeframe and geographic area, as well as other relevant environmental and operational constraints, consistent with commander and operator intentions. If unable to do so, the systems will terminate the engagement or obtain additional operator input before continuing the engagement.
(c) Are sufficiently robust to minimize the probability and consequences of failures.
[…]
b. Persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems will do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE). The use of AI capabilities in autonomous or semi-autonomous weapon systems will be consistent with the DoD AI Ethical Principles, as provided in Paragraph 1.2.f.
[…]
f. The design, development, deployment, and use of AI capabilities in autonomous and semi-autonomous weapon systems will be consistent with the DoD AI Ethical Principles and the DoD Responsible Artificial Intelligence Strategy and Implementation Pathway. The DoD AI Ethical Principles, as adopted in the February 21, 2020 Secretary of Defense Memorandum, are:
(1) Responsible.
DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
(2) Equitable.
The DoD will take deliberate steps to minimize unintended bias in AI capabilities.
(3) Traceable.
The DoD’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.
(4) Reliable.
The DoD’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
(5) Governable.
The DoD will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
SECTION 2: RESPONSIBILITIES
[…]
2.3. USD(R&E).
The USD(R&E):
[…]
b. Oversees establishment of science and technology and research and development priorities for autonomy in weapon systems, including the development of new methods of V&V and T&E and the establishment of minimum thresholds of risk and reliability for the performance of autonomy in weapon systems.
[…]
2.6. GENERAL COUNSEL OF THE DEPARTMENT OF DEFENSE (GC DOD).
In accordance with DoDD 5000.01, DoDD 2311.01, DoDD 5145.01, and, where applicable, DoDD 3000.03E, the GC DoD provides guidance on, and coordination of, significant legal issues in autonomy in weapon systems. The GC DoD also coordinates on the review of the legality of weapon systems submitted in accordance with Paragraph 1.2.c.
[…]
2.12. COMBATANT COMMANDERS.
The Combatant Commanders:
[…]
b. Employ autonomous and semi-autonomous weapon systems with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable ROE, in accordance with Paragraph 1.2.b, and employ AI capabilities in autonomous and semi-autonomous weapon systems consistent with the DoD AI Ethical Principles and the DoD Responsible Artificial Intelligence Strategy and Implementation Pathway, in accordance with Paragraph 1.2.f.
[…]
SECTION 4: GUIDELINES FOR REVIEW OF CERTAIN AUTONOMOUS WEAPON SYSTEMS
4.1. Autonomous weapon systems intended to be used in a manner that falls outside the policies in Paragraphs 1.2.d.(1) through 1.2.d.(4) must be approved by the USD(P), USD(R&E), and VCJCS before formal development and by the USD(P), USD(A&S), and VCJCS before fielding. If the weapon system in question is to be developed and then fielded by DoD, it will need to undergo both reviews and receive approvals. A review is not needed if the weapon system is covered by a previous approval for formal development or fielding. Requests for senior review and approval should be submitted to USD(P), attention to the Director of the Emerging Capabilities Policy Office.
[…]
c. Before a decision to enter formal development, the USD(P), USD(R&E), and VCJCS will verify that:
(1) The system design incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment over the use of force in the envisioned planning and employment processes for the weapon.
(2) The system is designed to complete engagements within a timeframe and geographic area, as well as other applicable environmental and operational parameters, consistent with commander and operator intentions. If unable to do so, the system will terminate engagements or obtain additional operator input before continuing the engagement.
(3) The combination of the system’s design and concept of employment (e.g., its target selection and engagement logic and other relevant processes or measures) accounts for risks to non-targets, consistent with commander and operator intent.
[…]
d. Before fielding, the USD(P), USD(A&S), and VCJCS will verify that:
(1) System capabilities, human-machine interfaces, doctrine, TTPs, and training have been demonstrated to allow commanders and operators to exercise appropriate levels of human judgment over the use of force and to employ systems with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and ROE that are applicable or reasonably expected to be applicable.
[…]
B. UNITED STATES OF AMERICA CONGRESSIONAL RESEARCH SERVICE DEFENSE PRIMER: U.S. POLICY ON LETHAL AUTONOMOUS WEAPON SYSTEMS
[1] Lethal autonomous weapon systems (LAWS) are a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system. Although these systems are not yet in widespread development, it is believed they would enable military operations in communications-degraded or -denied environments in which traditional systems may not be able to operate.
[…]
[3] Developments in both autonomous weapons technology and international discussions of LAWS could hold implications for congressional oversight, defense investments, military concepts of operations, treaty-making, and the future of war.
U.S. Policy
[4] Then-Deputy Secretary of Defense Ashton Carter issued DOD’s policy on autonomy in weapons systems, Department of Defense Directive (DODD) 3000.09 (the directive), in November 2012. DOD has since updated the directive—most recently in January 2023.
[5] Definitions. There is no agreed definition of lethal autonomous weapon systems that is used in international fora. However, DODD 3000.09 provides definitions for different categories of autonomous weapon systems for the purposes of the U.S. military. These definitions are principally grounded in the role of the human operator with regard to target selection and engagement decisions, rather than in the technological sophistication of the weapon system.
[6] DODD 3000.09 defines LAWS as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator.” This concept of autonomy is also known as “human out of the loop” or “full autonomy.” The directive contrasts LAWS with human-supervised, or “human on the loop,” autonomous weapon systems, in which operators have the ability to monitor and halt a weapon’s target engagement. Another category is semi-autonomous, or “human in the loop,” weapon systems that “only engage individual targets or specific target groups that have been selected by a human operator.” Semi-autonomous weapons include so-called “fire and forget” weapons, such as certain types of guided missiles, that deliver effects to human-identified targets using autonomous functions.
[7] The directive does not apply to autonomous or semi-autonomous cyberspace capabilities; unarmed platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; unexploded explosive ordnance; or autonomous or semi-autonomous systems that are not weapon systems. Such systems are therefore not subject to its guidelines.
[8] Role of human operator. DODD 3000.09 requires that all systems, including LAWS, be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” As noted in an August 2018 U.S. government white paper, “‘appropriate’ is a flexible term that reflects the fact that there is not a fixed, one-size-fits-all level of human judgment that should be applied to every context. What is ‘appropriate’ can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system.”
[9] Furthermore, “human judgment over the use of force” does not require manual human “control” of the weapon system, as is often reported, but rather broader human involvement in decisions about how, when, where, and why the weapon will be employed. This includes a human determination that the weapon will be used “with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.”
[…]
[16] In addition, approximately 30 countries and 165 nongovernmental organizations have called for a preemptive ban on LAWS due to ethical concerns, including concerns about operational risk, accountability for use, and compliance with the proportionality and distinction requirements of the law of war. The U.S. government does not currently support a ban on LAWS and has addressed ethical concerns about the systems in a March 2018 white paper, “Humanitarian Benefits of Emerging Technologies in the Area of Lethal Autonomous Weapons.” The paper notes that “automated target identification, tracking, selection, and engagement functions can allow weapons to strike military objectives more accurately and with less risk of collateral damage” or civilian casualties.
[…]
DISCUSSION
1. (Document A, paras 1.2.b, 2.12.b and 4.1.d.(1); Document B, paras [9] and [16]) What are the three cardinal rules which govern the conduct of hostilities under international humanitarian law? Do these rules apply equally in International Armed Conflicts (IACs) and Non-International Armed Conflicts (NIACs)? (P I, Arts 48, 52(1)-(2), 57 and 58; P II, Art. 13; CIHL, Rules 7, 8, 9, 10, 14 and 15)
2. (Document A, paras 1.1.a and b; Document B, para [1]) What are Lethal Autonomous Weapon Systems (LAWS)? According to the United States’ policy, what does this category of weapon encompass?
3. Considering the rules of distinction, proportionality, and precautions, what are the challenges associated with the use of LAWS in armed conflicts? (P I, Arts 48, 52(1)-(2), 57 and 58; P II, Art. 13; CIHL, Rules 7, 8, 9, 10, 14 and 15) What are the benefits and drawbacks of using such weapons?
4. (Document A, paras 1.2.a, b and f, 2.6, 2.12.b; Document B, paras [8]-[9] and [16]) Which types of weapons are prohibited under IHL? The documents stress the inherent unpredictability of LAWS; does this challenge the possibility of complying with IHL? How? (P I, Arts 35(2) and 51(4); CIHL, Rules 70 and 71) What guarantees does the United States’ policy provide to address these concerns?
5. Does IHL regulate the choice of weapons to be used by parties to an armed conflict? (P I, Arts 36 and 57(2)(a)(ii); CIHL, Rule 17)
6. (Document A, para 4.1.d.(1); Document B, para [8]) Does IHL require that decisions be made through a chain of command? What challenges does the use of LAWS pose in terms of determining responsibilities in cases of violations of IHL? (P I, Art. 85; CIHL, Rule 158; Rome Statute, Art. 28)