Libya, The Use of Lethal Autonomous Weapon Systems

An autonomous weapon system may have been deployed in Libya, where it reportedly targeted fleeing fighters. The reported attack raises concerns about the lawfulness of deploying LAWS in armed conflicts.

This case may be read together with the case Autonomous Weapon Systems, available at: https://casebook.icrc.org/case-study/autonomous-weapon-systems

Acknowledgments

Case prepared by Petra Rešlová, Master's student at Charles University in Prague and exchange student at the University of Geneva, under the supervision of Professor Marco Sassòli (University of Geneva) and Professor Julia Grignon (Laval University).

N.B. As per the disclaimer, neither the ICRC nor the authors can be identified with the opinions expressed in the Cases and Documents. Some cases even come to solutions that clearly violate IHL. They are nevertheless worthy of discussion, if only to raise a challenge to display more humanity in armed conflicts. Similarly, in some of the texts used in the case studies, the facts may not always be proven; nevertheless, they have been selected because they highlight interesting IHL issues and are thus published for didactic purposes.

A. FINAL REPORT OF THE UN PANEL OF EXPERTS ON LIBYA

[Source: UN Security Council, Final report of the Panel of Experts on Libya established pursuant to Security Council resolution 1973 (2011), S/2021/229, 8 March 2021, references partially omitted, available at: https://undocs.org/Home/Mobile?FinalSymbol=S%2F2021%2F229&Language=E&DeviceType=Desktop&LangRequested=False]

[…]

Summary

The military conflict triggered by the attack on Tripoli by armed groups affiliated with Khalifa Haftar on 4 April 2019 dominated the first half of 2020. Throughout and beyond the armed confrontation, Haftar Affiliated Forces (HAF) and the Government of National Accord continued to receive increasing support from State and non-State actors. […] The Government of National Accord regained control of the western coast in April 2020, pushed HAF away from the environs of Tripoli by early in June 2020 and shifted the battle lines to the central region of Sirte and Jufrah by July 2020. Throughout August and into October 2020, ceasefire negotiations between both parties’ military commanders were held under the auspices of the United Nations Support Mission in Libya (UNSMIL). […] On 23 October 2020, UNSMIL announced the terms of a ceasefire agreement that the Libyan parties had signed, although their commitment to its implementation remains questionable. […]

[…]

63. On 27 March 2020, the [Libyan] Prime Minister, Faiez Serraj, announced the commencement of Operation PEACE STORM, 46 which moved GNA-AF to the offensive along the coastal littoral. […] The GNA-AF breakout of Tripoli was supported with Firtina T155 155mm self-propelled guns […] and T-122 Sakarya multi-launch rocket systems […] firing extended range precision munitions against the mid-twentieth century main battle tanks and heavy artillery used by HAF. Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 […] and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability. The unmanned combat aerial vehicles and the small drone intelligence, surveillance and reconnaissance capability of HAF were neutralized by electronic jamming from the Koral electronic warfare system.

64. The concentrated firepower and situational awareness that those new battlefield technologies provided was a significant force multiplier for the ground units of GNA-AF, which slowly degraded the HAF operational capability. The latter’s units were neither trained nor motivated to defend against the effective use of this new technology and usually retreated in disarray. Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems, which were proving to be a highly effective combination in defeating the […] Pantsir S-1 surface-to-air missile systems. These suffered significant casualties, even when used in a passive electro-optical role to avoid GNA-AF jamming. With the Pantsir S-1 threat negated, HAF units had no real protection from remote air attacks.

65. The introduction by Turkey of advanced military technology into the conflict was a decisive element in the often unseen, and certainly uneven, war of attrition that resulted in the defeat of HAF in western Libya during 2020. Remote air technology, combined with an effective fusion intelligence and intelligence, surveillance and reconnaissance capability, turned the tide for GNA-AF in what had previously been a low-intensity, low-technology conflict in which casualty avoidance and force protection were a priority for both parties to the conflict. […]

[…]

B. HAVE AUTONOMOUS ROBOTS STARTED KILLING IN WAR?

[Source: James Vincent, “Have autonomous robots started killing in war?”, 3 June 2021, The Verge, references partially omitted, available at: https://www.theverge.com/2021/6/3/22462840/killer-robot-autonomous-drone-attack-libya-un-report-context]

[1] It’s the sort of thing that can almost pass for background noise these days: over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: “The Age of Autonomous Killer Robots May Already Be Here.”

[…]

WHAT’S THE ACTUAL NEWS HERE?

[2] The source of all these stories is a 548-page report from the United Nations Security Council that details the tail end of the Second Libyan Civil War, covering a period from October 2019 to January 2021. […] To save you time: it is an extremely thorough account of an extremely complex conflict, detailing various troop movements, weapon transfers, raids and skirmishes that took place among the war’s various factions, both foreign and domestic.

[3] The paragraph we’re interested in, though, describes an offensive near Tripoli in March 2020, in which forces supporting the UN-backed Government of National Accord (GNA) routed troops loyal to the Libyan National Army of Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces or HAF). Here’s the relevant passage in full:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

[4] The Kargu-2 system that’s mentioned here is a quadcopter built in Turkey: it’s essentially a consumer drone that’s used to dive-bomb targets. It can be manually operated or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

[…] What the report doesn’t say — at least not outright — is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that the quadcopters had been programmed to work offline. But whether the attacks took place without connectivity is unclear.

[…]

[5] Let’s be clear: by itself, the UN report does not say for certain whether drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts, it’s just not news.

THE PROBLEM OF DEFINING “KILLER ROBOTS”

[6] The reason why some experts took issue with these stories was because they followed the UN’s wording, which doesn’t distinguish clearly between loitering munitions and lethal autonomous weapons systems or LAWS (that’s policy jargon for killer robots).

[7] Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beachfront. They hang around a specific area, float above the masses, and wait to strike their target — usually military hardware of one sort or another (though it’s not impossible that they could be used to target individuals).

[8] The classic example is Israel’s IAI Harpy, which was developed in the 1980s to target anti-air defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is fired from the ground into a target area where it can linger for up to nine hours. It scans for telltale radar emissions from anti-air systems and drops onto any it finds. The loitering aspect is crucial as troops will often turn these radars off, given they act like homing beacons.

[…]

[9] Jack McDonald, a lecturer at the department of war studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation. “There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions,’” he tells The Verge. “This is a huge, long-running thing. And it’s because the line between something being autonomous and being automated has shifted over the decades.”

[10] So is the Harpy a lethal autonomous weapons system? A killer robot? It depends on who you ask. IAI’s own website describes it as such, calling it “an autonomous weapon for all weather,” and the Harpy certainly fits a makeshift definition of LAWS as “machines that target combatants without human oversight.” But if this is your definition, then you’ve created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human oversight.

ARTIFICIAL INTELLIGENCE MAKES IT WORSE

[11] If killer robots have been around for decades, why has there been so much discussion about them in recent years, with groups like the Campaign To Stop Killer Robots pushing for regulation of this technology in the UN? And why is this incident in Libya special?

[…]

[12] “Loitering munitions typically respond to radar emissions, [and] a kid walking down the street isn’t going to have a high-powered radar in their backpack,” Kallenborn tells The Verge. “But AI targeting systems might misclassify the kid as a soldier, because current AI systems are highly brittle — one study showed a change in a single pixel is sufficient to cause machine vision systems to draw radically different conclusions about what it sees. An open question is how often those errors occur during real-world use.”

[13] This is why the incident in Libya is interesting, says Kallenborn, as the Kargu-2 system mentioned in the UN report does seem to use AI to identify targets. According to the quadcopter’s manufacturer, STM, it uses “machine learning algorithms embedded on the platform” to “effectively respond against stationary or mobile targets (i.e. vehicle, person etc.)”. […]

[14] But should we trust a manufacturer’s demo reel or brochure? And does the UN report make it clear that machine learning systems were used in the attack?

[15] Kallenborn’s reading of the report is that it “heavily implies” that this was the case, but McDonald is more skeptical. “I think it’s sensible to say that the Kargu-2 as a platform is open to being used in an autonomous way,” he says. “But we don’t necessarily know if it was.” In a tweet, he also pointed out that this particular skirmish involved long-range missiles and howitzers, making it even harder to attribute casualties to any one system.

WHAT’S NEXT FOR LAWS AND THE LAW?

[16] What we’re left with is, perhaps unsurprisingly, the fog of war. Or more accurately: the fog of LAWS. We can’t say for certain what happened in Libya and our definitions of what is and isn’t a killer robot are so fluid that even if we knew, there would be disagreement.

[17] For Kallenborn, this is sort of the point: it underscores the difficulties we face trying to create meaningful oversight in the AI-assisted battles of the future. Of course the first use of autonomous weapons on the battlefield won’t announce itself with a press release, he says, because if the weapons work as they’re supposed to, they won’t look at all out of the ordinary. “The problem is autonomy is, at core, a matter of programming,” [sic] he says. “The Kargu-2 used autonomously will look exactly like a Kargu-2 used manually.”

[18] Elke Schwarz, […] who’s affiliated with the International Committee for Robot Arms Control, tells The Verge that discussions like this show we need to move beyond “slippery and political” debates about definitions and focus on the specific functionality of these systems. What do they do and how do they do it?

[…]

[19] Schwarz says that despite the myriad difficulties, in terms of both drafting regulation and pushing back against the enthusiasm of militaries around the world to integrate AI into weaponry, “there is critical mass building amongst nations and international organizations to push for a ban for systems that have the capacity to autonomously identify, select and attack targets.”

[20] Indeed, the UN is still conducting a review into possible regulations for LAWS, with results due to be reported later this year. […]

[…]

C. AUTONOMOUS WEAPONS: THE ICRC RECOMMENDS ADOPTING NEW RULES

[Source: ICRC, Statement of the International Committee of the Red Cross delivered at the Convention on Certain Conventional Weapons (CCW) before the Group of Governmental Experts on Lethal Autonomous Weapons Systems, 3 August 2021, available at: https://www.icrc.org/en/document/autonomous-weapons-icrc-recommends-new-rules]

 

[1] The International Committee of the Red Cross (ICRC) welcomes the resumption of work by the Group of Governmental Experts (GGE) at this critical moment in multilateral deliberations on autonomous weapon systems […].

[2] […] It is the ICRC's view that an urgent and effective international response is needed to address the serious risks posed by autonomous weapon systems, as highlighted by many states and civil society organizations over the past decade.

[3] These risks stem from the process by which autonomous weapon systems function. It is the ICRC's understanding that these weapons, after initial activation, select and apply force to targets without human intervention, in the sense that they are triggered by their environment based on a "target profile", which serves as a generalized approximation of a type of target.

[4] The user of an autonomous weapon system does not choose the specific target, nor the precise time or place that force is applied. This process risks the loss of human control over the use of force and it is the source of the humanitarian, legal, and ethical concerns.

[5] These concerns are significant when one considers that autonomy in the critical functions of selecting and applying force could be integrated into any weapon system.

[6] The central challenge with autonomous weapon systems resides in the difficulty of anticipating and limiting their effects. From a humanitarian perspective, they risk harming those affected by armed conflict, both civilians and combatants hors de combat, and they increase the risk of conflict escalation. From a legal perspective, they challenge the ability of persons who must apply the rules of international humanitarian law (IHL) during the planning, decision and execution of attacks to comply with their obligations.

[7] From an ethical perspective, this process of functioning risks effectively substituting human decisions about life and death with sensor, software and machine processes. This raises ethical concerns that are especially acute when autonomous weapon systems are used to target persons directly.

[8] Today, autonomous weapon systems are highly constrained in their use; they are mostly used against certain types of military objects, for limited periods of time, in restricted areas where civilians are not present and with close human supervision.

[9] However, current trends in the expanded development and use of autonomous weapon systems exacerbate core concerns dramatically. In particular, there is military interest in their use against a wider range of targets, over larger areas and for longer periods of time, in urban areas where civilians would be most at risk and with reduced human supervision and capacity for intervention and deactivation.

[10] Worryingly, the use of artificial intelligence and machine learning software to control the critical functions of selecting and applying force is being increasingly explored, which would exacerbate the already difficult task that users have in anticipating and limiting the effects of an autonomous weapon system.

[…] [11] The ICRC recommends that states adopt new, legally binding rules to regulate autonomous weapon systems to ensure that sufficient human control and judgement is retained in the use of force. It is the ICRC's view that this will require prohibiting certain types of autonomous weapon systems and strictly regulating all others.

[12] First, unpredictable autonomous weapon systems should be expressly ruled out, notably because of their indiscriminate effects. This would best be achieved with a prohibition on autonomous weapon systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted and explained.

[13] Secondly, the use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons directly as opposed to against objects.

[14] Thirdly, the design and use of non-prohibited autonomous weapon systems should be regulated, including through a combination of limits on the types of target, such as constraining them to objects that are military objectives by nature; limits on the duration, geographical scope and scale of use, including to enable human judgement and control in relation to a specific attack; limits on situations of use, such as constraining them to situations where civilians or civilian objects are not present; and imposing a requirement for human–machine interaction, notably to ensure effective human supervision and timely intervention and deactivation.

[15] It is the ICRC's understanding that these proposed prohibitions and restrictions are in line with current military practice in the use of autonomous weapon systems.

[…]

[16] It is encouraging, therefore, that there is an increasing convergence of views among states that certain autonomous weapon systems should be prohibited or otherwise excluded from development and use, and that others should be regulated or otherwise limited in their development and use.

[17] These proposals reflect widely held views: a recognition of the need to ensure human control and judgement in the use of force; an acknowledgement that ensuring such control and judgement requires effective limits on the design and use of autonomous weapon systems; and an increasing confidence that such limits can be articulated at international level.

[18] It is also encouraging that many states have shown a readiness to clarify how IHL already constrains autonomous weapon systems and some have proposed the further sharing of current military practice.

[19] The ICRC is convinced that international limits on autonomous weapon systems must take the form of new, legally binding rules. New rules are required because of the seriousness of the risks, the necessity to clarify how existing IHL rules apply and the need to develop and strengthen the legal framework in line with ethical and rule of law issues and humanitarian considerations.

[…]

[20] Considering current military developments in how autonomous weapon systems are being used and deployed, the ICRC urges the High Contracting Parties to the CCW to take action now towards the adoption of new rules.

[…]

D. REPORT OF THE SPECIAL RAPPORTEUR CONSIDERING LETHAL AUTONOMOUS ROBOTICS

N.B. The Case Autonomous Weapon Systems, available at: https://casebook.icrc.org/case-study/autonomous-weapon-systems also reproduces extracts from this Report, which only partially overlap with those reproduced here.

[Source: Human Rights Council, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, A/HRC/23/47, 9 April 2013, references partially omitted, available at: https://documents-dds-ny.un.org/doc/UNDOC/GEN/G13/127/76/PDF/G1312776.pdf?OpenElement]

 […]

III. Lethal autonomous robotics and the protection of life

26. For societies with access to it, modern technology allows increasing distance to be put between weapons users and the lethal force they project. For example, UCAVs [unmanned combat aerial vehicles], commonly known as drones, enable those who control lethal force not to be physically present when it is deployed, but rather to activate it while sitting behind computers in faraway places, and stay out of the line of fire.

27. Lethal autonomous robotics (LARs), if added to the arsenals of States, would add a new dimension to this distancing, in that targeting decisions could be taken by the robots themselves. In addition to being physically removed from the kinetic action, humans would also become more detached from decisions to kill – and their execution.

[…]

31. Some argue that robots could never meet the requirements of international humanitarian law (IHL) or international human rights law (IHRL), and that, even if they could, as a matter of principle robots should not be granted the power to decide who should live and die. These critics call for a blanket ban on their development, production and use. To others, such technological advances – if kept within proper bounds – represent legitimate military advances, which could in some respects even help to make armed conflict more humane and save lives on all sides. According to this argument, to reject this technology altogether could amount to not properly protecting life.

[…]

36. As with UCAVs and targeted killing, LARs raise concerns for the protection of life under the framework of IHRL as well as IHL. The Special Rapporteur recalls the supremacy and non-derogability of the right to life under both treaty and customary international law. Arbitrary deprivation of life is unlawful in peacetime and in armed conflict.

[…]

C. The use of LARs during armed conflict

63. A further question is whether LARs will be capable of complying with the requirements of IHL. To the extent that the answer is negative, they should be prohibited weapons. However, according to proponents of LARs this does not mean that LARs are required never to make a mistake – the yardstick should be the conduct of human beings who would otherwise be taking the decisions, which is not always a very high standard.

64. Some experts have argued that robots can in some respects be made to comply even better with IHL requirements than human beings. Roboticist Ronald Arkin has for example proposed ways of building an “ethical governor” into military robots to ensure that they satisfy those requirements.

65. A consideration of a different kind is that if it is technically possible to programme LARs to comply better with IHL than the human alternatives, there could in fact be an obligation to use them – in the same way that some human rights groups have argued that where available, “smart” bombs, rather than less discriminating ones, should be deployed.

66. Of specific importance in this context are the IHL rules of distinction and proportionality. The rule of distinction seeks to minimize the impact of armed conflict on civilians, by prohibiting targeting of civilians and indiscriminate attacks. In situations where LARs cannot reliably distinguish between combatants or other belligerents and civilians, their use will be unlawful.

67. There are several factors that will likely impede the ability of LARs to operate according to these rules in this regard, including the technological inadequacy of existing sensors, a robot’s inability to understand context, and the difficulty of applying IHL language in defining non-combatant status in practice, which must be translated into a computer programme. It would be difficult for robots to establish, for example, whether someone is wounded and hors de combat, and also whether soldiers are in the process of surrendering.

68. The current proliferation of asymmetric warfare and non-international armed conflicts, also in urban environments, presents a significant barrier to the capabilities of LARs to distinguish civilians from otherwise lawful targets. This is especially so where complicated assessments such as “direct participation in hostilities” have to be made. […]

69. Yet humans are not necessarily superior to machines in their ability to distinguish. In some contexts technology can offer increased precision. For example, a soldier who is confronted with a situation where it is not clear whether an unknown person is a combatant or a civilian may out of the instinct of survival shoot immediately, whereas a robot may utilize different tactics to go closer and, only when fired upon, return fire. Robots can thus act “conservatively” and “can shoot second.” Moreover, in some cases the powerful sensors and processing powers of LARs can potentially lift the “fog of war” for human soldiers and prevent the kinds of mistakes that often lead to atrocities during armed conflict, and thus save lives.

70. The rule of proportionality requires that the expected harm to civilians be measured, prior to the attack, against the anticipated military advantage to be gained from the operation. This rule, described as “one of the most complex rules of international humanitarian law,” [Human Rights Watch, Losing Humanity: The Case Against Killer Robots (2012), p. 31] is largely dependent on subjective estimates of value and context-specificity.

71. Whether an attack complies with the rule of proportionality needs to be assessed on a case-by-case basis, depending on the specific context and considering the totality of the circumstances. The value of a target, which determines the level of permissible collateral damage, is constantly changing and depends on the moment in the conflict. […] The inability to “frame” and contextualize the environment may result in a LAR deciding to launch an attack based not merely on incomplete but also on flawed understandings of the circumstances. It should be recognized, however, that this happens to humans as well.

72. Proportionality is widely understood to involve distinctively human judgement. The prevailing legal interpretations of the rule explicitly rely on notions such as “common sense”, “good faith” and the “reasonable military commander standard.” It remains to be seen to what extent these concepts can be translated into computer programmes, now or in the future.

[…]

74. In view of the above, the question arises as to whether LARs are in all cases likely (on the one hand) or never (on the other) to meet this set of cumulative standards. The answer is probably less absolute, in that they may in some cases meet them (e.g. in the case of a weapons system that is set to only return fire and that is used on a traditional battlefield) but in other cases not (e.g. where a civilian with a large piece of metal in his hands must be distinguished from a combatant in plain clothes). Would it then be possible to categorize the different situations, to allow some to be prohibited and others to be permitted? […]

D. Legal responsibility for LARs

75. Individual and State responsibility is fundamental to ensure accountability for violations of international human rights and international humanitarian law. Without the promise of accountability, deterrence and prevention are reduced, resulting in lower protection of civilians and potential victims of war crimes.

[…]

77. The composite nature of LAR technology and the many levels likely to be involved in decisions about deployment result in a potential accountability gap or vacuum. Candidates for legal responsibility include the software programmers, those who build or sell hardware, military commanders, subordinates who deploy these systems and political leaders.

78. Traditionally, criminal responsibility would first be assigned within military ranks. Command responsibility should be considered as a possible solution for accountability for LAR violations. Since a commander can be held accountable for an autonomous human subordinate, holding a commander accountable for an autonomous robot subordinate may appear analogous. Yet traditional command responsibility is only implicated when the commander “knew or should have known that the individual planned to commit a crime yet he or she failed to take action to prevent it or did not punish the perpetrator after the fact.” [Protocol I additional to the Geneva Conventions, 1977, arts. 86 (2) and 87] It will be important to establish, inter alia, whether military commanders will be in a position to understand the complex programming of LARs sufficiently well to warrant criminal liability.

[…]

80. The question of legal responsibility could be an overriding issue. If each of the possible candidates for responsibility identified above is ultimately inappropriate or impractical, a responsibility vacuum will emerge, granting impunity for all LAR use. If the nature of a weapon renders responsibility for its consequences impossible, its use should be considered unethical and unlawful as an abhorrent weapon.

81. A number of novel ways to establish legal accountability could be considered. One of the conditions that could be imposed for the use of LARs is that responsibility is assigned in advance. […] A system of “splitting” responsibility between the potential candidates could also be considered. In addition, amendments to the rules regarding command responsibility may be needed to cover the use of LARs. In general, a stronger emphasis on State as opposed to individual responsibility may be called for, except in respect of its use by non-State actors.

[…]

G. Taking human decision-making out of the loop

89. It is an underlying assumption of most legal, moral and other codes that when the decision to take life or to subject people to other grave consequences is at stake, the decision-making power should be exercised by humans. The Hague Convention (IV) requires any combatant “to be commanded by a person”. The Martens Clause, a longstanding and binding rule of IHL, specifically demands the application of “the principle of humanity” in armed conflict. Taking humans out of the loop also risks taking humanity out of the loop.

90. According to philosopher Peter Asaro, an implicit requirement can thus be found in IHL for a human decision to use lethal force, which cannot be delegated to an automated process. Non-human decision-making regarding the use of lethal force is, by this argument, inherently arbitrary, and all resulting deaths are arbitrary deprivations of life.

[…]

92. Even if it is assumed that LARs – especially when they work alongside human beings – could comply with the requirements of IHL, and it can be proven that on average and in the aggregate they will save lives, the question has to be asked whether it is not inherently wrong to let autonomous machines decide who and when to kill. The IHL concerns raised in the above paragraphs relate primarily to the protection of civilians. The question here is whether the deployment of LARs against anyone, including enemy fighters, is in principle acceptable, because it entails non-human entities making the determination to use lethal force.

93. This is an overriding consideration: if the answer is negative, no other consideration can justify the deployment of LARs, no matter the level of technical competence at which they operate. While the argument was made earlier that the deployment of LARs could lead to a vacuum of legal responsibility, the point here is that they could likewise imply a vacuum of moral responsibility.

94. This approach stems from the belief that a human being somewhere has to take the decision to initiate lethal force and as a result internalize (or assume responsibility for) the cost of each life lost in hostilities, as part of a deliberative process of human interaction. This applies even in armed conflict. Delegating this process dehumanizes armed conflict even further and precludes a moment of deliberation in those cases where it may be feasible. Machines lack morality and mortality, and should as a result not have life and death powers over humans. This is among the reasons landmines were banned.

[…]

Discussion

I. Classification of the Situation and Applicable Law

1. (Document A, Summary; Document B, paras [1]-[3])

  1. According to information in Documents A and B, how would you classify the situation in Libya in 2019-2020? Under which conditions is IHL applicable to a “civil war”? (GC I-IV, Common Arts 2 and 3; P I, Art. 1; P II, Art. 1)
  2. Does the “increasing support from State and non-State actors” mentioned in the UN Report change the classification of the situation? Why/why not?
  3. Does the conclusion of a ceasefire agreement mean that IHL is no longer applicable? Why/why not?

II.  Targeting

2. (Document A, paras 63-64)

  1. Which general IHL rules govern targeting? What is a lawful target? Do these rules apply equally in IACs and NIACs? (P I, Arts 48, 52(1)-(2), 57 and 58; P II, Art. 13; CIHL, Rules 7, 8, 9, 10, 14 and 15)
  2. Considering that IHL is applicable to the situation, what is the status of the HAF (Haftar Affiliated Forces) fighters? Are they combatants? Are they civilians? Under what circumstances can they be lawfully targeted? (P I, Arts 50(1)-(2), 51(1)-(3); P II, Art. 13; CIHL, Rules 3, 5 and 6)
  3. Was it lawful to target the retreating units of HAF?
  4. Did Kargu-2 (the autonomous weapon system used) have to be able to determine whether some HAF fighters had surrendered? Whether they were wounded? (GC I-IV, Common Art. 3; P I, Arts 40 and 41; CIHL, Rules 46 and 47)

III. Use of Lethal Autonomous Weapon Systems

3. (Document C, paras [8]-[15]; Document D, para. 80)

  1. Which types of weapons are prohibited under IHL? The documents stress the inherent unpredictability of lethal autonomous weapons systems (“LAWS”). Does this challenge the possibility of complying with IHL? How? (P I, Arts 35(2) and 51(4); CIHL, Rules 70-71)
  2. Can it be reliably determined whether the employment of LAWS would be prohibited in some or all circumstances? Are States obliged to have certainty in this regard? (P I, Art. 36)

4. (Document B, paras [6]-[16], [20]; Document C, paras [3]-[5]; Document D, paras 26, 94) What are the defining features of autonomy in weapon systems? How does Kargu-2 differ from drones? From loitering munitions?

5. (Document A, paras 63-64; Document C, paras [8]-[10]) According to the ICRC, under what conditions are AWS currently being used? Do you think that the use of Kargu-2 in this case corresponds to those conditions?

6. (Document A, paras 64-65; Document D, paras 64-69)

  1. Does IHL regulate the choice of weapons to be used by parties to an armed conflict? If it is true that AWS could comply with IHL better than humans, is there an obligation to acquire them? And if acquired, does IHL impose a duty to use them? (P I, Art. 57(2)(a)(ii); CIHL, Rule 17)
  2. What were the benefits of using advanced technology such as LAWS in this case? What could be the benefits in general?
  3. To determine the legality of using LAWS, should their performance in terms of distinction, proportionality, and precautions be compared with that of average soldiers, with that of ideal soldiers, or assessed independently of human performance?
  4. May one take into account that human beings can deliberately violate the rules they were instructed to follow? Can machines deliberately violate the rules they were programmed to follow?

7. (Document C, paras [9], [13]; Document D, paras 63, 66-69)

  1. What are the preconditions for a lawful use of LAWS in hostilities if they autonomously carry out the whole targeting process? (P I, Arts 48, 51(2) and 52(2); P II, Art. 13; CIHL, Rule 1)
  2. (Document A, paras 63-64; Document B, paras [3]-[5]) Is the UN Report clear about whether the STM Kargu-2 was used in an autonomous mode? Does it claim that there were any casualties resulting from the deployment of this weapon system?
  3. What challenges do programmers of autonomous weapons face when it comes to complying with the rule of distinction? Why is the ICRC proposing a ban on autonomous weapons targeting persons?
  4. In your opinion, does IHL require that LAWS never make a mistake? What is the yardstick against which their performance should be measured?

8. (Document C, para. [10]; Document D, paras 70-72)

  1. What are the elements of the proportionality analysis under IHL? From what perspective must the proportionality evaluation be made, ex ante or ex post? (P I, Art. 51(5)(b); CIHL, Rule 14)
  2. In your opinion, is the proportionality evaluation inherently subjective, or does it only involve the objective application and comparison of parameters? Would there be benefits if the proportionality assessment were objectivised? What advantages and disadvantages do you see in attributing fixed values to specific categories of targets and casualties and programming those into autonomous weapon systems prior to the attack?
  3. Is it more difficult for an autonomous system to evaluate the military advantage aspect of the proportionality evaluation or the anticipated risks for civilians and civilian objects? (P I, Art. 51(5)(b); CIHL, Rule 14)

IV. Responsibility under IHL

9. (Document C, para. [6]; Document D, paras 75-81)

  1. Does IHL require that accountability for violations be ensured? Which category of violations of IHL must be investigated and potentially prosecuted? (P I, Art. 85; CIHL, Rule 158)
  2. To which conduct can IHL rules be applicable? If Kargu-2 “committed” violations of IHL, could the software programmers and hardware retailers possibly be held accountable? The commander or soldier who decided to deploy them? Even if he or she trusts that the system was programmed to respect IHL? Who else could be held accountable? (P I, Art. 36; CIHL, Rule 151)
  3. In your opinion, is command responsibility an adequate mode of liability for holding individuals responsible for violations of IHL committed through acts of AWS? Do you see any issues with the analogy between human subordinates and autonomous weapon systems? (P I, Art. 86; CIHL, Rule 153)

10. (Document C, paras [4], [11], [14], [17]-[20]; Document D, paras 89-94)

  1. The documents above call for ensuring sufficient human control over the use of force. In your opinion, is this requirement already embedded in IHL, or would it need to be newly adopted? (P I, Art. 1(2); preamble to HR)
  2. Should the decision to kill a certain number of people, or the decision to kill a particular human being, be reserved only to humans? In the latter case, do existing weapons such as artillery, missiles or aerial bombs comply with this requirement?
  3. If philosophy considers it morally wrong to delegate a decision to use lethal force to an automated process, what relevance could it have for law de lege lata and de lege ferenda?