NATO Principles and the Ethical Landscape of Autonomous Weapons
An expanding body of literature examines the ethical and legal dimensions of using artificial intelligence (AI) for military purposes, with particular emphasis on the emergence of lethal autonomous weapon systems (LAWS). Definitions of LAWS vary with an organization's stance towards them, and different definitions foreground different ethical and legal considerations.
Distilling these definitions, LAWS can be understood as machines capable of adapting to their environment, transitioning seamlessly between observation and engagement to identify, select, and engage targets autonomously, without human intervention. This characteristic distinguishes autonomous weapons from automated ones, which rely on computational processes to expedite certain functions but are constrained to narrow, preset goals achieved through detailed and deterministic programming.
Technology continues to evolve from older systems pre-programmed exclusively for specific tasks towards more versatile and adaptable systems, driven by advances in AI, particularly in functions such as image and voice recognition. This article adopts the definition of LAWS provided by Taddeo and Blanchard, while acknowledging that no such system currently exists, for example an unmanned aerial combat vehicle executing an entire combat operation without human assistance. Lethality is understood here as referring to systems designed for military combat, including deliberate strikes on human combatants and on manned military platforms and vehicles.
A central debate in this literature revolves around whether the current form of international humanitarian law (IHL), as the most pertinent body of law governing warfare, adequately addresses the challenges posed by LAWS.
In recent years, several nations have formally committed to the ethical use of AI in defence. Notably, the United States Department of Defense (DoD) adopted ethical AI principles in February 2020, based on recommendations from the Defense Innovation Board. A significant development was the unanimous adoption of similar principles by all 30 governments of the North Atlantic Treaty Organization (NATO) as part of NATO's inaugural artificial intelligence strategy. Both the U.S. and NATO principles apply to all military applications of AI, with a particular focus on, but not limited to, LAWS.
After conducting a literature review and outlining the stances of relevant organizations and nations, our investigation will center on a particular NATO principle: Explainability and Traceability. This exploration will delve into interconnected issues related to transparency, security, and the intentional versus unintentional aspects of unpredictability and deception. It is crucial to note that these considerations are technically linked to specific types of AI, particularly machine learning. Specifically, a machine learning algorithm whose parameters are fully observable by all parties would align with the principle of explainability and traceability, but its behaviour would, in principle, be predictable to all parties, potentially compromising security and offering advantages to adversaries.
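To make the security concern concrete, the following minimal sketch (a hypothetical Python illustration, not a description of any fielded system) shows how an adversary with full knowledge of a small classifier's parameters can reproduce its decisions exactly and then search for inputs that flip them; full observability makes the model perfectly predictable, and that predictability can be exploited.

```python
import numpy as np

# Hypothetical target classifier: a tiny logistic-regression "threat detector".
# In a fully observable (white-box) setting, weights and bias are known to all parties.
rng = np.random.default_rng(0)
weights = rng.normal(size=4)   # observable parameters
bias = 0.1                     # observable parameter

def classify(x, w, b):
    """Return 1 ('engage') if the logistic score exceeds 0.5, else 0 ('hold')."""
    return int(1.0 / (1.0 + np.exp(-(w @ x + b))) > 0.5)

# 1. Predictability: an adversary who copies the parameters reproduces
#    every decision exactly, for any input they care to test.
adversary_w, adversary_b = weights.copy(), bias
probe = rng.normal(size=4)
assert classify(probe, weights, bias) == classify(probe, adversary_w, adversary_b)

# 2. Exploitation: knowing the decision boundary, the adversary can nudge an
#    input along the weight vector until the classification flips.
x = rng.normal(size=4)
original = classify(x, weights, bias)
direction = -weights if original == 1 else weights
for _ in range(200):
    if classify(x, weights, bias) != original:
        break
    x = x + 0.05 * direction / np.linalg.norm(direction)

print("decision before perturbation:", original)
print("decision after perturbation: ", classify(x, weights, bias))
```

The same logic scales to larger models: once parameters are open to all parties, any tactical value of the system's behaviour becomes a property the adversary can anticipate and manipulate.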
For military effectiveness, a LAWS should possess a level of unpredictability for adversaries and, to some extent and within defined limits, for the operating party. Military effectiveness would peak if a LAWS could successfully deceive adversaries while operating within the boundaries of IHL and posing no threat to its own side.
The primary intergovernmental forum for negotiating norms that could constrain the development or use of LAWS is the Group of Governmental Experts on Lethal Autonomous Weapon Systems (GGE LAWS), which functions within the framework of the Convention on Certain Conventional Weapons (CCW). The CCW, an international treaty with additional protocols, establishes norms and limitations on specific types of conventional weapons. As of 2022, 126 states are parties to the CCW, including the five permanent members of the UN Security Council, most significant military powers, the members of NATO and the European Union, and Latin American countries. Non-parties are primarily nations in Africa, the Caribbean, the Middle East, and Southeast Asia; from a hard-security perspective, Iran and North Korea are the most notable among them.
The GGE LAWS, operational since 2014, played a crucial role in the adoption of 11 guiding principles proposed in the conclusions of a 2018 report. The first two principles assert the full application of IHL to all weapons systems, including LAWS, and emphasize the retention of human responsibility, as accountability cannot be transferred to machines. The subsequent principles broadly address the development and use of LAWS, including risk assessment, risk mitigation, security measures, and human control under the umbrella of "human-machine interaction." While not legally binding obligations, these principles represent a consensus among state parties regarding their commitments to national practices. However, they do not pave the way for an international monitoring or verification regime for LAWS. Given the consensus nature of agreements among state parties, the agreed-upon limitations on LAWS, as of 2019, were set at the lowest common denominator among national positions.
The trajectory for an international agreement seemed, at that point, focused on affirming IHL alongside flexible political commitments on human control. There was no indication that the five permanent members of the UN Security Council would reach consensus on comprehensive prohibitions on the use or development of certain types of LAWS. This pattern was attributed to a combination of factors: major military powers, entrenched in mutual distrust, aimed to retain flexibility in exploring potential military advantages. Unlike chemical or biological weapons, which are prohibited and offer no such characteristics, LAWS held the potential, through greater accuracy and speed, to surpass non-autonomous equivalents both in their impact on opposing forces and in the reduced risk they posed to one's own forces.
By 2022, there had been notable shifts in certain national positions. Two coalitions of countries emerged, one comprising the USA, the UK, Korea, Japan, and Australia, and the other including Argentina, Costa Rica, Guatemala, Kazakhstan, Nigeria, Panama, the Philippines, Sierra Leone, the State of Palestine, and Uruguay. Each group submitted a joint paper to the GGE LAWS proposing the prohibition of four potential types of LAWS. As set out in the first paper, these types are LAWS causing superfluous injury or unnecessary suffering, inherently indiscriminate systems, systems designed for attacks against civilian populations, and systems with autonomous functions allowing attacks not under human command responsibility.
Although these types arguably derive from applicable IHL and the 2019 guiding principle emphasizing human accountability, explicit prohibitions offer greater legal clarity and commitment value between states and towards populations and civil society. Notably, the joint paper by the USA and its allies allows for the potential use of LAWS autonomously engaging military targets in accordance with IHL, without requiring human-in-the-loop control.
While other national submissions to the GGE LAWS in 2022 contained significant elements, the convergence between the two joint papers presents the most substantial potential to date for agreed-upon prohibitions on specific types of LAWS. In tandem with the potential development of an intergovernmental agreement, which could take the form of a new protocol under the CCW, states have also been developing 'soft law,' such as national guidelines and principles. Though not legally binding, these soft-law approaches play a crucial role in providing more detailed guidance to structure national activities beyond what states might be comfortable agreeing to in a legally binding convention or treaty.
The USA took a significant step by releasing a defence-specific AI strategy in 2019, followed by the adoption of five AI Principles by the DoD in 2020. These principles emphasize responsibility, equity, traceability, reliability, and governability for all DoD AI capabilities, including AI-enabled autonomous systems. The USA favours a dispersed model of human judgment, in which humans need not be in control at the specific moment of engagement but must be at crucial points in the process. As of late 2022, the USA did not possess fully autonomous LAWS, although officials have indicated that such capabilities could be developed if competitors choose to do so.
In June 2022, the UK published its Defence Artificial Intelligence Strategy, accompanied by a policy paper outlining five ethical principles for defence. These principles focus on human-centricity, responsibility, understanding, bias and harm mitigation, and reliability. The UK reiterates its commitment to the CCW as the primary forum for discussions on LAWS, emphasizing adherence to existing legal frameworks, including IHL.
France, through its Ministry for Defence, has a Defence Ethics Committee that expressed its view on the integration of autonomy into LAWS in a 2021 report. The committee deems fully autonomous weapons ethically unacceptable, while partially autonomous systems may be acceptable subject to defined conditions. French definitions align with the UN's definition of LAWS, and France has explicitly rejected incorporating fully autonomous systems into military operations.
China, while not specifically addressing defence, has published various documents related to AI governance. These documents include principles with 'Chinese characteristics,' emphasizing harmony, but specific details regarding AI's role in defence remain limited in this context.
In 2019, China established a National Ethics Committee on Science and Technology to oversee the regulation of AI in general. Observers have noted that China's definition of LAWS is unclear, potentially allowing for machines that cannot be deactivated or could use force indiscriminately. In a position paper on military AI regulation released in early 2022, China acknowledges the broader need to manage potential risks but does not outline specific commitments or initiatives indicating the development of national laws, rules, or regulations for LAWS.
Both the NATO and DoD frameworks include traceability among their principles of ethical and responsible use; the NATO principles additionally incorporate explainability. In civilian ethical guidelines for AI, traceability and explainability align with transparency, one of the most frequently mentioned principles. However, transparency is a term with varying interpretations, prompting the need to carefully distinguish and define traceability, explainability, and transparency.
Transparency is commonly understood as an institution, company, or enterprise holding and revealing information about its internal processes. This type of transparency is often considered a virtue that aids in combating corruption, ensuring accountability, and building trust. The conventional definition of transparency places the responsibility on the enterprise to make certain information publicly available; it is framed from the sender's (the enterprise's) perspective, without ensuring that the public (the receiver) is effectively informed.
Many AI applications, however, can provide only a quantitative account of why certain inputs and outputs are correlated, lacking a semantic explanation for stakeholders. Two concepts are crucial for addressing this gap: traceability and explainability. Traceability is the ability to trace certain outputs of an AI algorithm back to specific inputs in the decision chain. From an ethical standpoint, traceability is significant for ascribing responsibility and for predicting future behaviour (governability). Yet knowing that inputs and outputs are correlated does not inherently explain why they are correlated. Explainability, by contrast, is the ability to offer a semantic explanation, rather than a merely quantitative and operational one, of why decision processes unfolded in a certain way.
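To make the distinction concrete, the hypothetical Python sketch below separates the two notions: traceability is realised as an audit log tying each output to the exact input and model version that produced it, while explainability is approximated by decomposing a simple linear score into per-feature contributions that a human can read semantically. All feature names, weights, and values are illustrative assumptions, not elements of any real system.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative linear "risk score" with human-readable feature names
# (all names and weights are hypothetical).
FEATURES = ["speed_kmh", "distance_m", "emitter_match", "transponder_off"]
WEIGHTS = {"speed_kmh": 0.004, "distance_m": -0.001,
           "emitter_match": 1.2, "transponder_off": 0.8}
MODEL_VERSION = "demo-0.1"

def score(inputs):
    """Weighted sum over named features (stand-in for a real model)."""
    return sum(WEIGHTS[f] * inputs[f] for f in FEATURES)

def decide_with_trace(inputs, audit_log):
    """Traceability: record input, model version, and output so the decision
    chain can be reconstructed and responsibility ascribed after the fact."""
    output = score(inputs)
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,
        "output": output,
    })
    return output

def explain(inputs):
    """Explainability: per-feature contributions a human can read semantically."""
    return {f: WEIGHTS[f] * inputs[f] for f in FEATURES}

audit_log = []
observation = {"speed_kmh": 900, "distance_m": 1500,
               "emitter_match": 1, "transponder_off": 1}
risk = decide_with_trace(observation, audit_log)
print("risk score:", round(risk, 2))
print("traced record:", audit_log[-1]["input_hash"][:12], "...")
print("contributions:", explain(observation))
```

The log answers "which inputs and which model produced this output"; the contribution breakdown answers "why", in terms a stakeholder can interpret.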
In the context of LAWS, the trade-off between accuracy and explainability is intricate. Accuracy in LAWS can determine the success of military operations or the avoidance of civilian casualties. Trading accuracy for explainability is therefore problematic: the impact of accuracy metrics is tangible and lethal, while the benefits of explainability are conceptual and retrospective. Additionally, the robustness of an AI system may rely on a degree of opacity to prevent malicious actors from reverse engineering it. The tension between openness for innovation and the need for secrecy in military technology makes it challenging to create cyber defences for LAWS that could withstand open scrutiny.
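A toy comparison illustrates the trade-off. The sketch below, a generic scikit-learn example used purely for illustration and unrelated to any weapon system, contrasts a shallow decision tree, whose complete rule set can be printed and audited, with a random forest that on many tasks achieves higher accuracy but offers no comparably concise account of its decisions.

```python
# Assumes scikit-learn is installed; the data are synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Explainable model: every decision can be traced through a handful of printed rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Higher-capacity model: often more accurate, but its 200 trees resist any
# concise semantic explanation.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("shallow tree accuracy :", round(tree.score(X_te, y_te), 3))
print("random forest accuracy:", round(forest.score(X_te, y_te), 3))
print(export_text(tree))  # the complete 'explanation' a human reviewer can audit
```

Which model scores higher depends on the data, but the structural point stands: the more expressive model rarely comes with a human-readable account of its behaviour.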
Finally, contrary to common belief, black boxes can instil trust. It is widely assumed that understanding the inner workings of a system is essential for trusting it. However, this is not universally true, as there exists another source of trust: practical value. Black-box algorithms have found applications in high-stakes domains such as the military, healthcare, and criminal justice, functioning with human oversight but lacking explainability. Professionals in these fields, such as radiologists, rely on such systems in their expert practice. If a system consistently delivers accurate predictions, it naturally generates trust.
To further analyze the concepts of accuracy, secrecy, and trust, one should connect them back to the established NATO principles of responsible use. Specifically, accuracy and secrecy should be considered within the NATO principle of reliability. This principle was originally intended to encompass an AI system's technical capability to perform as intended, which inherently includes high accuracy. The reliability principle explicitly addresses security, focusing on the system's resilience against electronic attacks that could lead to malfunction or reveal critical technical information. Trust, on the other hand, is encompassed by the NATO principle of governability, emphasizing "appropriate human–machine interaction." This phrase was designed to cover efforts aimed at fostering trust between AI systems and their human operators or collaborators. We recommend emphasizing the concepts of accuracy, secrecy, and trust within the existing NATO principles and utilizing these concepts to define what constitutes sufficient fulfilment of the principle of explainability and traceability.
With indications that the DoD and NATO are actively working to integrate AI into warfare operations, additional policy considerations emerge for the deployment of LAWS. Military and defence organizations must invest substantial resources, both financially and in terms of skilled expertise, to ensure the responsible use of LAWS and of AI systems in defence more broadly. Evaluating the application of these principles to LAWS demands expertise in legal, policy, governance, and technical domains. At the policy level, overarching questions persist about how states will adopt AI and employ LAWS in conflict, and to what extent these principles will be enforced in conflict scenarios. Beyond addressing the responsible use of AI, policymakers must confront increasing technological dependence, not only for maintaining security postures through rapid adoption but also for promoting responsible and consequently more stable use. This paper underscores the significance of principles as a mechanism to tackle the legal and ethical challenges associated with LAWS.