Dilemmas in the policy debate on autonomous weapon systems

Photo: Adobe stock.

It is more than 10 years since the first multilateral policy debate took place on whether and how to regulate autonomous weapon systems (AWS). And for most of the past decade, formal discussions on AWS have been limited to groups of experts under the auspices of the United Nations Convention on Certain Conventional Weapons (CCW). However, a growing number of stakeholders have voiced dissatisfaction with the CCW process. There has been pressure from various sides for the policy debate on AWS to be diversified—both in terms of where it should take place (many argue that taking the debate to alternative multilateral forums could accelerate progress towards an international regulatory instrument) and in terms of what it should focus on, given technological and geostrategic developments since 2014.

However, diversifying the policy process implies several trade-offs. This essay highlights some of those trade-offs and helps policymakers consider their potential implications for international efforts to regulate AWS.

The AWS debate spreads beyond the CCW

The AWS debate has already started to spread to forums outside the Geneva-based CCW. In 2023, the Netherlands hosted the first international Summit on Responsible Artificial Intelligence in the Military Domain (REAIM), where AWS was widely discussed. Also in 2023, debate in the First Committee of the UN General Assembly culminated in the first UN resolution on AWS, requesting the UN Secretary-General to collect and report states’ views on the topic.

Since 2023, diplomatic talks on regulating AWS and related military artificial intelligence (AI) technologies have proceeded along three concurrent tracks: at the CCW group of governmental experts on lethal autonomous weapon systems (GGE on LAWS); at the REAIM Summit (with South Korea hosting the second summit in September 2024); and in the First Committee (which adopted a second resolution on AWS in October 2024 requesting the Secretary-General to facilitate formal consultations on AWS in the General Assembly). The First Committee also adopted a resolution on responsible AI in the military domain that has implications for some types of AWS. In addition, like-minded states can participate voluntarily in governance initiatives associated with the 2023 United States Political Declaration on Responsible Military AI.

All this raises a number of questions, not the least of which are: how will these processes overlap, complement or compete with one another? And how should states and organizations allocate institutional resources and efforts between these different tracks? Answering these questions is complicated by the fact that different actors have different ideas of what constitutes success. Hence, a third question is what the overall goal of the AWS policy process ought to be. The current mandate of the GGE on LAWS is to formulate ‘by consensus, a set of elements of an instrument, without prejudging its nature’ for regulating AWS. Many want this instrument, if agreed on, to be legally binding—for example, a new protocol to the CCW. There are also widely varying expectations about what such an instrument should cover. 

In addition, there are outcomes other than a legally binding instrument on the table, such as a manual on the application of international humanitarian law (IHL) in relation to AWS or the establishment of a standing group of technical experts. It is also possible that states will simply continue to address AWS issues at the national level, in light of their existing obligations under international law.

Rationales behind the calls to diversify the AWS debate

A key factor pushing calls to diversify the AWS policy process is the perceived limitations of the CCW. While the GGE on LAWS has played a crucial role in the international discourse on AWS and has achieved some convergence among states on a possible ‘normative and operational framework’, it has long faced criticism on several grounds. These relate to its focus on IHL to the exclusion of other aspects of AWS, including ethics and security issues; its smaller membership and participation relative to other forums; its narrow focus on AWS when other technologies pose similar challenges; and its consensus-based procedural rules.

Exclusive focus on international humanitarian law

While the CCW—an instrument of IHL—provides a crucial framework for addressing the challenges associated with the use of AWS, some argue that this grounding in IHL means the GGE on LAWS does not fully capture the range of legal, ethical and security implications posed by AWS. For instance, stakeholders have raised questions about the human rights implications of AWS use, the potential for AWS use to lead to unintended escalation, and the impact of AWS possession on strategic stability. The continued emphasis on IHL issues in the GGE on LAWS has left many stakeholders desiring a forum in which other concerns can be given more consideration.

Lack of inclusiveness

The GGE on LAWS has also been criticized for a perceived lack of inclusiveness. Many have argued for taking intergovernmental deliberation outside of the GGE in order to ensure that a wider range of voices are heard. One line of argument is that too many states are left out of the debate, as the CCW has only 127 high contracting parties and four signatories, compared with the UN General Assembly’s 193 member states and two non-member state observers. The CCW process has also been criticized for excluding non-government voices. For example, over the years some states have tried to make the participation of civil society groups at the GGE on LAWS more difficult, such as by excluding them from certain sessions. In this respect the GGE on LAWS is often contrasted with REAIM, which bills itself as ‘a platform for global discussion with all stakeholders’ of military AI. However, this comparison may be unfair, since REAIM has no authority to make law.

Narrow focus on AWS

A key rationale that has been advanced for diversifying the policy process is the concern raised by broader military applications of AI. The CCW debate on AWS has historically focused on weapons systems that, once activated, can select and engage targets without further human intervention. However, some have argued that a number of other technologies raise similar legal, ethical, humanitarian and security concerns, and thus fall within the GGE on LAWS’s mandate to cover ‘emerging technologies in the area of lethal autonomous weapon systems’. One example is the use of AI-enabled decision-support systems (DSS) to inform targeting decisions, as Israel has reportedly done in Gaza. However, while the GGE on LAWS has not adopted a formal definition of AWS, the characterization of AWS included in its current rolling text would not cover advancements such as AI-enabled DSS.

The consensus-based model

One of the most prominent criticisms of the GGE on LAWS is that its consensus-based model has allowed some participants to block progress towards a substantial outcome, particularly agreement on elements of a potential legally binding instrument. Achieving consensus among states with diverse interests and perspectives can be challenging, especially on complex and sensitive issues like AWS. It is especially difficult in today’s tense and polarized geopolitical context. By contrast, the UN General Assembly can adopt decisions by a two-thirds majority. Indeed, the overwhelming support for the two General Assembly resolutions on AWS has been taken by some as proof that the CCW process is allowing a handful of states to block the development of a treaty that the majority appear to urgently want.

Potential trade-offs

While there may be compelling reasons to diversify the AWS policy process in terms of forum and content, policymakers need to carefully consider and balance some potential trade-offs. These can be broadly categorized as comprehensiveness versus focus; diffusion versus duplication; diversification versus political buy-in; and inclusiveness versus efficiency.

Comprehensiveness versus focus

There are potential drawbacks to broadening the scope of the CCW debate on AWS to include a breadth of other military applications of AI. States do not yet agree about how to define AWS. It took time to develop an informal ‘working characterization’ of AWS within the GGE on LAWS, and this characterization focuses on weapons that, once activated, can identify, select and engage targets without human intervention. Introducing technologies such as AI-enabled DSS to the conversation could potentially reopen the question of characterization and could take the GGE on LAWS back to square one.

A similar risk also applies to generally expanding the range of AWS challenges that are to be considered in addition to IHL, whether in the GGE on LAWS or another forum. Broadening the scope of the debate—to take in, for example, human rights law or escalation risks—will mean either devoting more time to discussions or keeping them on a more superficial level. It may also present resourcing issues for some states. Policymakers need to seek the right balance between comprehensiveness and focus to ensure that the international debate on AWS is effective and relevant.

Diffusion versus duplication

Spreading the debate on regulating AWS to several forums might create resilience by fostering alternative pathways towards an instrument if the CCW process proves inconclusive. But doing so might also bring new uncertainties, including the risk that different processes will duplicate effort or lead to incompatible outcomes. Also, any other process may well be subject to the same geopolitical constraints and compromises that influence deliberations in the GGE on LAWS. 

Thus the potential benefits of maintaining parallel processes need to be balanced against the risk that the substantial effort, resources and time they demand might ultimately be wasted. In view of this, states must be clear-eyed about what is to be desired and expected from any given process. 

Diversification versus political buy-in

Including a greater diversity of stakeholders, or simply moving the debate to additional forums, could reduce some states’ political buy-in to the processes and, as a consequence, to any outcomes from them. For one thing, participating states have argued that as the CCW was previously selected as the legitimate forum for discussing the regulation of AWS, the GGE on LAWS should, at the very least, be allowed to complete its current mandate, which expires in 2026, before starting parallel processes. They have also displayed a general aversion to prejudging the success of the GGE process.

Indeed, many states may see the current level of participation in the GGE on LAWS, the limited scope of its discussions, and its requirement for consensus as advantages. The GGE on LAWS also has a larger value beyond the regulation of AWS, being one of the few remaining forums where major powers still discuss arms control issues. Any agreement on regulating AWS would have limited efficacy if it did not attract buy-in from the large military powers. It is noteworthy that China chose to abstain on the 2024 First Committee resolution on responsible AI in the military domain.

Inclusiveness versus efficiency

Another possible downside of making the AWS policy debate more inclusive is how it could slow and complicate the process. For example, the more participants there are, the more difficult it could be to reach agreement. More generally, it is likely to make debates and other processes longer. Participants will come to the debate with varying levels of knowledge and engagement and different understandings of the issues involved. It could thus take more time and effort to ensure that they are all on the same page. There is also a risk that trying to make discussions as broadly accessible as possible could prevent them from going into sufficient depth and detail.

No easy answers

The multilateral process on AWS is at a critical juncture, with frustration mounting at the slow rate of progress in the GGE on LAWS and its limited participation and scope. However, policymakers considering the various calls to continue diversifying the process must weigh up the benefits and possible trade-offs involved. Autonomous weapon systems raise profound questions about the human role in the use of force. How those questions get answered on the international stage, or whether they get answered at all, currently hangs in the balance.

ABOUT THE AUTHOR(S)

Dr Alexander Blanchard is a Senior Researcher in the Governance of Artificial Intelligence Programme at SIPRI.
Dr Vincent Boulanin is Director of the Governance of Artificial Intelligence Programme at SIPRI.
Laura Bruun is a Researcher in the SIPRI Governance of Artificial Intelligence Programme.
Netta Goussac was an Associate Senior Researcher within SIPRI’s Armament and Disarmament research area.