
Bias in Military Artificial Intelligence

To support states involved in the policy debate on military artificial intelligence (AI), this background paper provides an in-depth examination of the issue of bias in military AI. Three insights emerge.

First, policymakers could usefully develop an account of bias in military AI that captures the shared concern around unfairness. On this account, ‘bias in military AI’ would refer to the systematically skewed performance of a military AI system that leads to unjustifiably different behaviours, which may perpetuate or exacerbate harmful or discriminatory outcomes, depending on such social characteristics as race, gender and class.

Second, among the many sources of bias in military AI, three broad categories are prominent: bias in society; bias in data processing and algorithm development; and bias in use.

Third, the humanitarian consequences of bias in military AI vary with the context and manner of use. They range from the misidentification of people and objects in targeting decisions to flawed assessments of humanitarian needs.

Table of contents

I. What does ‘bias in military AI’ refer to?

II. The sources of bias in military AI     

III. The humanitarian consequences of bias in military AI    

IV. Conclusions

ABOUT THE AUTHORS

Dr Alexander Blanchard is a Senior Researcher in the Governance of Artificial Intelligence Programme at SIPRI.
Laura Bruun is a Researcher in the Governance of Artificial Intelligence Programme at SIPRI.