
Responsible innovation in AI for peace and security


Advances in artificial intelligence (AI) present both opportunities and risks for international peace and security. Peaceful applications of AI can help achieve the United Nations Sustainable Development Goals and support UN peacekeeping efforts, for example through the use of drones for medical deliveries, monitoring and surveillance. However, civilian AI can also be misused for political disinformation, cyberattacks, terrorism or military operations. Those working on AI in the civilian sector often remain unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure about the role they can play in addressing them.

In 2018, the UN Secretary-General identified responsible innovation in science and technology as an approach through which academia, the private sector and governments can work to mitigate the risks posed by new technologies.

This initiative, conducted in partnership with the UN Office for Disarmament Affairs (UNODA) and supported by a decision of the Council of the European Union, aims to promote responsible innovation as a way for the AI community to help ensure the peaceful application of civilian AI technology.

Combining awareness-raising and capacity-building activities, the project seeks to provide the civilian AI community—especially the next generation of AI practitioners—with the necessary knowledge and means to understand and mitigate the unintended negative consequences that their work could have on peace and security.  

The initiative is guided by an Advisory Board consisting of university professors from around the world.

For more information on the initiative, its activities, deliverables and the Advisory Board, please consult the initiative’s website.