On 14–15 February, SIPRI and the United Nations Office for Disarmament Affairs (UNODA) convened a two-day capacity-building workshop on artificial intelligence (AI). The workshop, held for a group of 18 students, focused on the topic ‘Responsible AI for Peace and Security’.
The workshop gave emerging practitioners in the field of AI the opportunity to learn about the risks that civilian AI research and innovation could generate for international peace and security, and how to address those risks. The event—the second in a series of four—was held in Tallinn, Estonia, in collaboration with the Tallinn University of Technology. All participants were from science, technology, engineering and mathematics disciplines. They hailed from 13 countries: China, Egypt, Estonia, Greece, India, Indonesia, Iran, Italy, Lithuania, the Philippines, Romania, Singapore and Türkiye.
Through interactive seminars, live polls and scenario-based exercises, the workshop gave participants a grounding in the field of responsible AI. Participants worked through risk assessments, debated ideas and engaged creatively to increase their understanding of: (a) how peaceful AI research and innovation may generate risks for international peace and security; (b) how they could help prevent or mitigate those risks through responsible research and innovation; and (c) how they could support the promotion of responsible AI for peace and security.
The workshop series, which will continue throughout 2024, is part of a European Union-funded initiative on ‘Responsible Innovation in AI for Peace and Security’, conducted jointly by SIPRI and UNODA.
About SIPRI’s Governance of AI Programme
SIPRI’s Governance of AI Programme seeks to contribute to a better understanding of how AI affects international peace and security. The programme’s research on AI explores themes such as: (a) how AI may find uses in conventional, cyber and nuclear force-related systems; (b) how the military use of AI might create humanitarian as well as strategic risks, and opportunities, for arms control and export verification; and (c) how the risks posed by AI may be governed through international law, arms control processes and responsible research and innovation.