At the start of June, SIPRI co-organized two events in Lisbon, Portugal, on responsible artificial intelligence (AI) for peace and security. Both events were held in collaboration with the United Nations Office for Disarmament Affairs (UNODA).
The first event was a roundtable discussion following the fourth Foundations of Trustworthy AI: Integrating Learning, Optimisation and Reasoning (TAILOR) Conference. The roundtable, held under the Chatham House Rule, brought together AI experts, educators specializing in AI curricula development, and governance representatives.
Participants explored strategies for integrating AI ethics and responsible innovation principles into educational frameworks, while also addressing governance challenges related to preventing the misuse of civilian AI technologies. The roundtable underscored the critical role of collaboration and dialogue for shaping responsible AI practices and highlighted the commitment of stakeholders to advancing ethical standards in AI development.
The second event, held on 6–7 June with the Instituto Superior Técnico at the University of Lisbon, was a capacity-building workshop for young AI practitioners from around the world. Interactive sessions over two days equipped participants with the knowledge and skills necessary to address the risks that civilian AI research and innovation may pose to international peace and security. During these sessions, participants were introduced to responsible AI practices and given the opportunity to conduct their own risk assessments, challenge ideas and engage creatively.
The event brought together 18 participants from Brazil, China, France, Germany, India, Italy, Mexico, Portugal, the Republic of Korea (South Korea), Syria and Viet Nam.
The workshop and the roundtable are part of a series that will continue throughout 2024 under a European Union-funded initiative on ‘Responsible Innovation in AI for Peace and Security’ conducted jointly by SIPRI and UNODA.
About SIPRI’s Governance of AI Programme
SIPRI’s Governance of AI Programme seeks to contribute to a better understanding of how AI affects international peace and security. The programme’s research on AI explores themes such as: (a) how AI may find uses in conventional, cyber and nuclear force-related systems; (b) how the military use of AI might create opportunities but also humanitarian and strategic risks for arms control and export verification; and (c) how the risks posed by AI may be governed through international law, arms control processes and responsible research and innovation.
Read more about SIPRI’s Governance of AI Programme.