A global summit held in Seoul on Tuesday has unveiled a significant “blueprint for action” to guide the responsible use of artificial intelligence (AI) in military applications. This new framework marks a shift from last year’s less concrete “call to action,” providing more practical guidelines, though it remains legally non-binding.
The Responsible AI in the Military Domain summit, the second of its kind, was attended by representatives from 96 nations, including major global players such as the United States and China. However, it remains unclear how many of these countries will endorse the new document.
The summit follows a similar event in Amsterdam last year, where approximately 60 nations supported a more general call for responsible AI use without legal commitments. This year’s summit, co-hosted by South Korea, the Netherlands, Singapore, Kenya, and the United Kingdom, aimed to build on that groundwork with more actionable steps.
Netherlands Defence Minister Ruben Brekelmans highlighted the progress made since the previous summit. “Last year was more about creating shared understanding; now we are getting more towards action,” Brekelmans told Reuters. The blueprint outlines specific actions, including risk assessments, maintaining human control, and implementing confidence-building measures to manage potential risks associated with military AI.
One of the key additions in this year’s document is a focus on preventing AI from being used to aid the proliferation of weapons of mass destruction, particularly by terrorist groups. The document also stresses the importance of maintaining human oversight over the deployment of nuclear weapons.
South Korean officials noted that while the document aligns with principles found in other frameworks, such as the U.S. government’s declaration on responsible military AI use, the Seoul summit strives to ensure that discussions remain balanced and not dominated by any single nation or entity.
Source: Agencies