US, China opt out of joint declaration on AI use in military
Published by Global Banking & Finance Review®
Posted on February 5, 2026
2 min read · Last updated: February 5, 2026
The US and China abstained from a joint declaration on military use of AI at a recent summit, highlighting global concerns over AI governance in warfare.
By Victoria Waldersee
A CORUNA, Spain, Feb 5 (Reuters) - Around a third of countries attending a military AI summit agreed on Thursday to a declaration on how to govern deployment of the technology in warfare, but military heavyweights China and the U.S. opted out.
Tensions in relations between the United States and European allies, and uncertainty over how transatlantic ties will look in coming months and years, made some countries hesitant to sign joint agreements, several attendees and delegates said.
The pledge underscores growing concern among some governments that rapid advances in artificial intelligence could outpace rules around its military use, raising the risk of accidents, miscalculation or unintended escalation.
Governments are facing a "prisoner's dilemma", caught between putting responsible restrictions in place and not wanting to limit themselves in comparison with adversaries, said Dutch Defence Minister Ruben Brekelmans.
"Russia and China are moving very fast. That creates urgency to make progress in developing AI. But seeing it going fast also increases the urgency to keep working on its responsible use. The two go hand-in-hand," he said in comments to Reuters.
Only 35 of the 85 countries attending the Responsible AI in the Military Domain (REAIM) summit in A Coruna, Spain, signed a commitment to 20 principles on AI on Thursday.
These included affirming human responsibility over AI-powered weapons, encouraging clear chains of command and control, and sharing information on national oversight arrangements "where consistent with national security".
The document also outlined the importance of risk assessments, robust testing and training and education for personnel operating military AI capabilities.
At two prior military AI summits in The Hague and Seoul in 2023 and 2024 respectively, around 60 nations, excluding China but including the United States, endorsed a modest "blueprint for action" without legal commitment.
While this year's document was also non-binding, some countries were still uncomfortable with the idea of endorsing more concrete policies, said Yasmin Afina, a researcher at the U.N. Institute for Disarmament Research and an adviser on the process.
Major signatories on Thursday included Canada, Germany, France, Britain, the Netherlands, South Korea and Ukraine.
(Reporting by Victoria Waldersee; editing by Aislinn Laing and Mark Heinrich)
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines programmed to think and learn. AI is used in various applications, including military technology, to enhance decision-making and operational efficiency.
Military AI governance involves establishing guidelines and principles for the ethical and responsible use of artificial intelligence in military operations, ensuring that technology is used safely and effectively.
A joint declaration is a formal agreement made by multiple parties, often countries, to express shared commitments or principles on specific issues, such as the governance of AI in military contexts.
Risk assessments are systematic processes used to identify, evaluate, and prioritize risks associated with a particular action or technology, such as the deployment of AI in military operations.
Human responsibility in AI refers to the principle that humans must maintain oversight and accountability for decisions made by AI systems, especially in critical areas like military applications.