Security Disputes and U.N. Efforts Raise Pressure for Global AI Rules
Leaders, multilateral initiatives and security disputes all point to the widening gap between capability and governance.

Pressure is building across diplomatic, political and security arenas to establish guardrails for artificial intelligence, as competing approaches to governance emerge and the gap between technological capability and oversight widens.
A group of former heads of government, Nobel laureates and leading scientists, convened by The Elders, called on governments “to manage artificial intelligence with an urgency that reflects both scientific evidence and public concern,” warning that current governance frameworks are falling behind rapid advances in the technology.
The group, which was founded by the late Nelson Mandela in 2007 and includes former leaders such as Mary Robinson and Helen Clark, highlighted risks across security, human rights and environmental domains.
“Militaries are integrating commercial AI systems into weapons prematurely,” the group said on Friday, warning of potential violations of international law and escalation risks tied to autonomous and AI-assisted systems.
Governance gap widens
“People expect their governments to regulate companies so profit is not prioritised over public safety,” the group said. “We reject claims that governments cannot or should not regulate AI.”
The group’s call for regulation comes as the United Nations advances a parallel effort to shape AI governance through non-binding mechanisms rather than negotiated rules.
U.N. Secretary-General António Guterres has proposed a three-part framework combining an independent scientific panel, a global policy dialogue and a funding mechanism aimed at expanding AI capacity in developing countries.
The approach reflects a shift toward building shared technical standards and participation capacity before negotiating binding rules — an acknowledgment of geopolitical divisions that have stalled formal regulation.
Guterres has warned that the future of AI “cannot be decided by a handful of countries or left to the whims of a few” powerful nations, companies or billionaires, emphasizing the need for broader participation in shaping the technology’s trajectory.
Security tensions sharpen focus
At the same time, disputes over the use of AI in national security are moving from abstract debate to operational conflict.
In Washington, a standoff between the Pentagon and the AI firm Anthropic over safeguards on military use of artificial intelligence escalated when U.S. President Donald Trump ordered federal agencies to stop using the company’s technology.
But on Friday, Anthropic won a legal round against the Trump administration when a federal judge granted an injunction.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Judge Rita Lin of the Northern District of California ruled in granting the injunction against a government order that had designated the company a supply chain risk. “This appears to be classic First Amendment retaliation.”
Trump and Defense Secretary Pete Hegseth had publicly declared the Pentagon was cutting ties with Anthropic after the company refused to allow unrestricted military use of its Claude AI model. Anthropic’s restrictions bar use of the model in lethal autonomous weapons operating without human oversight and in mass surveillance of Americans.
The dispute centered on whether private developers can impose limits on how governments deploy AI systems in areas such as surveillance and autonomous weapons — a question that goes to the core of emerging governance models.
While the U.S. government has argued that lawful use should not be constrained by vendors, technology firms and researchers have raised concerns about the risks of deploying advanced systems without enforceable safeguards.
Converging signals
Political leaders and scientific bodies are calling for stronger AI oversight even as governments accelerate its adoption in defense and intelligence systems. Multilateral institutions, meanwhile, are attempting to build consensus through technical coordination and capacity-building rather than binding agreements.
The result is an emerging governance landscape defined less by formal treaties than by overlapping initiatives — national policies, corporate standards and multilateral frameworks shaping how AI is used.
The Elders, which is now chaired by Juan Manuel Santos and Graça Machel, argued that without coordinated action, the trajectory of AI development will be set by a select few in power.
“There is nothing inevitable about how AI develops,” the group said. “Who it benefits and harms is a shared global challenge, not a race between a handful of countries or companies.”