What the U.S. AI Showdown Reveals About Competing Governance Models
As the Pentagon seeks sovereign authority, the U.N. advances redistribution, dialogue and shared technical baselines.

WASHINGTON (AN) — For more than a decade, diplomats in Geneva have wrestled with a question that once felt speculative: What happens when machines begin to make battlefield decisions?
The meetings, held in U.N. disarmament forums, have produced draft principles, warnings and stalled proposals. Advocacy groups have called for binding rules to ensure “meaningful human control” over autonomous weapons. The International Committee of the Red Cross has cautioned that delegating lethal force to algorithms risks eroding the legal foundations designed to protect civilians.
The debates have been careful and incremental. In Washington last week, they collided with something far less patient.
At 5:01 p.m. on Friday, a Pentagon deadline expired in a dispute with Anthropic, one of the world’s leading artificial intelligence firms. The Defense Department had insisted that the company’s systems be available for “all lawful purposes.” Anthropic declined, seeking explicit assurances that its model would not be used for mass domestic surveillance or in fully autonomous weapons.
U.S. President Donald Trump escalated the standoff, directing federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology.” Defense Secretary Pete Hegseth followed by designating the company a “supply-chain risk to national security,” effectively severing its access to government contracts.
Anthropic’s chief executive, Dario Amodei, defended the refusal. “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” he wrote. Some applications, he argued, remain “outside the bounds of what today’s technology can safely and reliably do.”
For a few hours, it looked like a rupture — Silicon Valley’s safety culture colliding head-on with Washington’s assertion of sovereign authority. Then the ground shifted.
A Recalibration
OpenAI, Anthropic’s chief rival, announced it had reached its own agreement with the Pentagon.
Under the deal, OpenAI agreed to permit use of its systems for “any lawful purpose.” At the same time, the company said it would embed technical safeguards designed to prevent domestic surveillance or fully autonomous weapons use.
“In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome,” OpenAI chief executive Sam Altman wrote, using the Trump administration’s preferred name for the Department of Defense. He added that OpenAI would “build technical safeguards to ensure our models behave as they should.”
The distinction between the two approaches was subtle but telling. Anthropic sought to fix limits in legal language. OpenAI accepted the government’s authority while promising to encode constraints in software.
The message, however, was clear: in Washington’s view, lawful use is determined by the government. Safeguards may exist, but they are to be negotiated within that framework, not imposed upon it.
The immediate consequences of the Anthropic designation remain unclear. Claude has been used in classified environments for analysis and data processing, and migrating those workloads to other systems would be complex and gradual. Other vendors, including xAI, are expanding defense engagements.
The longer-term implications are less about this contract than about how artificial intelligence is reshaping economic productivity, military planning and geopolitical influence simultaneously. Governance is emerging from overlapping arenas: executive orders, procurement contracts, technical architectures, advisory panels and development funds.
A Different Conversation at the U.N.
The confrontation unfolded as the United Nations advances a more expansive and slower approach to AI governance.
In December, the U.N. Development Program warned that artificial intelligence could trigger what it called a “Next Great Divergence.” Countries with computing power and skilled workforces would accelerate ahead; others risked being left behind. “The central fault line in the AI era is capability,” said Philip Schellekens, UNDP’s Asia-Pacific chief economist.
U.N. Secretary-General António Guterres has since proposed a $3 billion fund aimed at expanding computing infrastructure and technical capacity in developing nations. That initiative accompanies the launch of an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance — mechanisms designed to build shared technical baselines and broaden participation before formal regulation is even attempted.
“The future of AI cannot be decided by a handful of countries or left to the whims of a few billionaires,” Guterres told the India AI Impact Summit last month.
The emphasis is different. Where Washington asserted control, the U.N. is attempting redistribution. Where the Pentagon used procurement leverage, the U.N. is building advisory panels and funding mechanisms. Where the administration framed the issue as one of lawful authority, the U.N. frames it as one of shared capacity and inclusion.
Sovereignty Versus Structure
The Pentagon’s position throughout the dispute was straightforward: private contractors cannot dictate how lawfully acquired tools are used for national security. Anthropic tested that boundary. OpenAI worked within it.
The episode suggests that, at least for now, the United States sees AI governance less as a multilateral rule-making project and more as a function of domestic law, executive authority and technical design.
That does not end the international conversation. Negotiations in Geneva over autonomous weapons continue, even if progress remains slow. Humanitarian organizations continue to press for binding limits. Developing nations continue to demand a voice in setting standards that may shape their economic futures.
Later this year, governments will meet again at the Convention on Conventional Weapons review conference in Geneva, where they face renewed pressure to begin negotiating binding international law.
Advocacy groups view the Anthropic dispute as evidence of the risks diplomats have warned about for years. The Campaign to Stop Killer Robots, a coalition of more than 270 civil society organizations, said it underscored the urgency.
“We are at a pivotal moment for humanity,” said Nicole Van Rooijen, the campaign’s executive director. “When the companies building these technologies are themselves refusing to deploy it on safety grounds, it must raise alarm bells for governments and people everywhere.”
The group argued that reliance on voluntary safeguards or internal corporate policies is insufficient. “Every year without binding international law, the gap between what these systems can do and our ability to govern them grows wider,” Van Rooijen said. “The time for kicking the can down the road has passed.”