Governments Turn to AI Firms as New Layer of Governance Emerges
Direct agreements with developers reflect a shift toward operational oversight as regulatory systems lag behind

Governments are increasingly turning to artificial intelligence companies themselves to help govern systems that regulators are still struggling to understand.
Anthropic’s new memorandum of understanding with the Australian government calls for collaboration on AI safety, including sharing information on how its systems are used and providing insight into model capabilities and risks.
Similar arrangements are already in place with government-linked safety institutes in the United States, United Kingdom and Japan.
The agreements reflect a shift in how artificial intelligence is being governed. Rather than relying solely on legislation or multilateral frameworks, governments are seeking direct access to the technical knowledge concentrated inside a small number of firms.
“This MOU gives our collaboration a formal foundation,” Anthropic CEO Dario Amodei said on Tuesday.
Artificial intelligence is often described as unregulated. In practice, multiple governance systems already exist. The European Union’s AI Act, most provisions of which become enforceable in August, imposes obligations on high-risk systems and applies beyond the bloc’s borders.
The Council of Europe has adopted a binding AI treaty, while the United States has relied on executive measures, voluntary commitments and export controls. China regulates algorithms, generative models and cross-border data flows.
These frameworks are developing in parallel, with different priorities and timelines and little coordination between them. The result is not a lack of governance, but a fragmented system.
The agreements with Anthropic introduce a more operational layer. By giving governments insight into how systems perform in practice, they allow oversight to be informed by real-world use rather than external assessment alone. At the same time, they give companies a role in shaping how safety and risk are interpreted.
Collaboration needed to ‘see the full picture’
Evidence suggests that governance inside companies remains uneven.
A new report released by UNESCO and the Thomson Reuters Foundation, based on data from 3,000 companies, found that while many firms report having AI strategies, far fewer can demonstrate how risks are managed in practice or who is accountable when systems fail. The limits of the current approach are already visible.
On the same day it announced its agreement with Australia, Anthropic confirmed that part of the source code for its Claude Code system had been exposed through a publicly accessible file. The company said the incident was caused by human error and did not involve sensitive data, but it highlighted the operational risks that remain even as firms take on a greater role in governance.
Efforts are also underway to introduce a shared evidentiary foundation. The United Nations has established an Independent International Scientific Panel on Artificial Intelligence to synthesize research on AI capabilities, risks and societal impacts.
“No country, no company and no field of research can see the full picture alone,” U.N. Secretary-General António Guterres told the panel last month.
Geneva is positioning itself as a center for coordination. The International Telecommunication Union convenes governments, companies and researchers through its AI for Good platform, and Switzerland is preparing to host a global AI summit in 2027.
For now, however, the central challenge remains unresolved. Artificial intelligence is not ungoverned. It is governed through overlapping systems that have yet to be aligned. The question facing policymakers is whether those systems can be made to work together as the technology continues to advance.
“AI is no longer a niche technical topic – it is a core governance issue,” Thomson Reuters Foundation CEO Antonio Zappulla said. “Without robust oversight of how businesses are adopting AI, we risk causing significant downstream harm to the environment and wider society.”