AI After Davos: Global Institutions Agree on the Risk — Not on the Response
From U.N. scientific panels to telecom standards bodies and regional blocs, artificial intelligence governance is taking shape through parallel institutions, not a single global deal.

DAVOS, Switzerland — Artificial intelligence cut across nearly every major policy discussion at last week’s World Economic Forum annual meeting, surfacing in sessions on finance, labor, development, and regulation. What did not emerge was a shared approach to managing its effects across borders.
Across public remarks in Davos, leaders largely agreed on the nature of the challenge: AI is already reshaping labor markets and economic outcomes, and its effects are arriving faster than institutions are adapting. Views diverged on how, and through which institutions, governance should proceed.
Rather than advancing through a single negotiation or forum, AI governance is taking shape through a growing patchwork of institutions, each asserting a different kind of authority. What is emerging is a form of governance by accumulation: authority built through repeated use, technical reliance, and institutional credibility, long before formal rules are written.
Private-sector leaders were among those publicly raising concerns about timing and social absorption. JPMorgan Chase chief executive Jamie Dimon cautioned during a WEF session that the pace of AI deployment could outstrip society’s ability to adapt.
“If it goes too fast for society, that’s where government and business in a collaborative way should step in together,” Dimon said, pointing to the need for retraining and adjustment mechanisms alongside technological rollout.
From a macroeconomic perspective, International Monetary Fund Managing Director Kristalina Georgieva described AI’s impact on employment as “a tsunami … hitting the labor market,” noting that countries’ ability to absorb the shock would vary widely.
At the global level, the United Nations has positioned itself as a source of scientific and normative authority rather than as a regulator. In September, the General Assembly launched the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance, aimed at providing evidence-based input for governments struggling to keep pace with technological change.
“Generative AI is racing into our lives,” General Assembly President Annalena Baerbock said at the launch. “Within this decade, it will reshape industry, economies and societies faster than any technology before it.”
At the same time, standards bodies are asserting influence through technical necessity. Organizations such as the International Telecommunication Union shape markets through voluntary technical frameworks that companies often adopt before laws exist, giving standards practical force even in the absence of formal regulation.
European Central Bank President Christine Lagarde warned in Davos that regulatory divergence could undermine the productivity gains governments hope to achieve through AI.
“The development of AI, the gain of productivity that we hope for, is difficult to reconcile with fragmentation in terms of standards, licensing, and access,” she said. “This can only be remedied by a degree of cooperation.”
Alongside these global efforts, regional approaches are also emerging. At the Asia-Pacific Economic Cooperation summit in November, Chinese President Xi Jinping proposed creating a World Artificial Intelligence Cooperation Organization, arguing that AI governance should ensure the technology becomes a “public good for the international community.”
The proposal reflects a more centralized vision than those favored by the United States and some of its partners, underscoring how parallel governance models are developing in the absence of an agreed framework.
As AI governance advances through practice rather than agreement, authority is being claimed well ahead of rules. U.N. Secretary-General António Guterres has cautioned against allowing that gap to widen unchecked. “Humanity’s fate,” he warned during a U.N. debate on artificial intelligence, “cannot be left to an algorithm.”