When Science Outruns Institutions
Who governs science when no one is in charge: Part 4 of a five-part series

Editor’s Note: This weekly series explores how science is governed globally in the absence of a world authority, beginning with the myth and reality of coordination without control.
The systems were already in place. By the time the United Nations created an Independent International Scientific Panel on Artificial Intelligence last year, frontier models were embedded in search engines, integrated into productivity software, and shaping security debates. Governments were convening summits. Companies were releasing new versions in rapid succession.
Institutional architecture followed acceleration.
Under a mandate from the Pact for the Future, U.N. member countries approved the 40-member panel to establish a shared scientific evidence base. In February, Secretary-General António Guterres proposed the next layer: a $3 billion global fund to expand computing capacity, data infrastructure and technical expertise in developing countries.
“The future of AI cannot be decided by a handful of countries or left to the whims of a few billionaires,” Guterres said at the India AI Impact Summit in New Delhi. He warned that without investment, many countries risk being “logged out” of the AI era.
Taken together, the initiatives form what he has described as a “practical architecture” for AI governance: scientific assessment, political dialogue, and capacity-building finance.
The Pace Problem
International scientific institutions are built for deliberation, relying on committees, consultation and consensus. Legitimacy and inclusion take precedence over speed, but AI development has not followed that rhythm.
The panel’s first report is expected in time to inform a Global Dialogue on AI Governance in July. The compressed timeline reflects urgency. The systems under discussion are already shaping markets and public administration.
Guterres has framed the panel’s role as helping countries “move from philosophical debates to technical coordination” by agreeing on how to test systems and measure risk. The sequencing is revealing. The U.N. is working to stabilize expectations after the fact, not to halt or slow AI deployment.
Science diplomacy can convene experts and define baselines, but it is not designed to slow discovery or commercial rollout.
Redistribution as Governance
The capability gap sharpens the problem. A U.N. Development Programme report released in December warned that AI could deepen global divides, with benefits flowing fastest to early movers in higher-income economies while disruption hits hardest where digital infrastructure and social protections are weakest. The report described the risk as a “Next Great Divergence,” reversing decades of economic convergence.
“As a general-purpose technology, AI can lift productivity, spark new industries, and help latecomers catch up,” the report said. Yet gains are concentrated. China accounts for nearly 70% of global AI patents, and more than 3,100 newly funded AI companies have emerged across just six Asia-Pacific economies.
“The central fault line in the AI era is capability,” said Philip Schellekens, UNDP’s Asia-Pacific chief economist. “Countries that invest in skills, computing power and sound governance systems will benefit; others risk being left far behind.”
The proposed $3 billion fund is therefore an attempt to prevent structural exclusion from a technology already reshaping economies. Capacity-building, however, broadens participation; it does not determine how systems are regulated or deployed.
Governance Without Mandate
No global institution holds comprehensive authority over artificial intelligence. Mandates are fragmented across telecommunications, trade, human rights and development forums.
Rather than negotiate a binding convention, the U.N. is assembling parallel functions in scientific assessment, political dialogue and financing for participation. Each operates without enforcement power.
Switzerland’s President Guy Parmelin announced that the 2027 World Summit on Artificial Intelligence will be held in Geneva, calling the city “the epicenter of multilateralism.” India, the United Kingdom, South Korea and France have each hosted global AI summits in recent years.
The geography of convening is expanding. Authority remains diffuse.
Structural Mismatch
The AI episode illustrates a recurring pattern. Innovation advances through distributed research networks and private investment. Deployment precedes consensus. Institutions respond with panels, voluntary frameworks and funding mechanisms.
The lag is not accidental. Global institutions were built to manage nuclear risk, telecommunications standards and climate assessments over multi-year cycles. They were not designed for software systems updated in weeks.
When science outruns institutions, governance becomes provisional. Guidelines are voluntary. Dialogues are exploratory. Financing depends on contributions. Authority remains political and national.
Science diplomacy can create shared vocabulary and testing criteria, but it cannot compel alignment among competing economies.
Speed Versus Legitimacy
The underlying tension is between speed and legitimacy. Faster governance risks exclusion and error. Slower governance risks irrelevance.
The U.N.’s emerging AI framework attempts to balance those pressures by building architecture rather than drafting law: establishing evidence, convening dialogue and broadening capacity before regulatory consensus is achievable.
Whether that architecture will shape technological trajectories or merely document them remains uncertain.
“Traditional policy approaches have proven inadequate to deal with the much larger transformation challenges,” the Organisation for Economic Co-operation and Development reported in its Science, Technology and Innovation Outlook 2025. “To remain effective, policies must be agile: adaptive, forward-looking, and capable of responding to complex and evolving challenges.”
Artificial intelligence is unlikely to be the last field to expose the gap between innovation and oversight. Biotechnology, climate intervention research and quantum systems present similar challenges.
Looking Ahead
If emerging technologies repeatedly outrun institutional design, the question becomes more precise: what can influence achieve when control is absent?
Part 5 examines authority without enforcement, and the conditions under which science diplomacy can still matter.
Next: Part 5 — Authority Without Control

