If you think the U.S. Congress is moving slowly on AI regulation, you’ll be waiting much longer for a global AI regulator or treaty.
The big picture: That’s the message out of the UN General Assembly in New York this week, as political leaders, tech companies and civil society gather to debate global challenges.
While leaders including President Biden and UN Secretary-General António Guterres mentioned AI in their set-piece speeches, political gridlock at the UN since Russia invaded Ukraine has virtually eliminated the prospect of action.
Why it matters: A plurality of AI experts surveyed by Axios support global guardrails for AI.
But in the absence of UN leadership, no organization has asserted authority over the AI safety debates led by groups ranging from the G-7 to the World Economic Forum and the Organisation for Economic Co-operation and Development.
Driving the news: The leaders of four of the five permanent members of the UN Security Council skipped this week’s debate — only Biden showed up.
A downbeat Guterres this week called for “some global entity” with AI monitoring and regulatory capacity and warned that “governments alone will not be able to tame” AI. But in a CNN interview, he admitted the UN “has no power at all” to bring superpowers together and warned that the world is headed towards a “great fracture.”
Context: Aside from determining what guardrails are needed for powerful AI models, the thorniest question in AI governance is how to involve China.
Most of the international forums that are working on AI governance exclude China by default — including the G-7, OECD, and Council of Europe.
But the U.K. has invited China to a global AI safety summit it will host in November.
The intrigue: By playing a supporting role in the upcoming U.K. summit rather than seizing control of the global debate, the White House has created space to include China in governance discussions.
In the absence of the UN or U.S. leading the global debate, there’s also more room for others traditionally sidelined in global discussions, such as academics, who proposed a range of governance models in a July paper.
The circular state of the debate leaves tech company executives juggling participation in parallel political dialogues.
Flashback: The UN has mobilized to regulate emerging risks (including via its affiliated nuclear and civil aviation watchdogs) only when spurred to action by catastrophic events.
Yes, but: Effective AI guardrails don’t depend on a new global AI agency or treaty.
The EU is close to finalizing a comprehensive AI law, and advocates of guardrails share common ground around the world.
What they’re saying: Omar Sultan Al Olama, the UAE’s AI minister, told Axios that he welcomes any global AI governance effort but “many efforts are non-starters” because they will lack buy-in from developing countries or China.
“The first thing we need to agree on is principles that cut across” industries and geographies, before leaving national governments to implement those principles, he said.
Aisén Etcheverry Escudero, Chile’s technology minister, told those gathered at an AI for Humanity event Wednesday that she wants to frame AI guardrails “not just from a regulation perspective but from a capacity creation perspective,” seeing AI as a way to close digital divides globally.
Anthony Aguirre, executive director of the Future of Life Institute, which six months ago led a call for a pause in generative AI development, told Axios he is optimistic that leaders are taking concerns about possible runaway AI seriously. Across multiple forums, he said, there is agreement that "people want AI systems that are safe and transparent and unbiased."