UN moves to regulate AI amid warnings of disinformation, bio‑threats

Unlike previous multilateral efforts, this step aims for institutional oversight


Artificial intelligence, increasingly seen as a rising threat to the global security order, is now firmly on the agenda of world leaders gathering at the United Nations this week. Experts at the gathering warned that without urgent regulation, AI could accelerate threats ranging from engineered pandemics and mass disinformation to autonomous weapons and the destabilisation of democracies.

In a landmark development, the UN has approved a new governance framework to manage AI globally.

Unlike prior multilateral efforts — such as AI summits convened by the UK, South Korea, and France, which produced only non‑binding commitments — this step aims for institutional oversight.

Last month, the UN General Assembly passed a resolution establishing two key bodies: a global forum and an autonomous scientific panel of experts.

On Wednesday, the UN Security Council will host an open debate on using AI responsibly, raising questions such as how the Council can ensure AI applications comply with international law, and aid peace processes and conflict prevention.

On Thursday, in the course of the UN’s annual meeting, Secretary‑General António Guterres will inaugurate the Global Dialogue on AI Governance — a platform for states and stakeholders to exchange ideas and formulate cooperative strategies. The forum is slated to meet formally in Geneva next year and in New York in 2027.

Concurrently, recruitment will begin for 40 experts to serve on the scientific panel — led by two co‑chairs, one from a developed nation and one from the developing world. The panel has drawn comparisons to the UN’s climate science machinery, including its annual COP gatherings.

Isabella Wilkinson, a research fellow at Chatham House, hailed the move as “a symbolic triumph” and “the most globally inclusive approach to governing AI”.

But she also cautions that in practice these new bodies may lack real power, especially considering the UN’s slow decision‑making apparatus, which may struggle to keep pace with the rapid evolution of AI.

Ahead of the sessions, a coalition of AI experts has called on governments to establish red lines — minimum guardrails to prevent the most critical and unacceptable AI risks — by late next year. Among them are figures from OpenAI, DeepMind, and Anthropic, who argue that nations should negotiate a binding AI treaty, much like existing treaties banning nuclear testing or regulating biological weapons.

“One idea is simple,” said Stuart Russell, an AI professor at UC Berkeley. “Just as we require safety checks for pharmaceuticals or power plants, developers might need proof of safety before entering the market.”

He proposed that AI oversight could mirror the function of the International Civil Aviation Organization, coordinating safety rules across borders. Rather than rigid regulations, diplomats might adopt a more fluid 'framework convention' adaptable to rapid changes in AI technology.

As the UN sets the stage for these initiatives, all eyes will be on whether member states can translate ambition into enforceable measures — and whether the new bodies can act fast enough to keep pace with AI’s forward march.

With agency inputs
