The UN’s Amandeep Singh Gill Is Building Global A.I. Guardrails


The UN’s top A.I. diplomat warns against the unchecked concentration of A.I. power while envisioning a more democratic, sustainable future. Courtesy United Nations

Amandeep Singh Gill, recognized on this year’s A.I. Power Index, is part of a small but critical group of leaders shaping how A.I. is integrated into global systems—and international diplomacy itself. As the United Nations Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, Gill sits at the center of one of the most complex policy challenges of our time: how to ensure A.I. strengthens humanity rather than undermines it. Gill is candid about the assumptions, power dynamics and governance hurdles that define the global A.I. landscape. He challenges the prevailing notion that A.I. will automatically deliver “abundance without limits,” warning that the concentration of A.I.-driven power and wealth in a few hands could erode human agency and freedom.

Yet he’s not a pessimist. For Gill, the recent drop in training costs for smaller language models and signs that scaling laws may be reaching their limits signal the possibility of a more democratic, less centralized A.I. future. From negotiating frameworks that reconcile vastly different national A.I. capabilities to confronting the ethical and humanitarian dangers of autonomous weapons, Gill views governance guardrails as the essential foundation of innovation. “Innovation that undermines trust and safety poisons the well for everyone who cares about A.I.,” he says. Gill’s work reflects the UN’s broader ambition to foster responsible A.I. development, building digital capacity, promoting cooperation and investing in shared human and technical infrastructure.

What’s one assumption about A.I. that you think is dead wrong?

That A.I. is going to usher in an era of abundance without limits by turning human intelligence into a commodity. “Intelligence too cheap to count” reminds me of the phrase “electricity too cheap to meter” from the heyday of nuclear hype.

If you had to pick one moment in the last year when you thought “Oh shit, this changes everything” about A.I., what was it?

It was the almost simultaneous occurrence of the drop in training costs for smaller, more efficient LLMs (DeepSeek, Indus, WinAI) and Ilya Sutskever’s NeurIPS talk about scaling laws reaching a plateau. It gave me hope that a different long-term outcome is possible, namely, a more democratic, less concentrated, more sustainable A.I. innovation ecosystem.

What’s something about A.I. development that keeps you up at night that most people aren’t talking about?

The social and political consequences of the enormous concentration of A.I.-derived power and wealth in a few hands. I worry that the growing gulf between the architects of cognition and those who inhabit architected cognitive spaces could undermine human agency and freedom.

What are the biggest challenges in getting countries with vastly different A.I. capabilities and philosophies to agree on common frameworks, and how do you balance innovation with safety?

This is a collective action problem familiar to us from other areas, such as climate change. At the UN, we address it with multidimensional, differentiated agendas that speak to the different priorities not only of countries but also of stakeholders such as the private sector: promotion of A.I. use, risks, governance interoperability (of interest to the private sector), capacity building (of particular concern to developing countries) and so on. We also rely on agreed norms such as human rights to harmonize perspectives. The innovation-safety dilemma is false. Innovation has always proceeded best within guardrails. Innovation that undermines trust and safety poisons the well for everyone who cares about A.I.

How is the UN leveraging A.I. to accelerate progress on the Sustainable Development Goals, particularly in developing nations? What specific A.I. applications have you seen make the most impact in bridging the technology gap?

Our focus is on boosting national capacity to develop and deploy A.I. responsibly for the Sustainable Development Goals—solution-making ability rather than ready-made solutions. There are countless sectors where A.I. will have a beneficial impact: delivering government services in real time and in context; planning and monitoring infrastructure projects; agricultural extension and livestock/fisheries management; the green transition; public health and diagnostic equity; education and more. We have barely scratched the surface with some applications, such as data-driven A.I. tools for predicting food insecurity. The Secretary-General’s recent report on innovative financing options for A.I. capacity building makes the case for urgent investments around the globe in critical enablers such as compute, cross-domain talent and context-rich datasets. A broad innovation base can unleash A.I. solutions close to where the need is.

How do you navigate the tension between military A.I. applications and humanitarian concerns in your diplomatic work, and what progress are you making on international agreements around lethal autonomous weapons?

From a UN perspective, we remain concerned about the potential of military A.I. applications to undermine respect for international humanitarian law, set off new arms races and lower the threshold of conflict. The Secretary-General believes that life and death decisions cannot be delegated to machines and has called on UN Member States to set clear prohibitions and regulations on such systems by 2026.

If you were tasked with using A.I. to protect students in schools, what would be your approach, and what key challenges would need to be solved? How might A.I. contribute to prevention and early intervention efforts without creating surveillance environments that undermine the educational experience?

I would have the students themselves think through the pros and cons of A.I. as a tool to address their insecurity. For example, if they perceive that bullying is an issue or there is a risk of violence with guns or knives, I would have them think through how an A.I. tool could help identify victims or perpetrators and deter intimidation or violence. This would also allow them to think about trade-offs around data collection and use, or A.I. versus analog solutions.

In reflecting on this question, I am reminded of an experiment by Carnegie Mellon University researchers in which air quality sensors were brought into class. Students experimented with them, took them out into the parking lot and realized that there were spikes in pollution at pick-up and drop-off times. They engaged their parents in conversations empowered by the data they had collected and brought about a change in behavior to improve air quality. This is an example of the empowering use of technology, and this is how we should approach A.I. solutions in general.

