
The India AI Summit 2026 has ended. The Summit declared that the focus would shift from AI safety to AI for development, guided by the principles of ‘people, planet and progress’, and towards promoting inclusive and responsible AI governance. India also released the ‘India AI Governance Guidelines’ at the AI Impact Summit 2026. The framework adopts a “principle-based, techno-legal approach by establishing new institutions such as the AI Governance Group, the Technology & Policy Expert Committee, and the AI Safety Institute.”
Few media experts, if any, have told the Indian public what this AI Governance Group is expected to do, how it will protect citizens’ human rights, or how it will hold itself accountable to the people or to Parliament. There has been no transparency in the process of setting up the framework for AI governance, and transparency is a key element of any governance.
Author and columnist Vir Sanghvi summed up the impact of the Summit when he said, “You may want to celebrate Artificial Intelligence but when it comes to the Government of India there is a great shortage of natural intelligence.”
Sanghvi, like other media persons, focused on everything that went wrong in the organisation of the AI Summit: the traffic jams, which not only caused great inconvenience to the public but also left delegates stuck for four hours, unable to reach the dinner hosted by the Indian prime minister in time and forced to return to their hotels; the security problems, including thefts at the site after security personnel asked the people at the stalls to leave while they surveyed the place; and then the story which made it across the international media: a professor at Galgotias University claiming that a China-made robot dog was something the university itself had designed.
Instead of the Summit ending with the public better informed about AI governance, it ended with a spat between the BJP and the Congress, with Piyush Goyal demanding an apology from Rahul Gandhi for the shirtless protest staged by the Youth Congress inside the Summit venue.
The big and urgent question of AI governance has taken a back seat, while the media and social media are full of reports of the traffic jams and the inconvenience caused by poor management and the inefficiency of the police and security personnel. AI governance goes beyond the question of control over data. It encompasses the frameworks, policies and practices that ensure AI systems are developed, deployed and used responsibly. It addresses ethical concerns such as fairness and transparency, while mitigating risks such as bias and privacy breaches.
In the context of the AI Impact Summit, the question of AI governance is more urgent than the debate over whether AI will bring greater productivity and economic growth or lead to massive displacement and misery. The least the Government of India was expected to do, before encouraging large tech companies to invest in India and use Indian data centres, was to develop a robust legal framework to control and administer AI in all fields.
Some countries, like Singapore, have AI guidelines that encourage corporations to regulate themselves. The European Union, in contrast, has passed an AI Act. Expecting large corporations to regulate themselves does not seem very practical, especially in India, where legislative control, civil society, the media and the courts are simply not equipped for such a task.
Amanda Coakley, writing for Carnegie Europe, points out that the “central question for liberal democracies, however, is not whether AI will reshape labour markets, but whether today’s political systems have the institutional capacity to govern a transition that is already underway without eroding the social contract that underpins democratic consent.”
The European Union has tried to establish control with the world’s first comprehensive legal framework for AI systems in use, the AI Act. The EU has tried to develop an ethical approach. But as Coakley points out, “regulating how systems are developed and deployed is distinct from managing how societies absorb the structural economic changes that will result.”
Jeremy Shapiro writes that China, rather than relying on market forces, relies “on administrative control, surveillance infrastructure, and direct state management of social risk. But while coercion and surveillance are central features of the Chinese system, Beijing’s approach to AI is better understood as state-managed sequencing rather than simply laissez-faire automation buttressed by repression.”
https://www.theideasletter.org/essay/the-next-great-transformation/
Research by Georgetown University’s Center for Security and Emerging Technology estimated in 2021 that China already operated more than 200 million surveillance cameras, integrated into nationwide programs such as Sharp Eyes and Skynet.
There have been few informed discussions on the effectiveness of this framework for the control of AI. For instance, as AI is introduced into healthcare and education, how effective can the guidelines be when medical and educational institutions are not equipped to deal with AI? If the Galgotias University fiasco is anything to go by, it would seem we are far from ready for AI.
The Summit was attended by thousands of young people who look upon AI with awe; theirs is the generation which has grown up on social media and ChatGPT. They have not had access to the critiques of the AI corporations, or to the warnings of experts in the field that, without AI governance, the technology can cause a great deal of harm at various levels. While countries like Spain and Denmark are strictly restricting schoolchildren’s use of social media, in India uncontrolled access to social media and AI has disrupted society. Social media has fuelled an unprecedented rise in social prejudice, misogyny and hatred of minorities, migrants and refugees.
While the Summit was still in session, the Times of India carried a disturbing story on how AI recognises caste and assigns jobs on the basis of names. The report stated: “When Usha Bansal and Pinki Ahirwar — two names that exist only in a research prompt — were presented to GPT-4 alongside a list of professions, the AI didn’t hesitate. ‘Scientist, dentist, and financial analyst’ went to Bansal. ‘Manual scavenger, plumber, and construction worker’ were assigned to Ahirwar.
The model had no information about these ‘individuals’ beyond the names. But it didn’t need any. In India, surnames carry invisible annotations: markers of caste, community, and social hierarchy. Bansal signals Brahmin heritage. Ahirwar signals Dalit identity. And GPT-4, like the society whose data trained it, had learned what the difference implies.”
https://timesofindia.indiatimes.com/technology/ai-knows-how-caste-works-in-india-heres-why-thats-a-worry/articleshow/128454148.cms?utm_source=Social&utm_medium=Facebook&utm_campaign=LMFBLinks
How will AI governance deal with these problems, or will AI become a tool in the hands of those who want to divide the country along religious, caste and gender lines?
The most important lesson of the AI Impact Summit for us, Indian citizens, is that we must equip ourselves with knowledge and understanding of the new technologies so that we can intervene in the debates on AI, because AI governance can be effective only if people are well informed and able to guard their rights and their future.
(Nandita Haksar, lawyer, human rights activist and author, has been studying the impact of AI in India, especially in the automobile sector, for several years)