- Sam Altman called for international AI regulation similar to the IAEA at the India AI Impact Summit in Delhi
- OpenAI launched an initiative to build AI infrastructure and skills tailored for India
- India is OpenAI's fastest growing market with 100 million weekly ChatGPT users
If there is one consensus from the massively successful India AI Impact Summit 2026 held in Delhi, it is the call to regulate artificial intelligence. The latest to endorse this view is Sam Altman, chief of ChatGPT-maker OpenAI.
Addressing the Delhi summit today, Altman said the world needs a body similar to the International Atomic Energy Agency (IAEA) to coordinate oversight of the extremely rapid development of AI across nearly every field.
"Democratisation of AI is the best way to ensure humanity flourishes... centralisation of AI in one company or country could lead to ruin," Altman said at the event where he was one of the top tech CEOs in attendance.
"This is not to suggest that we won't need any regulation or safeguards. We obviously do, urgently, like we have for other powerful technologies. We expect the world may need something like the IAEA for international coordination of AI," Altman said.
Such a body should be able to "rapidly respond to changing circumstances", he added.
India And AI Opportunities
Altman described India as one of the countries where OpenAI's service is growing fastest. ChatGPT has 100 million weekly users in India, more than a third of them students, he said.
"This is one of our fastest growing markets in the world. Maybe it's the fastest at this point. It's certainly the fastest for Codex," Altman told reporters.
His comments came hours after he announced the launch of the 'OpenAI for India' initiative at the Delhi summit. The overall goal of this initiative is to build infrastructure, strengthen skills and create local partnerships to develop AI solutions tailored for India.
The objective is to build "AI with India, for India, and in India," Altman said.
OpenAI, for example, announced a plan to build data centre infrastructure in India along with Tata Consultancy Services (TCS).
Altman told news agency ANI he believes India is not just participating in the AI revolution but leading it.
"It (AI) will definitely impact the job market, but we always find new things to do, and I have no doubt we will find lots of better ones this time," Altman said.
OpenAI, which set up its first office in Delhi last year, counts India as its second-largest user base. "It is amazing to be here, obviously the work happening in India and the adoption of AI is leading the world, and I can't wait to see what goes next," Altman told ANI.
The new data centre infrastructure will allow OpenAI's most advanced models to run securely in India, delivering lower latency while meeting data residency, security, and compliance requirements for mission-critical and government workloads.
Experts have frequently drawn the analogy between AI and nuclear development in recent times.
On Wednesday, UC Berkeley professor Stuart Russell said at the NDTV Ind.AI Summit that the world does not have an answer yet to what would happen to humanity if machines start 'thinking'.
The only thing that could jolt world leaders into acting on the ethical and safety aspects of AI development would be a disaster on the scale of the Chernobyl accident, the Distinguished Professor of Computer Science told NDTV.
"Some of the CEOs, pretty much all the leading CEOs, have admitted there is enormous risk to humanity. Privately, they will say, 'I wish I could stop'. The one person who said it publicly is Dario Amodei, the CEO of Anthropic," Russell, who is also the president of the International Association for Safe and Ethical AI, and features among Time Magazine's 100 Most Influential People in AI 2025, told NDTV.
"But I have heard similar things in private from the other CEOs, to the point where one of them said the scenarios are so grim that the best case would be a Chernobyl-scale disaster. Because that would get governments to regulate," he said.
Russell appealed to governments to recognise the risks of AI early on and protect their people.
With inputs from agencies