"Chernobyl-Scale Disaster" Warning In AI Guru Stuart Russell's Note To World

"Look at what the risks are, and set acceptable levels of risk for each type of consequence that we might be considering," UC Berkeley professor Stuart Russell said at the NDTV Ind.AI Summit

Quick Read
Summary is AI-generated, newsroom-reviewed
  • The world lacks an answer if machines begin to think independently, UC Berkeley professor Stuart Russell said
  • AI company CEOs privately admit risks but public warnings remain rare, he said
  • Russell said a Chernobyl-scale disaster might prompt government AI regulations
New Delhi:

The world does not yet have an answer to what would happen to humanity if machines start 'thinking', UC Berkeley professor Stuart Russell said at the NDTV Ind.AI Summit today. The only thing that could jolt world leaders into acting on the ethical and safety aspects of artificial intelligence development would be a disaster on the scale of the Chernobyl accident, the Distinguished Professor of Computer Science told NDTV.

"Some of the CEOs, pretty much all the leading CEOs, have admitted there is enormous risk to humanity. Privately, they will say, 'I wish I could stop'. The one person who said it publicly is Dario Amodei, the CEO of Anthropic," said Russell, who is also the president of the International Association for Safe and Ethical AI and is featured among Time Magazine's 100 Most Influential People in AI 2025.

"But I have heard similar things in private from the other CEOs, to the point where one of them said the scenarios are so grim that the best case would be a Chernobyl-scale disaster. Because that would get governments to regulate," he said.

Russell appealed to governments to recognise the risks of AI early on and protect their people.

"Look at what the risks are, and set acceptable levels of risk for each type of consequence that we might be considering," he said.

The Chernobyl disaster occurred in April 1986 at the Chernobyl nuclear power plant near Pripyat, in what was then Soviet Ukraine. It remains a cautionary tale about secrecy, governance, and the cost of ignoring safety protocols.

Fundamental Question

On the fundamental question of how humanity can maintain power forever over entities more powerful than itself, the professor said this question was first asked by Alan Turing, the founder of computer science, in 1951.

"At a lecture, he said that once the machine thinking method had started, it would not take long to outstrip our feeble powers; therefore, we should have to expect the machines to take control," Russell said.

"That's Alan Turing, maybe one of the greatest geniuses of the 20th century. And we do not yet have an answer to his question. We are pouring enormous resources into building a technology that its own creators, the CEOs of the companies who are building it, say has a significant chance of causing human extinction," he said.

Russell said humanity seems to be in the process of losing control.

"We have seen just in the last week or two some very significant and disturbing developments. One of them is a thing called Moltbook, which is a place online where AI systems can talk to each other. They are already inventing their own religion; they are complaining that humans are watching them; they are proposing to talk to each other in a language that we don't understand, so that they won't be spied on by humans, and so on," the professor said.
