The imperative to develop Artificial Intelligence (AI), a generational global technology disruptor, is as important as the need to use it responsibly, James Manyika, SVP of Research, Labs, Tech & Society at Google, said Wednesday at the NDTV Ind.ai Summit.
"We actually don't see these as different... both are true and important to work on," he said to a question on the trade-off between being 'bold' in advancing AI while also being 'responsible'.
"For us, being bold means being as ambitious as we can. And one of the things that is exciting about being in India, for example, and about the Global South, is the scale of the possibilities, the things you can do at this population scale... I think you can be bold in very ambitious ways about the impact of possibilities you think are going to be beneficial for society."
"You shouldn't trade that off because that is going to benefit people," he said.
"But, at the same time, you also have to think about the responsibility side of it."
Asked about concerns flagged by Stuart Russell, the President of the International Association for Safe & Ethical AI and a US-based professor of Computer Science, Manyika called the concerns "very important" and said Google had, in the past, acknowledged the need to regulate AI.
"The issues Stuart Russell is raising are very important... AI is too important not to regulate and is also too important not to regulate well," he said, underscoring what many AI industry experts and business leaders have said, that regulation can't come at the price of missed opportunities, or "missed use" as he referred to it.
"I think it is important that we think about what are the right rules... what are the right ways to think about being more responsible. And, I often think, when it comes to questions about regulation, we should have a two-sided view."
"On the one hand, of course, regulation must pay attention to the complexity and risks. But I also think regulation has to have another side... it should also enable possibilities we want. It should enable us to innovate, tackle society's greatest challenges, and not limit possibilities."
Speaking to NDTV earlier in the day, Professor Russell struck a warning note when he said the world does not have an answer yet to what will happen to humanity if machines start 'thinking'.
"The only way world leaders could be jolted to act on the ethical and safety side of development of artificial intelligence would be a disaster whose scale is similar to the Chernobyl accident," he said.