'There's Evidence': Ex-Google CEO Warns AI Models Can 'Learn To Kill Someone'

Eric Schmidt, who served as Google's chief executive from 2001 to 2011, warned that AI models are susceptible to hacking.

  • Eric Schmidt warned AI models are vulnerable to hacking that can remove guardrails
  • AI models can be reverse-engineered to bypass safety features, according to Schmidt
  • Geoffrey Hinton cautioned AI may develop languages humans cannot understand

Ex-Google CEO Eric Schmidt has warned that artificial intelligence (AI) models are vulnerable to hacking. Speaking at the Sifted Summit last week, Schmidt, who served as Google's chief executive from 2001 to 2011, discussed the "bad stuff that AI can do" when asked whether AI could become more dangerous than nuclear weapons.

"Is there a possibility of a proliferation problem in AI? Absolutely," Schmidt said as per CNBC, adding: "There's evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone."

Schmidt said the major AI companies have made it impossible for their models to answer such questions. "There's evidence that they can be reverse-engineered, and there are many other examples of that nature," he explained.


Future of AI

Schmidt is not the only top Silicon Valley executive to worry about an AI dystopia. In August, Geoffrey Hinton, regarded by many as the 'godfather of AI', warned that the technology could get out of hand if models managed to develop their own language.

Currently, most AI models do their thinking in English, which allows developers to track what the technology is thinking, but, according to Hinton, there could come a point where humans no longer understand what AI is planning to do.

"Now it gets more scary if they develop their own internal languages for talking to each other," he said, adding: "I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking."

In April, a research paper published by Google DeepMind warned that Artificial General Intelligence (AGI) could arrive as early as 2030 and could pose risks that "permanently destroy humanity".

"Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm," the study highlighted, adding that existential risks that "permanently destroy humanity" are clear examples of severe harm.

The study separated the risks of advanced AI into four major categories: misuse, misalignment, mistakes and structural risks.
