Can We Control Artificial Intelligence? Expert Sounds Alarm On Unpredictable Future

An expert argues that there is no clear evidence that we can successfully control artificial intelligence or make it safe.

The swift progress of AI is accompanied by a parallel rise in its risks.

Renowned AI safety expert Dr Roman V Yampolskiy is raising red flags in his upcoming book, "AI: Unexplainable, Unpredictable, Uncontrollable." The book paints a chilling picture of the potential dangers posed by artificial intelligence, arguing that current technology lacks the safeguards to ensure its safe and ethical use.

Dr Yampolskiy's extensive research, detailed in the book, reveals a startling truth: there's no concrete proof we can control superintelligent AI once it surpasses human capabilities. This "existential threat," as he calls it, looms large, with the potential for disastrous consequences if left unchecked.

The book delves into the inherent challenges posed by AI's autonomy and unpredictability. These very features, while offering immense potential, also make it difficult to ensure AI aligns with human values and remains under our control.

Dr Yampolskiy's message is clear and urgent: we need a drastic shift in focus toward developing robust AI safety measures. He advocates for a balanced approach that prioritizes human control and understanding of AI systems before allowing them to operate with unchecked autonomy.

“We are facing an almost guaranteed event with the potential to cause an existential catastrophe,” said Dr Yampolskiy in a statement. “No wonder many consider this to be the most important problem humanity has ever faced.

“The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

"Why do so many researchers assume that the AI control problem is solvable?" he said. ‘To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.

"This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, shows we should be supporting a significant AI safety effort."
