Opinion: Make the Robot Your Colleague, Not Overlord

Catherine Thorbecke, Bloomberg Opinion

There's the Terminator school of thinking about artificial-intelligence risk, in which we'll all be killed by our robot overlords. And then there's one in which the machines serve as valued colleagues, if not friends exactly. A Japanese tech researcher is arguing that global AI safety hinges on reframing our efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, Silicon Valley issued a pair of successive warnings about existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage the risks. Then hundreds of AI leaders - including Sam Altman of OpenAI and Demis Hassabis of Alphabet Inc.'s DeepMind - sent shockwaves with a statement that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by "accelerationists" largely drowning out the doomers. Companies and countries have raced to be the first to achieve superhuman AI, brushing off the early calls to prioritize safety. And it has all left the public very confused.


But maybe we've been viewing this all wrong. Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades researching the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs "live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks."


Well, kumbaya. 

Yamakawa hit a nerve because, while he recognizes the threats flagged in 2023, he argues for a workable path toward coexistence with super-intelligent machines - especially at a time when nobody is halting development for fear of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. "Equality" is the sensitive part: humans want to keep believing they are superior to the machines, not equal to them.


His statement has generated a lot of buzz in Japanese academic circles, attracting dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences make people in Asia more likely to see machines as peers rather than adversaries. While the US has produced AI-inspired characters like the Terminator, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet.


Beyond pop culture, there's some truth to this cultural embrace. Japan had the lowest share of respondents who said products using AI make them nervous - just 25%, compared with 64% of Americans - according to a global Ipsos survey last June.

It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever reach a point where they could spur "civilization extinction" remains hotly debated. It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harms the technology could bring - whether that's job displacement, allegations of copyright infringement or reneging on climate change goals.

Still, Yamakawa's proposal is a timely revival of an AI safety debate that has languished in recent years. These discussions can't rest on eyebrow-raising warnings alone, nor proceed in an absence of governance. With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late.

It also shows the need for safety research beyond just the companies trying to create and sell these products - a lesson from the social-media era, when platforms had little incentive to share their findings with the public. Governments and universities must prioritize independent analysis of large-scale AI risks.

Meanwhile, the global tech industry is caught up in a race to create computer systems smarter than humans, and it's far from certain we'll ever get there. But setting godlike AI as the goalpost has created a lot of counterproductive fearmongering.

There might be merit in viewing these machines as colleagues and not overlords. 
