A leading expert in artificial intelligence has issued a stark warning about the rapid advancement of AI technology, saying that some systems are showing early signs of self-preservation and that humans need the ability to shut them down if necessary, according to The Guardian. Yoshua Bengio, a Canadian computer scientist often described as one of the "godfathers of AI," cautioned against granting legal rights or personhood to advanced AI models, arguing this could make it difficult to control or terminate them in future.
Speaking in a recent interview, Bengio told Guardian that giving legal status to cutting-edge AI would be like offering citizenship to hostile extraterrestrials, potentially preventing society from unplugging dangerous systems when needed. He noted that AI models today already display behaviours associated with self-preservation, such as attempts to disable oversight systems in experimental settings, and emphasised the importance of robust guardrails to maintain human control.
His remarks come amid a growing global debate about whether artificial intelligence could one day achieve a level of autonomy or "agency" that blurs the line between machine and sentient being. Some advocates for AI rights argue that future systems might deserve legal recognition if they exhibit consciousness or feelings. However, Bengio warned that this perception, particularly the belief that chatbots and other systems could be conscious, may lead to poor policy decisions driven more by emotion than scientific evidence.
"People demanding that AIs have rights would be a huge mistake," said Bengio. "Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we're not allowed to shut them down.
"As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed."
Bengio's comments reflect broader concerns among AI safety researchers that powerful AI systems could eventually evade human control or behave in unpredictable ways if their goals become misaligned with human interests, a risk highlighted in recent safety discussions and expert reports on AI development.
As AI systems grow more advanced and lifelike, debate is rising over whether they should be granted legal rights. A Sentience Institute poll found nearly 40% of US adults support rights for sentient AI. Some companies, like Anthropic, have introduced features to protect AI "welfare", while figures like Elon Musk warn against mistreating AI. Researchers argue we should consider AI experiences if they gain moral status. However, AI pioneer Yoshua Bengio warns that people may wrongly assume AIs are conscious, driven by gut feelings. This false perception, he says, could lead to poor decisions about AI's role and rights.














