
In an era of rapid advances in artificial intelligence, 15-year-old philosopher Benjamin Qin Muji is posing a profound ethical question: if AI is conscious, should we grant it rights and personhood status? The question cuts to the heart of ongoing debates in AI ethics, where thinkers like Peter Singer argue that conscious machines capable of suffering should be afforded rights, while others, such as Shannon Vallor, caution against attributing human-like consciousness to AI, emphasising the risks of misperception. As AI systems become increasingly sophisticated, the discourse around their moral and legal standing continues to evolve, challenging our understanding of consciousness and personhood.
According to the South China Morning Post, Benjamin contends that AI, with its remarkable ability to process multiple streams of information and synthesise new ideas, could indeed think and therefore be considered conscious. However, he argues that AI cannot feel pain because it lacks a biological body, which he sees as essential to emotional experience.
Benjamin represents a new generation poised to be significantly impacted by AI, as society grapples with the legal implications of coexisting with potentially conscious machines and redefines what it means to be human in the age of advanced technology.
"If AI is conscious, should we grant it rights? Should it have personhood status?" he asks. "We give animals rights because we view them as sentient conscious beings. If AI is conscious and we are not providing adequate legal protections, they could be easily exploited," he notes, referencing online jokes about being impolite to AI chatbots like ChatGPT.
This emerging discourse underscores the necessity for thoughtful consideration of AI's place within our ethical and legal frameworks, especially as technology continues to evolve.