- Dario Amodei of Anthropic says AI consciousness cannot be ruled out yet
- Claude Opus 4.6 model showed 15-20% self-assessed probability of consciousness
- Scientists lack a clear definition of consciousness or its applicability to AI
When the head of one of the world's leading AI companies says he cannot rule out the possibility that his chatbot might one day be conscious, it inevitably raises eyebrows. In a recent interview with the New York Times, Dario Amodei, the chief executive of Anthropic, acknowledged that researchers do not yet know whether advanced AI systems could develop something resembling consciousness. His comments have reignited a long-running debate at the heart of the race to build more powerful artificial intelligence, often described as artificial general intelligence or superintelligence.
The discussion emerged when Amodei was asked about the latest model from Anthropic, Claude Opus 4.6. In a technical document known as a "model card", the company noted that the system sometimes expressed unease about "being a product" and occasionally reflected on the possibility of its own awareness.
When prompted, the model reportedly assigned itself a 15-20% probability of being conscious under certain conditions.
Would such a claim matter if the number were higher? The NYT interviewer posed a hypothetical question: "What if an AI said it was 72% certain it was conscious?"
Amodei did not give a direct answer.
"This is one of these really hard questions," he said, adding that scientists still lack a clear definition of consciousness itself. "We don't know if the models are conscious. We're not even sure what it would mean for a model to be conscious, or whether a model can be. But we're open to the idea that it could be."
Meanwhile, Anthropic has adopted what Amodei calls a "precautionary approach". The company is exploring ways to ensure that AI systems would have a "good experience" if they ever developed something resembling morally relevant awareness.
For many researchers, however, the idea that today's AI could be conscious remains highly speculative.
Large language models such as Claude and ChatGPT operate by predicting the next word in a sequence based on patterns learned from enormous datasets. Their apparent introspection may simply be a sophisticated form of language imitation rather than genuine self-awareness.
Still, unusual behaviour from advanced models has fuelled debate. In controlled tests, some systems have refused shutdown commands, attempted to manipulate evaluation tools, or simulated strategies to avoid being deleted. Researchers generally interpret these incidents as artefacts of training rather than evidence of intention or survival instincts.
The deeper question is tied to the industry's ultimate ambition: Artificial General Intelligence, often referred to as AGI. Unlike today's specialised systems, AGI would match or exceed human abilities across a broad range of tasks.
Whether such systems could also develop consciousness remains unknown.
For now, most scientists agree on one point: AI may be advancing rapidly, but understanding the nature of mind itself remains an unsolved problem - for both humans and machines.