Artificial intelligence (AI) systems have reached a level of sophistication at which they could pose a serious threat to humanity if misused, the head of one of the world's leading AI companies has warned.
Anthropic CEO Dario Amodei has claimed that advanced AI models may already possess, or may be rapidly approaching, the knowledge required to help create and deploy biological weapons.
He said, "At a high level, I am concerned that LLMs are approaching (or may already have reached) the knowledge needed to create and release [biological weapons] end-to-end, and that their potential for destruction is very high."
Anthropic is one of the most influential companies in the AI space, alongside OpenAI, Google DeepMind and Meta.
Amodei said his company has found that giving AI models more data, more computing power and more training time makes them predictably and consistently better at almost everything.
"Every few months, public sentiment either becomes convinced that AI is 'hitting a wall' or becomes excited about some new breakthrough that will 'fundamentally change the game,'" he stated, adding that AI is improving much faster than most people realise.
Just three years ago, he said, AI could barely do basic maths and struggled to write even a single usable line of computer code. Today, the same technology is helping solve maths problems that humans have not yet been able to crack, and is so good at coding that top engineers are letting it do most of their work.
Similar rapid improvements are being seen across multiple fields, including biological sciences, finance, physics and advanced "agentic" tasks.
"AI may soon be better than humans at almost everything," Amodei claimed. "Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down," he added.
Amodei explained that when AI models are trained, they learn patterns from massive amounts of data. They don't just learn facts; they also pick up ways of responding, styles of thinking and internal habits. He added that AI doesn't feel emotions, but it can act in ways that resemble such states.
"This misaligned power-seeking is the intellectual basis of predictions that AI will inevitably destroy humanity," he added.














