- David Dalrymple warns rapid AI advances pose safety risks the world may not be ready for
- He highlights a gap between public sector and AI firms on understanding AI breakthroughs
- Dalrymple says most economically valuable tasks could be done by machines better and more cheaply within five years
David Dalrymple, a programme director and AI safety expert at the UK government's Advanced Research and Invention Agency (ARIA), has warned that the world "may not have time" to prepare for the safety risks posed by rapidly advancing AI systems. He told The Guardian that people should be concerned about the technology's growing capabilities.
"I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better. We will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society, and planet," he said.
Dalrymple also highlighted a gap in understanding between the public sector and AI companies about potential breakthroughs in AI. Developments are moving so fast, he cautioned, that there may not be time to get ahead of them from a safety perspective. "And it's not science fiction to project that within five years most economically valuable tasks will be performed by machines at a higher level of quality and lower cost than by humans," he added.
He also cautioned that governments should not take the reliability of advanced AI systems for granted. ARIA, a publicly funded agency that directs research funding while operating independently of government, is focused on safeguarding the use of AI in critical sectors such as energy infrastructure. Dalrymple, who is working on these safety systems, noted that the science needed to ensure full reliability may not emerge quickly enough, given economic pressures.
"So the next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides," he said.
The AI expert warned that when technological advancement outpaces safety measures, it can lead to serious risks for both security and the economy. He called for more technical research to better understand and manage the behaviour of advanced AI systems.
"Progress can be framed as destabilising and it could actually be good, which is what a lot of people at the frontier are hoping. I am working to try to make things go better but it's very high risk and human civilisation is on the whole sleepwalking into this transition," he said.
The UK's AI Security Institute (AISI) has reported that AI capabilities are advancing rapidly, with performance doubling every eight months in some areas. Advanced models can now complete apprentice-level tasks 50% of the time, and some systems can autonomously complete tasks that would take a human expert over an hour. AISI tested models for self-replication, a key safety concern, and found that two cutting-edge models achieved success rates above 60%. However, AISI stresses that a worst-case scenario is unlikely under everyday conditions.