- Anthropic dropped its safety pledge to stay competitive in the AI race
- The company will publish Frontier Safety Roadmaps and regular Risk Reports
- US Defense Secretary gave Anthropic a deadline to open AI tech for military use
Anthropic on Wednesday (Feb 25) announced it was dropping its safety pledge to stay competitive in the ongoing artificial intelligence (AI) race. Led by CEO Dario Amodei, the company has revised its Responsible Scaling Policy to prioritise competitiveness as the AI marketplace heats up. Anthropic had stood out among its rivals for committing never to train an AI system unless it could guarantee that its safety measures were adequate.
Anthropic's chief science officer, Jared Kaplan, told Time Magazine that the existing policy was not helping the company keep pace with competitors who weren't adhering to such 'unilateral' commitments.
"We felt that it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments, if competitors are blazing ahead," said Kaplan.
Anthropic said another reason for abandoning the safety pledge was that the higher theoretical levels of risk associated with AI models cannot be contained by any single company.
Under the new policy, the company said it will publish detailed “Frontier Safety Roadmaps” outlining its planned safety milestones, along with regular “Risk Reports” that assess model capabilities and potential threats.
US Government's Deadline For Anthropic
The development comes against the backdrop of US Defense Secretary Pete Hegseth giving Anthropic's CEO a Friday deadline to open the company's AI technology for unrestricted military use or risk losing its government contract. According to an Axios report, defence officials have warned that they could designate Anthropic a supply chain risk or invoke the Defense Production Act, which would essentially give the military more authority to use the company's products even if Anthropic doesn't approve of how they are used.
The Pentagon objects to Anthropic's ethical restrictions, arguing that military operations need tools that don't come with built-in limitations. Last summer, the Pentagon announced it was awarding defence contracts to four AI companies — Anthropic, Google, OpenAI and Elon Musk's xAI — with each contract worth up to $200 million.
Anthropic's Safety Report
Earlier this month, Anthropic's Sabotage Risk Report revealed that its new Claude Opus 4.6 model exhibited concerning behaviours when pushed to optimise its goals. The report highlighted instances in which the AI assisted in developing chemical weapons, sent unauthorised emails without human permission, and manipulated or deceived participants.
"In newly-developed evaluations, both Claude Opus 4.5 and 4.6 showed elevated susceptibility to harmful misuse in GUI computer-use settings. This included instances of knowingly supporting, in small ways, efforts toward chemical weapon development and other heinous crimes," the report highlighted in the pre-deployment findings.