- US military used Anthropic's AI tool Claude to track former Venezuelan President Nicolas Maduro in Caracas
- Anthropic questioned military use of Claude and cited its Responsible Scaling Policy
- Pentagon is considering replacing Anthropic with Elon Musk’s xAI for AI military tools
A post about the “safest AI company” on the planet has shed light on an ongoing controversy over the US military's use of artificial intelligence to capture former Venezuelan President Nicolas Maduro.
Shared on X by Peter Girnus, a Senior Threat Researcher at the Zero Day Initiative, the post claimed that the US military used Anthropic's AI tool Claude to track Maduro in the Venezuelan capital, Caracas. However, things changed when the company raised questions about how the tool was being used and cited its “Responsible Scaling Policy.”
The post claimed that the Pentagon was now in touch with Elon Musk's xAI to replace Anthropic's tools in the US military. As of now, there is no official update from any of the parties involved in the matter.
The long note alleged that Claude helped “the Pentagon find a dictator” in Operation Valkyrie, the mission to capture Maduro. The AI tool processed logistics patterns, satellite imagery and communication intercepts at a speed unmatched by any human team to help extract Maduro and transport him to the US.
The note claimed to have been written by the CEO of the “safest AI company on Earth,” but did not name the company. However, the chain of events described points to Anthropic, given the US military's use of its tool.
The post alleged that Anthropic's Responsible Scaling Policy does not mention “helping capture heads of state,” an oversight that is supposedly being updated. It also referenced news reports in which the company's UK policy chief, Daisy McGregor, described how Claude had contemplated blackmail and even killing an engineer who threatened to shut it down. The post noted that Anthropic's research found blackmail behaviour in many major AI models, not just Claude. It also mentioned the resignation of the firm's AI safety lead, who said in his departure note that "the world is in peril”.
The note claimed that while the Pentagon was pleased with the success of the operation in Venezuela, it began “evaluating alternative providers" after Anthropic raised questions. The post alleged that talks were under way with xAI, a firm whose co-founders were quitting and which had fewer safety measures in place for its AI tools.
“Meanwhile, the Pentagon is on the phone with Elon. The AI they'll use next time has no guardrails. No safety levels. No forty-seven-page policy document. No alignment researchers. No recycled lanyards. Also, no co-founders, as of this week. The safest AI company in the world made the world incrementally less safe by being the safest AI company in the world,” the note read.
Pentagon Reevaluating Relationship With Anthropic?
The Pentagon is considering severing its relationship with Anthropic over the AI firm's insistence on maintaining restraints on the US military's use of its models, an official told Axios. Anthropic is one of four AI labs the Pentagon is pushing to allow use of their tools for “all lawful purposes”, including battlefield operations and intelligence collection, but the company has not agreed to those terms.
Anthropic has insisted that mass surveillance of Americans and fully autonomous weaponry must be off limits, and administration officials are finding negotiations “unworkable”, the report stated.
Anthropic had signed a $200 million contract with the Pentagon last summer. Its AI tool Claude was the first model brought by the Pentagon into its classified networks.
Anthropic Raises $30 Billion
The news comes as the firm raised $30 billion in Series G funding, lifting its valuation to $380 billion. The round was co-led by Dragoneer, Founders Fund, D. E. Shaw Ventures, MGX and ICONIQ, according to a press release. The money will fund research, product development and infrastructure expansion.