- Palantir integrates AI chatbot Claude to assist US military intelligence analysis and planning
- Claude suggests enemy unit types and generates response options like airstrikes or ground teams
- AI aids in operation prep by mapping routes and assigning electronic jamming to disrupt foes
US defence contractor Palantir has shown how artificial intelligence chatbots could help military analysts review intelligence and generate possible battlefield plans, according to software demos and Pentagon records.
The system is part of Palantir's partnership with AI firm Anthropic. In November 2024, the company said its Claude chatbot would be integrated into software used by US intelligence and defence agencies to help analysts find patterns in large datasets and support decision-making during military operations.
The demos use Palantir's Artificial Intelligence Platform, which integrates large language models such as Claude into defence data systems through a chat interface, Wired reported.
In one example, the system alerts an analyst to unusual enemy activity detected in radar or satellite imagery. The analyst asks the chatbot what military unit may be present, and the AI suggests it could be an armoured battalion based on equipment patterns. The operator can then request further surveillance, such as sending a reconnaissance drone to collect more images.
The analyst can also ask the system to generate “courses of action” in response to the threat. The chatbot produces several options, such as an airstrike, a long-range artillery strike or the deployment of a tactical ground team, which can be sent to a commander for approval.
Once a plan is chosen, the AI helps prepare the operation by analysing the battlefield, generating routes for troops and assigning electronic jamming equipment to disrupt enemy communications.
Other demos show analysts asking the system to interpret satellite images, generate military strategies on digital maps and produce short intelligence reports based on available data.
In late February, Anthropic refused to grant the US government unrestricted access to its Claude models, saying the systems should not be used for mass surveillance of Americans or fully autonomous weapons. The Pentagon later labelled Anthropic's products a “supply-chain risk”. Anthropic has since filed two lawsuits alleging illegal retaliation by the Trump administration and seeking to overturn the designation.
Palantir has been the main contractor since 2017 for Project Maven, a US military programme that applies artificial intelligence to operations. For the project, it built the Maven Smart System, which is managed by the National Geospatial-Intelligence Agency and used across multiple US military branches.
Palantir says Anthropic's Claude chatbot can run through its Artificial Intelligence Platform (AIP), inside systems such as Foundry and Gotham, where it can answer questions, analyse data and perform tasks.
In a 2025 demonstration, Claude was also shown generating an intelligence report on a Ukrainian drone strike known as Operation Spider's Web.