Minutes before the airstrike, the only thing that had changed was a number on a screen. A young Gazan man was sleeping in a crowded apartment with his parents and younger siblings. Across the border, an Israeli intelligence officer sat in a darkened command center, staring at a list of AI-ranked targets. The system known as "Lavender" had flagged the man as a "likely militant" and spat out a coordinate. The officer, under pressure and staring at a long list of similarly ranked targets, hit approve in seconds. Minutes later, the building collapsed. The man, his parents, and the children did not survive. On paper, the AI had simply "cleared" another target. In the street, there was a crater and a family erased by a decision made in moments and never revisited.
When news broke of the missile strike that obliterated the Shajareh Tayyebeh girls' elementary school in Minab, southern Iran, killing 175 people - over 100 of them schoolchildren - fingers immediately pointed to AI as the villain. Emerging reports now blame outdated human intelligence for the disaster. But the Minab tragedy is a perfect window into the slippery, terrifying world of AI in warfare we have just stumbled into: a place where algorithms promise precision, but human error - or worse - still turns schools into craters.
This is agentic AI in war: not a distant sci-fi experiment, but the quiet hand that helps pick a street, a face, a home, and then hands that choice to a human who has just enough time to say yes. The technology is no longer just a lab curiosity. It is now embedded in the very machinery that decides who lives, who dies, and who disappears from the records as if they were never there.
Wars Find New Forms
Wars never really end. They just find new forms and new frontiers. Today, the battlefield is not just trenches, tanks, and infantry; it is also server racks and software. Artificial intelligence has slipped into the ranks of modern warfare, not as a side project, but as a quiet co-pilot in the chain of killing. In the war with Iran, we are seeing AI used at scale for the first time, a brutal, systematized version of the battlefield automation that first flickered across Ukraine's skies.
Agentic AI isn't creeping onto the battlefield; it's kicking down the door to the command center, quietly taking over how modern militaries see, decide, and kill. It promises something chillingly simple: let software do what humans do too slowly, watch every camera, scrape every feed, sift every intercept, and then lean in to whisper to power: "Here is the target. Here is the timing. Here is what happens next."
For decades, intelligence work was defined by scarcity: too few analysts, too little time, too many dots to connect. Now the problem is abundance. Armies drown in satellite imagery, drone video, social media chatter, and sensor data. Agentic AI turns that flood into something like a weaponized recommendation engine, ranking threats instead of Netflix shows and turning "collect - analyze - warn" from a bureaucratic cycle measured in weeks into an automated loop measured in minutes.
Maven, Palantir, And The Privatized Kill Chain
Nowhere is that transformation more visible than in the recent confrontation with Iran. According to a leaked Pentagon memo and subsequent reporting, the Maven Smart System, a command-and-control AI platform built by Palantir, has effectively become the US military's central brain for target selection. It ingests surveillance feeds, battlefield reports, and classified intelligence, then evaluates that information to pinpoint targets and generate prioritized strike lists. US officials openly credit Maven with enabling a wave of precision strikes on Iranian assets in a matter of days, not months.
Crucially, Maven is no longer a quirky pilot project run out of a Pentagon lab. The Defense Department is moving to designate it a "program of record," locking Palantir's system into the long-term force structure and guaranteeing sustained funding.
Gaza And Iran: AI's Human Toll
The same logic plays out in Gaza, where the human cost is far more visible. Injuries, grief, and rubble do not show up on a screen, but they are the real outcome of the "efficiency" Israel claims for AI-assisted systems like "the Gospel" and "Lavender." These tools analyzed data on most of Gaza's 2.3 million residents, assigned them probabilities, and at one point marked around 37,000 people as potential militants.
Iran is no innocent bystander. Tehran's operators use the same AI logic - automate, personalize, and scale - to generate fake personas, multilingual disinformation, and synthetic media aimed at shaping narratives in Israel, Europe, and North America. If Palantir's Maven is the flagship of kinetic AI warfare, Iran's campaigns prove the same machinery works just as well when the targets are minds and newsfeeds.
Ukraine's Live-Fire AI Lab
Then there is Ukraine, where AI is less a prestige project than a survival mechanism. Ukrainian officials say their forces now receive over 50,000 drone video feeds from the front every month, an impossible volume for human teams to process without help. AI systems classify vehicles, trenches, and movements, rapidly mapping targets for artillery or loitering munitions. Ukrainian and Russian units alike are fielding drones that can lock onto the image of a target and continue the attack even if the communication link is jammed, effectively flying the final run autonomously.
Ukraine has even begun opening its battlefield datasets to allied AI labs so they can train better models on real war footage, explicitly betting that whoever automates faster will win not just this war but the next one too. The result is a kind of live-fire research program in which the front line doubles as a training set.
Who Governs The Machines?
Across Iran, Gaza, and Ukraine, agentic AI has stopped being war's sidekick. It is the nervous system now - picking signals, faces, buildings for the next missile. Palantir's Maven is Washington's new targeting OS. Israel's Lavender normalizes algorithms sorting militants from civilians. Ukraine's planners admit the AI arms race will be won by whoever masters swarm logic first.
We chant "humans stay in the loop" like a prayer. Reality? The loop runs on code lawmakers can't see and commanders barely grasp. The fight isn't whether AI will rule warfare - it already does. It's whether we draw hard lines on what stays human before Silicon Valley contractors write the defaults in secret code, tested on flesh and blood under falling bombs.
(Author: Aadil Brar is a geopolitics and defense expert)
Disclaimer: These are the personal opinions of the author