Microsoft recently disclosed that it has provided advanced artificial intelligence and cloud computing services to the Israeli military during the conflict in Gaza. According to the company, these services were used to support efforts to locate and rescue Israeli hostages.
In a blog post, Microsoft stated that it had provided the Israeli military with software, professional services, Azure cloud storage, and Azure AI services, including language translation. The company emphasised that it maintained significant oversight, approving some requests and denying others, and said it believed it had followed its principles to help save hostages' lives while honouring the privacy and rights of civilians in Gaza.
The company's statement comes after an investigation by The Associated Press revealed details about Microsoft's partnership with the Israeli Ministry of Defence. The investigation found that the Israeli military uses Azure to transcribe, translate, and process intelligence gathered through mass surveillance, which can be cross-checked with Israel's AI-enabled targeting systems.
Microsoft's involvement in the conflict has raised concerns among human rights groups, who worry that AI systems can be flawed and prone to errors, potentially leading to the deaths of innocent people.
In response to employee concerns and media reports, Microsoft launched an internal review and hired an external firm to conduct additional fact-finding. However, the company declined to provide further details about its involvement or answer specific questions about how its AI models were used by the Israeli military.
Microsoft stated that it had found no evidence that its Azure platform and AI technologies were used to target or harm people in Gaza, but conceded that it "does not have visibility into how customers use our software on their own servers or devices."
Experts have noted that Microsoft's statement is significant because it amounts to a commercial technology company setting terms of use for a government engaged in an active conflict. Emelia Probasco, a senior fellow at Georgetown University, said, "We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict."
Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft for taking a step toward transparency but raised questions about the details of its services and AI models being used by the Israeli military. "I'm glad there's a little bit of transparency here," said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. "But it is hard to square that with what's actually happening on the ground."
The conflict in Gaza has resulted in significant loss of life, with over 50,000 people killed, many of them women and children. Israel's use of intelligence to target militants and conduct hostage rescue operations has often put civilians in harm's way. Microsoft's involvement highlights a growing trend of tech companies selling AI products to militaries, raising concerns about the consequences of these technologies being deployed in conflict zones.