by Rina Chandran:

The U.S. military is using the most advanced AI it has ever used in warfare, with Anthropic’s Claude AI reported to be assessing intelligence, identifying targets, and simulating battle scenarios, even as the Pentagon said it would terminate its contract with the company over a disagreement about its use. […]

The biggest role that AI now has in U.S. military operations in Iran, as well as Venezuela, is in decision-support systems, or AI-powered targeting systems, Feldstein said. AI can process reams of surveillance information, satellite imagery, and other intelligence, and provide insights for potential strikes. The AI systems offer speed, scale, and cost-efficiency, and “are a game-changer,” he said. […]

AI has long been used to analyze satellite imagery and guide missile-defense systems, but the use of chatbots such as Claude in decision-support systems is new. There is no clarity yet on how accurate these systems are and how they make decisions. In a recent study, AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95% of cases. Lavender, an AI-powered database used by Israel to analyze surveillance data and identify potential targets in Gaza, was wrong at least 10% of the time, resulting in thousands of civilian casualties.

“Murderers without malice” was an understatement.