The Pentagon Wants to Use AI to Kill People Without Checks. Is Anthropic Really Pushing Back?
Anthropic’s rupture with the Pentagon has sparked public backlash and soaring support. But an AI ethics expert warns against 'falling for the theatrics.'
Over the weekend, the US military relied on Anthropic’s powerful AI model, Claude, to support the launch of its brutal attack on Iran. The Pentagon used Claude even though, just hours earlier, the Trump administration had declared it would end its use of Anthropic’s AI tools after a spat over the government’s desire to mass-surveil US citizens with the model.
The dispute between Anthropic and the US government has devolved into a chaotic split that has upended Silicon Valley, thrown the government into confusion, and kicked off a national debate about the ethics of AI in warfare.