Zeteo


The Pentagon Wants to Use AI to Kill People Without Checks. Is Anthropic Really Pushing Back?

Anthropic’s rupture with the Pentagon has sparked public backlash and soaring support. But an AI ethics expert warns against 'falling for the theatrics.'

Taylor Lorenz
Mar 03, 2026
Photo illustration by Samuel Boivin/NurPhoto via Getty Images

Over the weekend, the US military relied on Anthropic’s powerful AI model, Claude, to help support the launch of its brutal attack on Iran. The Pentagon used Claude despite the fact that just hours earlier, the Trump administration declared it would end use of Anthropic’s AI tools after a spat over the government’s desire to mass-surveil US citizens using its AI model.

The dispute between Anthropic and the US government has devolved into a chaotic split that has upended Silicon Valley, thrown the government into confusion, and kicked off a national debate about the ethics of AI in warfare.

This post is for paid subscribers

© 2026 Zeteo