It’s official: The Pentagon has labeled Anthropic a supply-chain risk
The Department of Defense (DOD) has officially notified Anthropic leadership that the company and its products have been designated a supply-chain risk, Bloomberg reports, citing a senior department official. The designation comes after weeks of conflict between the AI lab and the DOD. Anthropic CEO Dario Amodei has refused to allow the military to use its AI systems for mass surveillance of Americans or to power fully autonomous weapons with no humans assisting in the targeting or firing decisions.
The Department has argued that its use of AI should not be limited by a private contractor.
Supply-chain-risk designations are typically reserved for foreign adversaries.
The label requires any company or agency that works with the Pentagon to certify that it doesn’t use Anthropic’s models.
The Pentagon’s finding threatens to disrupt both the company and the department’s own operations.
Anthropic has been the only frontier AI lab with classified-ready systems. The military is currently relying on Claude in its Iran campaign, where American forces are using AI tools to rapidly manage operational data.
Dean Ball, a former Trump White House AI adviser, has referred to the designation as a “death rattle” of the American republic, arguing government has abandoned strategic clarity and respect in favor of “thuggish” tribalism that treats domestic innovators worse than foreign adversaries.
They have also urged their leaders to stand together and continue refusing the DOD’s demands to use their AI models for domestic mass surveillance and “autonomously killing people without human oversight.” TechCrunch has reached out to Anthropic for comment. In the midst of the dispute, OpenAI forged its own deal with the Department, allowing the military to use its AI systems for “all lawful purposes.”