- Anthropic CEO Dario Amodei does not want Claude used by the Pentagon for mass domestic surveillance and autonomous weapons
- A statement has laid bare Anthropic’s reasons for retaining Claude’s safety rails
- Pete Hegseth gave Anthropic until Friday to provide the DoD with full access
Anthropic CEO Dario Amodei has released a statement concerning the company’s ongoing disagreement with the US Department of Defense.
Amodei declared Anthropic “cannot in good conscience accede” to the DoD’s request to provide full access to its AI models, over fears they could be used for ‘mass domestic surveillance’ and ‘fully autonomous weapons’.
US Defense Secretary Pete Hegseth has threatened to label Anthropic as a “supply chain risk” and invoke the Defense Production Act to force the company to comply.
Unprecedented threats against Anthropic
In his statement, Amodei said Anthropic has historically had a very good relationship with the US government, noting it was the first AI company to deploy its models within US government networks and the National Laboratories, and the first to deploy models for national security.
Amodei also noted the company has complied with US regulations on the use and sale of AI models to China, to the extent that it chose to “forgo several hundred million dollars in revenue” by preventing the use of Claude by the Chinese Communist Party.
“Anthropic understands that the Department of War, not private companies, makes military decisions,” Amodei continued. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
But Anthropic's hesitation to provide the DoD with full access to Claude centers on the potential misuse of the model for two nefarious purposes.
Amodei says regulations surrounding AI have not caught up with the capabilities of models such as Claude, which would allow the US government to deploy Claude as a tool for mass domestic surveillance.
Theoretically, the government could purchase highly detailed records and use AI models to organize them into a highly accurate reflection of US citizens' lives at a scale never seen before.
As for AI use in weapons systems, Amodei says they “may prove critical for our national defense,” but he argues that current AI models are “simply not reliable enough to power fully autonomous weapons.” If an AI model in charge of an autonomous weapon system were to suffer a hallucination, the responsibility would likely fall on the model developer.
Amodei also addresses the threats made by Hegseth, stating that they “are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
The statement concludes that Anthropic’s “strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.”
“Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”