Dario Amodei said Thursday that Anthropic plans to challenge the Department of Defense's decision to label the AI firm a supply-chain risk in court, a designation he has called "legally unsound."
The statement comes just hours after the DOD formally designated Anthropic a supply-chain risk following a weeks-long dispute over how much control the military should have over AI systems. A supply-chain risk designation can bar a company from working with the Pentagon and its contractors. Amodei drew a firm line that Anthropic's AI will not be used for mass surveillance of Americans or for fully autonomous weapons, but the Pentagon believed it should have unrestricted access for "all lawful purposes."
In his statement, Amodei said the vast majority of Anthropic's customers are unaffected by the supply-chain risk designation.
"With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he said.
As a preview of what Anthropic will likely argue in court, Amodei said the Department's letter labeling the firm a supply-chain risk is narrow in scope.
"It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the purpose of protecting the supply chain," Amodei said. "Even for Department of War contractors, the supply chain risk designation does not (and cannot) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts."
Amodei reiterated that Anthropic had been having productive conversations with the DOD over the past several days, conversations that some suspect were derailed when an internal memo he sent to staff was leaked. In it, Amodei characterized rival OpenAI's dealings with the Department of Defense as "security theater."
OpenAI has signed a deal to work with the DOD in Anthropic's place, a move that has sparked backlash among OpenAI employees.
Amodei apologized for the leak in his Thursday statement, saying the company did not intentionally share the memo or direct anyone else to do so. "It is not in our interest to escalate the situation," he said.
Amodei said the memo was written within "a few hours" of a series of announcements, including a presidential Truth Social post saying Anthropic would be removed from federal systems, then Defense Secretary Pete Hegseth's supply-chain risk designation, and finally the Pentagon's deal announcement with OpenAI. He apologized for the tone, calling it "a difficult day for the company," and said the memo did not reflect his "careful or considered views." Written six days ago, he added, it is now an "out-of-date assessment."
He finished by saying Anthropic's top priority is ensuring that American soldiers and national security experts maintain access to important tools in the midst of ongoing major combat operations. Anthropic is currently supporting some of the U.S.'s operations in Iran, and Amodei said the company would continue to provide its models to the DOD at "nominal cost" for "as long as necessary to make that transition."
Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes it harder to contest because it limits the usual ways companies can challenge government procurement decisions and gives the Pentagon broad discretion on national security matters.
Or as Dean Ball, a former Trump-era White House adviser on AI who has spoken out against Hegseth's treatment of Anthropic, put it: "Courts are quite reluctant to second-guess the government on what is and isn't a national security issue … There's a very high bar that one has to clear in order to do that. But it's not impossible."


