Anthropic CEO and co-founder Dario Amodei speaks onstage during the 2025 New York Times DealBook Summit at Jazz at Lincoln Center on December 3, 2025 in New York City.
Michael M. Santiago | Getty Images
A federal judge in San Francisco granted Anthropic’s request for a preliminary injunction in its lawsuit against the Trump administration.
Judge Rita Lin issued the ruling on Thursday, two days after lawyers for the artificial intelligence startup and the U.S. government appeared in court for a hearing. Anthropic sued the administration to try to reverse its blacklisting by the Pentagon and President Donald Trump’s directive banning federal agencies from using its Claude models.
Anthropic sought the injunction to pause those actions and prevent further economic and reputational harm as the case unfolds.
“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic unlawful First Amendment retaliation,” Judge Lin wrote in the order. A final verdict in the case could still be months away.
During Tuesday’s hearing, Lin pressed the government’s lawyers about why Anthropic was blacklisted. Her language in the order was even sharper.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote.
Anthropic’s suit followed a dramatic couple of weeks in Washington, D.C., between the Department of Defense and one of the most valuable private companies in the world.
In a post on X in late February, Defense Secretary Pete Hegseth declared Anthropic a so-called supply chain risk, meaning that use of the company’s technology purportedly threatens U.S. national security. The DOD formally notified Anthropic about the designation in a letter earlier this month.
Anthropic is the first American company to be publicly named a supply chain risk, as the designation has historically been reserved for foreign adversaries. The label requires Defense contractors, including Amazon, Microsoft, and Palantir, to certify that they don’t use Claude in their work with the military.
The Trump administration relied on two distinct designations – 10 U.S.C. § 3252 and 41 U.S.C. § 4713 – to justify the action, and they must be challenged in two separate courts. Because of that, Anthropic has filed another lawsuit seeking formal review of the Defense Department’s determination in the U.S. Court of Appeals in Washington.
Shortly before Hegseth declared Anthropic a supply chain risk, President Donald Trump wrote a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology. He said there would be a six-month phase-out period for agencies like the DOD.
“WE will decide the fate of our Nation — NOT some out-of-control, Radical Left AI company run by people who don’t know what the real World is all about,” Trump wrote.
The Trump administration’s actions shocked many officials in Washington who had come to admire and rely on Anthropic’s technology. The company was the first to deploy its models across the DOD’s classified networks, and it was championed for its ability to integrate with existing Defense contractors like Palantir.
Anthropic signed a $200 million contract with the Pentagon in July, but as the company began negotiating Claude’s deployment on the DOD’s GenAI.mil AI platform in September, talks stalled.
The DOD wanted Anthropic to grant the Pentagon unfettered access to its models across all lawful uses, while Anthropic wanted assurance that its technology wouldn’t be used for fully autonomous weapons or domestic mass surveillance.
The two failed to reach an agreement, and now the dispute will be settled in court.
“Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor,” Lin said during the hearing Tuesday. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.”
— CNBC’s Jeffrey Kopp contributed to this report.
WATCH: Judge says Pentagon actions appear aimed at crippling Anthropic


