Artificial intelligence company Anthropic has filed an unprecedented lawsuit against the US government after it was officially labeled a “supply chain risk,” escalating a bitter dispute over the military use of advanced AI technology.
The lawsuit, filed in federal court in California, challenges a policy issued by Donald Trump’s administration that effectively banned US government agencies from using Anthropic’s AI systems. The company argues the move was politically motivated retaliation after it refused to lift restrictions on the US military’s use of its technology.
Anthropic’s lawsuit claims the decision was “unprecedented and unlawful” and violated constitutional protections regarding free speech and due process.
“The Constitution does not allow the government to use its enormous power to punish a company for its protected speech,” the company said in its complaint. “No federal law allows the measures taken here.”
The conflict stems from a disagreement between Anthropic boss Dario Amodei and US defense officials, including Pete Hegseth, over how the company’s artificial intelligence tools could be used by the Pentagon.
Anthropic has long maintained strict contractual restrictions on the use of its technology, including bans on using its AI models for “lethal autonomous warfare” and for domestic mass surveillance of American citizens.
According to the lawsuit, defense officials demanded that the company remove these restrictions from its government contracts. Anthropic declined, arguing that such safeguards are essential to ensure responsible use of powerful AI systems.
The company said negotiations with the Defense Department were initially progressing and both sides had worked on revised language that would allow continued cooperation while maintaining ethical boundaries.
However, those talks reportedly collapsed after the White House intervened.
The Pentagon then labeled Anthropic a “supply chain risk” — a classification typically applied to companies deemed unsafe or unreliable partners for government systems.
The designation effectively blocks US government agencies and contractors from using Anthropic’s software tools.
The designation was accompanied by public criticism of Anthropic from the Trump administration, with White House officials accusing the company of trying to dictate military policy.
Liz Huston, a White House spokeswoman, told reporters that Anthropic is “a radical left-wing, woke company” that wants to impose its own terms on national defense operations.
“Under the Trump administration, our military will follow the Constitution of the United States – not the terms of service of some woke AI company,” Huston said.
Anthropic disputes this characterization, arguing that the restrictions are standard contractual provisions intended to prevent misuse of AI systems.
The legal challenge names a broad list of defendants, including President Trump’s executive office and senior administration officials such as Marco Rubio and Howard Lutnick.
The lawsuit also targets 16 federal agencies, including the Department of Defense, the Department of Homeland Security and the Department of Energy.
Anthropic claims that the policy banning its technology caused significant reputational and business damage.
The company said both current and future commercial contracts were in jeopardy, potentially putting “hundreds of millions of dollars” at risk in the near future.
It also argued that the decision had a broader chilling effect across the technology sector by discouraging companies from speaking publicly about the risks associated with advanced AI.
The case has already garnered support from across the tech industry.
Nearly 40 employees of rival companies, including Google and OpenAI, submitted a joint legal filing supporting Anthropic’s position, even though their employers compete with Anthropic in the fast-growing AI sector.
The signatories warned that the use of advanced AI systems without safeguards could pose serious risks, particularly when used for mass surveillance or autonomous weapons.
“As a group, we are diverse in our politics and philosophy,” the engineers wrote in the filing. “But we are united in our belief that today’s frontier AI systems pose risks when used to enable domestic mass surveillance or the operation of autonomous lethal weapon systems without human oversight.”
Anthropic’s flagship AI system, Claude, is widely used by technology companies and developers for coding, research, and enterprise software tasks.
Companies such as Microsoft, Amazon and Meta have confirmed that they will continue to use the technology in commercial applications, although not in projects involving US defense agencies.
Anthropic is not seeking financial compensation in the case. Instead, it is asking the court to declare the government’s policy unconstitutional and to order the removal of the “supply chain risk” designation.
Legal experts say the dispute could become a landmark case in determining how governments interact with AI developers.
Carl Tobias, a law professor at the University of Richmond, said the case could ultimately reach the US Supreme Court.
“Anthropic could very well win in federal court,” Tobias said. “But this government is not afraid to appeal. They will probably go to the Supreme Court.”
The outcome could have major implications for the fast-growing AI industry, especially as governments around the world increasingly rely on private technology companies to provide critical artificial intelligence systems for defense, intelligence and national security operations.
The lawsuit marks a rare moment in which a major technology company openly challenges government authority over the future use of artificial intelligence.