
Anthropic has asked a San Francisco federal court to pause the Pentagon's "supply chain risk" designation of its AI tool, Claude. The case, before Judge Rita Lin, centers on the company's blacklisting and a directive linked to Donald Trump that restricts federal agencies from using its artificial intelligence services.
According to reports, the judge remarked that the Pentagon’s move to blacklist Claude AI “looks like an attempt to cripple” the company. In response, Anthropic has requested an emergency order to temporarily halt both the risk label and the ban. The company has also filed a parallel case in a federal appeals court in Washington, DC.
Judge Lin clarified that while the Department of Defense is free to stop using Claude and choose another vendor, the core issue is whether the government’s actions violated the law. The court is expected to deliver its decision on Anthropic’s request in the coming days.
If the temporary relief is granted, Anthropic will be able to continue working with government agencies and contractors while the case proceeds. However, if the request is denied, the company could face significant financial losses and reputational damage.
Anthropic has argued that the Department of Defense’s actions amount to unfair retaliation, stating that it declined to support the use of its AI for autonomous weapons or large-scale surveillance. The company has also challenged the “supply chain risk” label as “unprecedented and unlawful,” claiming it is already impacting its business.