Anthropic Sues Trump Administration for Abusing ‘Supply Chain Risk’ Label

AI company Anthropic, after refusing to allow the military's misuse of its Claude system, was labeled a ‘supply chain risk’ by the Trump administration and has responded with a lawsuit accusing the administration of abusing its power. The case may prove a key turning point for AI regulation and its ethical boundaries in the United States.

Anthropic, the artificial intelligence company behind the AI assistant Claude, recently filed suit against the Trump administration, accusing it of abusing its administrative power by labeling the company a ‘supply chain risk’ in retaliation for its refusal to allow the military's excessive use of its AI technology.
In its complaint, Anthropic called the move “unprecedented and unlawful,” arguing that the Constitution bars the government from using public power to punish companies for exercising their lawful speech rights. The company stated plainly that Claude has never been tested for the uses the U.S. military demands, particularly scenarios involving lethal autonomous weapon systems, and that its safety and reliability in such settings cannot be guaranteed.
On March 3 of this year, Defense Secretary Pete Hegseth formally placed Anthropic on the “supply chain risk” list, barring any entity that does business with the U.S. Department of Defense from working with the company. It is the first time in U.S. history that a domestic technology company has received the designation, which had previously been reserved for companies tied to foreign adversaries.
