Anthropic is suing the Trump administration for unconstitutional retaliation after being labeled a 'supply chain risk' for refusing to let its AI be used in surveillance and weapons systems. The case has ignited a debate over AI safety and the limits of government power, fueling calls for decentralized AI development.
Anthropic filed two lawsuits in federal court on Monday, alleging that the Trump administration's decision to list it as a 'supply chain risk' constitutes unconstitutional retaliation. The designation followed the company's refusal to allow the Department of Defense to use its Claude models in fully autonomous weapons systems or for large-scale surveillance of American citizens.
According to the complaint, the company drew two clear red lines in its negotiations with the Department of Defense: Claude could not be used for surveillance targeting American citizens, and it could not be involved in fully autonomous weapons decisions. When negotiations broke down, the Department formally designated Anthropic a supply chain risk on March 4, 2026, under 10 U.S.C. § 3252. The President subsequently ordered all federal agencies to stop using the company's technology and required Claude to be removed entirely from classified military systems within six months.
The decision has drawn considerable controversy, not least because Claude remains a core AI tool for U.S. Central Command in the Middle East, where it has helped screen more than a thousand operational targets and is deeply embedded in ongoing military operations. Critics see this contradiction of banning a system while still depending on it as a serious confusion of policy logic.
Anthropic raises three principal legal claims in its complaint: first, that punishing a company through administrative means for expressing AI safety positions in the public interest violates the First Amendment's guarantee of free speech; second, that the President's unilateral order directing federal agencies to stop using a private company's technology exceeds the executive power granted by the Constitution and lacks congressional authorization; and third, that the company was denied due process. On that last point, CEO Dario Amodei noted that the company first learned of the decision from posts on social media rather than through any formal official notification.
Senator Kirsten Gillibrand, a member of the Senate Armed Services and Intelligence Committees, publicly criticized the decision as an 'abuse of technology control tools intended to counter foreign adversaries,' emphasizing that Anthropic is the first American company to receive the label. Security analysts broadly agree that such designations have traditionally been reserved for foreign suppliers suspected of planting backdoors, and that applying them to domestic technology companies could undermine the credibility of the U.S. technology ecosystem.
Although the executive order's short-term financial impact on Anthropic is limited, the reputational cost is substantial. The episode has further fueled calls for decentralized AI development: if critical AI systems are controlled by a single company, then even a well-intentioned one can be pressured politically into abandoning its safety principles. Blockchain-based distributed AI protocols, by contrast, have no central control node and are therefore harder to coerce through any single party, which proponents see as an important direction for building trustworthy AI infrastructure.
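To make the single-point-of-coercion argument concrete, here is a minimal sketch, purely illustrative and not any real protocol (the operator names and quorum rule are assumptions), of governance in which a safety-relevant policy change requires approval from a quorum of independent operators, so that no one coerced party can force it through:

```python
# Hypothetical sketch of quorum-based policy governance: a safety-relevant
# change takes effect only if a quorum of independent operators approves it,
# so coercing any single operator is not enough to force the change.
from dataclasses import dataclass, field

@dataclass
class PolicyChange:
    operators: set          # independent node operators in the network
    quorum: int             # distinct approvals required for the change to pass
    approvals: set = field(default_factory=set)

    def approve(self, operator: str) -> None:
        if operator not in self.operators:
            raise ValueError(f"unknown operator: {operator}")
        self.approvals.add(operator)

    def passes(self) -> bool:
        # Only a quorum of distinct operators can enact the change;
        # a single coerced party cannot reach the threshold alone.
        return len(self.approvals) >= self.quorum

# A single coerced operator cannot disable a safety rule by itself.
change = PolicyChange(operators={"op1", "op2", "op3", "op4", "op5"}, quorum=4)
change.approve("op1")
print(change.passes())  # False: only 1 of the 4 required approvals
```

Under this toy model, political pressure on one operator changes nothing; an adversary would have to coerce a majority of independent parties, which is the property the article attributes to protocols with no central control node.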