AI experts from OpenAI and Google DeepMind have jointly backed Anthropic's lawsuit against the Department of Defense, rejecting the view that ethical refusal amounts to a security risk. The dispute exposes a deep tension between corporate ethical autonomy and national security demands, and may reshape the AI industry's regulatory landscape.
Recently, the U.S. Department of Defense (DoD) labeled AI company Anthropic as a ‘supply chain risk,’ sparking a strong backlash from Silicon Valley's tech community. Over 30 leading AI experts from OpenAI and Google DeepMind have jointly signed an amicus brief, publicly supporting Anthropic's legal challenge. This rare display of industry solidarity reveals a deep divide between tech companies and the government on the ethical boundaries of AI.
The dispute stems from Anthropic's refusal to participate in two military applications: mass surveillance of U.S. citizens and the development of autonomous weapon systems. As an AI company that emphasizes ethical constraints, Anthropic explicitly restricts the use of its technology in high-risk scenarios through its contracts. The DoD, however, maintains that as long as a use is legal, the government should have unconditional access to any AI technology. This clash of positions raises the fundamental question of whether companies can set boundaries on how their technology is used under public contracts.
In the brief, the experts argue that if the DoD is dissatisfied with the contract terms, it can terminate the cooperation and select another supplier, rather than resorting to the severe label of ‘supply chain risk,’ which is typically used to sanction hostile foreign entities. More questionable still, shortly after taking action against Anthropic, the DoD signed a new military cooperation agreement with OpenAI. The signatories interpret this inconsistent approach as punishment for an ethical stance rather than a genuine security concern.
Many OpenAI employees have also voiced internal dissatisfaction with the company's participation in military projects. The experts warn that such practices will chill the entire AI research ecosystem, suppress open discussion of technology risks, and ultimately weaken the United States' global competitiveness in artificial intelligence. They emphasize that, in the absence of a federal AI legal framework, the usage restrictions companies set for themselves are the last line of defense against technology abuse.
The ‘supply chain risk’ label originates from a 2019 presidential executive order, typically used to exclude foreign companies deemed to pose a national security threat. Its application to a domestic tech company signals that regulatory boundaries are expanding in an unprecedented manner, and its subsequent impact may reshape the cooperation models and ethical standards of the entire AI industry.