Cloud Giants Clarify Claude Access: Commercial Use Unaffected by Department of Defense Designation

Microsoft, Google, and AWS clarify that the Claude model remains available for non-Department of Defense business, addressing the Pentagon's supply chain risk designation and highlighting the boundary between AI ethics and commercial applications.

In a significant development for the enterprise AI sector, Microsoft, Google, and Amazon Web Services (AWS) jointly issued a statement clarifying that access to Anthropic's Claude large model on their platforms will remain unchanged, provided the usage does not involve direct contracts with the U.S. Department of Defense. The statement aims to address recent concerns from thousands of commercial clients and developers following the Pentagon's designation of Anthropic as a supply chain risk.


Earlier, the U.S. Department of Defense took the unusual step of placing the AI safety-focused startup on its "supply chain risk" list, a label typically reserved for foreign adversaries. The move caused market turbulence and led clients to question whether Claude could still be accessed through platforms such as Azure, Google Cloud, and AWS. In response, the three tech giants quickly coordinated their statements.

Microsoft was the first to issue an official statement, with its legal team confirming: "Upon examination, this designation does not affect our ability to provide Claude and related products to non-defense clients, including services through platforms like M365, GitHub, and AI Foundry." This means Microsoft can continue to advance its collaborative projects with Anthropic in civilian sectors such as education, healthcare, and finance.

Google followed suit, emphasizing: "The Department of Defense's decision does not prohibit us from collaborating with Anthropic on non-defense projects; the Claude model will continue to be available on Google Cloud for compliant clients." AWS also conveyed the same position to its enterprise clients, confirming that Claude can be used for non-military purposes such as enterprise automation, content generation, and customer service.

The core of this controversy stems from Anthropic's commitment to AI ethics. According to insiders, the Pentagon had attempted to acquire Claude's technology for building large-scale surveillance systems and fully autonomous lethal weapon platforms, but this was firmly rejected by Anthropic's management. The company believes such applications violate its core principle of "safety first." In response, the Department of Defense officially added it to the risk list last Thursday.

Although the designation directly restricts military procurement, the joint statement from the three cloud providers successfully delineates a clear boundary between commercial and military uses. The outcome not only stabilizes market confidence but also highlights the growing role of private tech companies as guardians of AI ethical standards.
