OpenAI's head of robotics and hardware, Caitlin Kalinowski, has resigned over ethical concerns about the company's collaboration with the U.S. Department of Defense. According to multiple media reports, her departure stems primarily from opposition to two applications of the technology: the surveillance of American citizens without judicial authorization, and the deployment of lethal autonomous weapon systems without human intervention.

Kalinowski pointed out that the collaboration was announced publicly before its security mechanisms had been clearly defined, exposing flaws in the company's governance processes. In response, OpenAI CEO Sam Altman acknowledged that the project had been pushed forward too hastily and said the company has since negotiated several technical and legal safeguards, including an explicit prohibition on domestic surveillance and on the development of autonomous weapons.

Her resignation has sparked widespread public debate about the boundaries of AI companies' collaborations with the military. Some members of Congress and civic groups have called for a review of the Department of Defense's AI procurement projects. Senator Chris Coons (D-DE) stressed that forcing AI companies to participate in surveillance or weapons development is a "chilling concept that exceeds the authority of the defense department," one that could jeopardize civil liberties and the ethical baseline of the technology industry.

Against this backdrop, OpenAI's response strategy contrasts with that of its competitor Anthropic. Anthropic tends to head off controversy through strict contractual terms and clear usage restrictions, while OpenAI combines technology-stack layering, controlled deployment, and legal constraints in an attempt to balance commercial collaboration with ethical responsibility. External attention has now shifted to whether these safeguards are sufficiently transparent, auditable, and enforceable in practice.

The incident not only strains internal trust within OpenAI but also intensifies institutional reflection on where the boundaries of AI militarization should lie. Going forward, the ethical decision-making of AI companies may no longer be a purely internal matter but a core issue scrutinized by regulators and the public.
