Details
On February 27, 2026, OpenAI and the Department of War reached a formal agreement to deploy advanced AI systems in classified environments. OpenAI's models are delivered via cloud-only infrastructure using OpenAI's safety stack; the company is not providing "guardrails off" or non-safety-trained models, and edge deployment is excluded. Cleared, forward-deployed OpenAI engineers and safety researchers remain involved in oversight. Permitted uses, as confirmed in a separate OpenAI announcement about the GenAI.mil platform, include summarizing and analyzing policy documents, drafting procurement materials, generating internal reports, and supporting research, mission planning, and administrative workflows. The agreement explicitly bars use of the AI for domestic surveillance of U.S. persons (including through commercially acquired data), for directing autonomous weapons, and for high-stakes automated decisions. The deal followed the collapse of a similar agreement with Anthropic; on the same day, the Department designated Anthropic a supply-chain risk to national security.