OpenAI’s agreement with the Department of Defense is drawing scrutiny — and even CEO Sam Altman admits the rollout wasn’t smooth.
Altman said the deal was “definitely rushed” and acknowledged that “the optics don’t look good.”
The backdrop matters. After negotiations between Anthropic and the Pentagon collapsed, President Donald Trump directed federal agencies to phase out Anthropic’s technology within six months. Defense Secretary Pete Hegseth labeled the AI company a supply-chain risk.
Soon after, OpenAI announced it had struck its own agreement to deploy models in classified environments.
That raised immediate questions. Anthropic had publicly drawn firm red lines around the use of its AI in fully autonomous weapons and mass domestic surveillance, and Altman has said OpenAI shares the same limits. So critics asked: if the guardrails are similar, why was OpenAI able to close a deal when Anthropic couldn't? And are the safeguards really as strong as advertised?
As executives defended the move online, OpenAI published a blog post laying out its position.
The company said its models cannot be used for three specific categories: mass domestic surveillance, autonomous weapons systems, and high-stakes automated decisions such as social credit scoring.
OpenAI also argued that, unlike some competitors that rely primarily on usage policies in national security settings, its approach is multi-layered. According to the post, the company retains control over its safety stack, deploys models via cloud infrastructure, keeps cleared personnel involved, and includes contractual protections — all on top of existing U.S. legal safeguards.
OpenAI added that it does not know why Anthropic failed to reach a similar agreement and said it hopes more AI labs will consider such arrangements.
Not everyone is convinced.
Techdirt’s Mike Masnick argued that the contract language “absolutely does allow for domestic surveillance.” He pointed to references to compliance with Executive Order 12333, which he described as a framework the NSA has used to collect communications outside the U.S., even when they involve Americans.
In response, OpenAI’s head of national security partnerships, Katrina Mulligan, pushed back on LinkedIn. She said much of the criticism assumes that a single contract provision is the only barrier preventing mass surveillance or autonomous weapons.
“That’s not how any of this works,” Mulligan wrote. She emphasized that deployment architecture matters more than contract wording. By limiting access to a cloud-based API, she said, OpenAI can prevent its models from being directly integrated into weapons systems, sensors, or other operational hardware.
Altman also addressed the backlash on X. He acknowledged that the speed of the deal fueled criticism, and said the backlash may even have contributed to Anthropic's Claude briefly overtaking ChatGPT in Apple's App Store rankings.
So why move quickly?
Altman said OpenAI wanted to de-escalate tensions between the Defense Department and the broader AI industry and believed the terms on offer were worthwhile.
“If we are right and this leads to de-escalation,” he wrote, “we’ll look like geniuses — a company that took on pain to help the industry.” If not, he conceded, OpenAI will continue to be seen as rushed and uncareful.
For now, the debate over AI and national security isn’t cooling down — it’s just getting started.