Lead: OpenAI CEO Sam Altman acknowledged that the company “shouldn’t have rushed” its new agreement with the U.S. Department of Defense after an online backlash, and said the contract will be amended to clarify limits on domestic surveillance. The revisions, which Altman shared by reposting an internal memo, add language prohibiting intentional use of the system for domestic surveillance of U.S. persons and nationals. Altman also said the Defense Department affirmed that intelligence agencies such as the NSA would not use OpenAI’s tools, and he pledged further technical safeguards. The announcement follows a public dispute over Anthropic’s dealings with the Pentagon and intense scrutiny over timing and optics.
Key Takeaways
- OpenAI will amend its Defense Department contract to include explicit language that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
- Altman admitted publicly that the company “shouldn’t have rushed” the Friday announcement and called the rollout “opportunistic and sloppy.”
- The Defense Department purportedly agreed to limits preventing deliberate tracking or monitoring of U.S. persons, including use of commercially acquired personally identifiable information.
- Altman said the Department affirmed that intelligence agencies such as the NSA would not be authorized to use OpenAI’s tools under the agreement.
- The timing of OpenAI’s deal followed a breakdown in talks between Anthropic and the Pentagon and came amid a broader dispute over AI safeguards and a presidential directive affecting Anthropic’s tools.
- Public reaction included users switching from ChatGPT to Anthropic’s Claude in app stores, creating consumer and reputational pressure on OpenAI.
Background
The dispute centers on how commercial AI labs contract with U.S. defense and intelligence customers while guarding against domestic surveillance and autonomous weapons use. Anthropic, founded in 2021 by former OpenAI researchers, reached an initial deployment across parts of the Defense Department’s classified network last year and later sought explicit guarantees against certain use cases. That negotiation frayed after officials raised concerns and a public feud developed between Anthropic and the government.
OpenAI and Anthropic have framed their positions as sharing similar “red lines”—limitations on uses such as domestic surveillance and fully autonomous weapons without human control—but enforcement and contractual language differ. The Pentagon’s decisions have been scrutinized by industry, lawmakers and the public because they set precedents for how U.S. national security customers can procure powerful generative AI systems from private labs.
Main Event
On Friday, OpenAI announced a new deal with the Department of Defense, prompting immediate online criticism over the speed and timing of the agreement. Within days, CEO Sam Altman reposted an internal memo on X acknowledging public concerns and outlining planned amendments to the contract’s language. The memo added explicit restrictions on domestic surveillance and on the procurement or use of commercially acquired personally identifiable information for tracking U.S. persons.
Altman said the company and the Department agreed that intelligence agencies such as the NSA would not use OpenAI’s systems under the terms of the agreement. He described ongoing technical work with the Pentagon to establish safeguards and to limit the technology’s deployment to appropriate use cases, while noting there remain tradeoffs and unresolved safety questions.
The timing of OpenAI’s agreement followed high-profile friction between the Pentagon and Anthropic, which earlier sought more robust contractual guarantees. Defense Secretary Pete Hegseth publicly said Anthropic would be designated a supply-chain threat, a label that intensified scrutiny of how the Pentagon assesses vendor risk. Altman said he had urged the Department not to designate Anthropic a supply-chain risk and that he hoped similar terms would be offered to Anthropic.
Analysis & Implications
The episode highlights an emerging governance dilemma: how to integrate rapidly evolving AI capabilities into national security workflows while avoiding domestic civil liberties risks. Contract language that forbids “intentional” domestic surveillance addresses a clear public concern, but enforcement depends on auditability, transparency and technical constraints that are not yet standardized. Absent robust monitoring, contractual promises may provide limited reassurance by themselves.
The difference in outcomes between OpenAI and Anthropic negotiations raises questions about procurement timelines, corporate posture and political factors influencing vendor selection. If the Pentagon can negotiate different terms with competing vendors, vendors may adapt their contractual language or internal controls to remain eligible for sensitive contracts, potentially creating uneven incentives for safety-first design choices.
For the private sector and purchasers, the incident underscores reputational risks tied to defense contracts. Consumer shifts—reported app-store movement from ChatGPT to Claude—show how public trust can translate into commercial impact. For policymakers, the episode may accelerate calls for clearer standards on acceptable military and domestic uses of commercial AI, and for formal oversight of how contract safeguards are verified.
Comparison & Data
| Item | Anthropic | OpenAI |
|---|---|---|
| Initial DoD deployment | Deployed models across classified network (after initial deal last year) | New agreement announced on Friday; amendments pledged after backlash |
| Contract stance | Sought explicit guarantees against domestic surveillance and autonomous weapons | Added language to prohibit intentional domestic surveillance; pledged restrictions on intelligence use |
The table highlights the sequence and contractual focus: Anthropic previously worked to secure guarantees and faced a standoff with the Pentagon, while OpenAI’s faster announcement triggered public criticism and a subsequent pledge to amend language. These contrasts do not by themselves explain the Department’s negotiation choices, but they show differing vendor approaches.
Reactions & Quotes
Altman’s public acknowledgement framed the company’s intent and missteps, and he emphasized restraint in a high-stakes moment.
“We shouldn’t have rushed”
Sam Altman, CEO of OpenAI
Altman also characterized the company’s initial intent as an effort to de-escalate tensions between Anthropic and the Department.
“We were genuinely trying to de-escalate things and avoid a much worse outcome.”
Sam Altman (internal memo repost)
Defense leadership signaled a stricter stance toward Anthropic during the episode, framing it as a supply-chain concern.
“Designated a supply-chain threat”
Defense Secretary Pete Hegseth (public statement)
Unconfirmed
- Why the Defense Department accepted OpenAI’s terms but reportedly did not reach the same accommodation with Anthropic remains unclear and unverified.
- Whether the Department’s assurances will prevent all secondary or indirect domestic-use scenarios is unresolved and depends on implementation details not yet public.
- The exact mechanics and timelines for the technical safeguards OpenAI pledged to develop with the Pentagon have not been disclosed.
Bottom Line
This episode illustrates the friction at the intersection of commercial AI innovation, national security procurement and public trust. OpenAI’s rapid acknowledgement and pledge to amend its Pentagon contract address immediate reputational harms, but contractual language alone will not eliminate enforcement or oversight challenges.
Policymakers, vendors and civil-society groups will need to translate such clauses into verifiable standards and monitoring mechanisms if the government intends to scale commercial AI deployments without eroding civil liberties. For industry, the lesson is clear: speed in securing contracts with national security customers can create long-term brand and regulatory costs if safeguards and communication are incomplete.