OpenAI said on Friday that it had reached an agreement with the Department of Defense to allow its A.I. technologies to be used on classified systems, hours after President Trump ordered federal agencies to stop using A.I. from rival Anthropic. Under terms required by the Pentagon, OpenAI agreed that the department could employ its systems for any lawful purpose; the company said it would install technical guardrails to keep such use aligned with its safety principles. The announcement followed a public breakdown in talks between Anthropic and the Pentagon over a separate $200 million contract and a designation by Defense Secretary Pete Hegseth labeling Anthropic a supply-chain risk to national security.
Key Takeaways
- Agreement timing: OpenAI reached the Pentagon deal on Feb. 27, 2026, the same day the administration barred federal agencies from using Anthropic technology.
- Scope of use: The Pentagon required that contractor A.I. be available for any lawful purpose; OpenAI acceded to that requirement for classified systems while adding technical constraints.
- Anthropic negotiations: Talks over a proposed $200 million Anthropic contract collapsed, and Anthropic was declared a supply-chain risk by Defense Secretary Pete Hegseth.
- Public posture: OpenAI emphasized safety and partnership; its CEO said the Defense Department showed “deep respect for safety” during negotiations.
- Transparency gap: Financial terms of the OpenAI–DoD arrangement were not publicly disclosed; by contrast, the value of the proposed Anthropic contract and the timeline of that dispute are a matter of public record.
- Policy friction: The episode highlights a widening gap between some A.I. firms’ internal safety limits and Pentagon procurement requirements.
Background
The United States has been accelerating efforts to integrate advanced commercial A.I. into defense and intelligence systems while grappling with ethical, legal and security questions. In recent months, the Pentagon has pressed leading A.I. firms to accept broad usage clauses that allow government actors to employ contracted systems for any lawful national security purpose. Some firms have pushed back, seeking contractual limits to prevent uses they judge to be reckless, such as domestic surveillance or lethal autonomous weapons.
Anthropic and OpenAI are two of the largest U.S. developers of generative A.I. Anthropic entered negotiations with the Pentagon over a proposed $200 million contract but argued for explicit contractual safeguards against certain applications. The disagreement over permitted uses became public in February 2026 as procurement deadlines and national-security concerns converged. Administration officials, including the president and the defense secretary, intervened publicly as talks reached a breaking point.
Main Event
On Feb. 27, 2026, OpenAI said it had reached terms with the Department of Defense that allow the agency to operate OpenAI systems on classified networks for lawful purposes. Company statements said OpenAI will implement technical guardrails designed to enforce its safety principles while satisfying the DoD requirement that contractors not unilaterally restrict lawful government use. OpenAI framed the outcome as a partnership that balances operational needs with safety commitments.
Earlier the same day, the administration ordered federal agencies to stop using Anthropic products, and negotiations over Anthropic’s proposed $200 million contract failed to meet a 5:01 p.m. deadline. Defense Secretary Pete Hegseth then designated Anthropic a “supply-chain risk to national security,” a label that effectively terminates the company’s access to U.S. government business. The effect on Anthropic’s procurement prospects was immediate and public.
The public narrative was further sharpened by comments from senior officials and by President Trump, who criticized Anthropic in a social-media post. OpenAI’s CEO, Sam Altman, posted that the Defense Department showed a “deep respect for safety and a desire to partner to achieve the best possible outcome,” signaling OpenAI’s effort to portray the deal as both responsible and commercially successful.
Analysis & Implications
Commercial A.I. firms now face a stark procurement choice: accept broad government usage clauses to gain access to large defense contracts, or insist on contractual limits that may shut them out of federal work. The OpenAI–DoD agreement suggests that at least one major provider has opted for technical and engineering solutions to reconcile safety stances with the Pentagon’s requirements, rather than legal carve-outs.
For the Pentagon, securing access to leading A.I. capabilities is a strategic imperative as rivals deepen their own defense-related A.I. programs. The department’s insistence on “lawful purpose” flexibility reflects longstanding procurement norms intended to ensure that national-security customers retain operational control. But the public dispute with Anthropic shows the reputational and political risks of pressing vendors too hard.
Market dynamics are likely to shift. Companies that align with Pentagon terms and offer certified guardrails could capture government revenue and reputational benefits; those that prioritize contractual restrictions may pursue alternative markets. Investors and partners will watch whether the OpenAI arrangement sets a template for engineering-based safeguards versus legal restrictions.
Comparison & Data
| Company | Negotiation Outcome | Contract Value | Usage Terms |
|---|---|---|---|
| Anthropic | Negotiations failed; designated supply-chain risk | $200 million (proposed) | Vendor sought limits on surveillance and lethal weapon use |
| OpenAI | Agreement reached with DoD for classified systems | Undisclosed | Permitted for any lawful purpose with technical guardrails |
The table summarizes public details: Anthropic’s talks centered on a proposed $200 million award but ended without agreement and were followed by an official supply-chain risk designation. OpenAI’s deal permits use for lawful purposes on classified systems; financial terms were not publicly released. These contrasts highlight how procurement language and a firm’s willingness to accept it can determine government access.
Reactions & Quotes
OpenAI framed the deal as both a technical and relational success, emphasizing safety commitments alongside compliance with defense requirements.
“In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
Sam Altman, OpenAI CEO (social post)
The Pentagon’s public designation of Anthropic sharpened the political stakes and underscored procurement authorities’ concerns about supply-chain integrity.
“Anthropic presents a supply-chain risk to national security.”
Pete Hegseth, U.S. Secretary of Defense (official statement)
The president’s intervention further politicized the dispute and signaled administration-level pressure on vendor selection.
“A radical Left AI company.”
President Donald J. Trump (social-media post)
Unconfirmed
- Whether the OpenAI guardrails fully prevent all contested uses (for example, autonomous lethal systems) remains unverified by independent auditors.
- The monetary value and full legal text of the OpenAI–DoD agreement have not been publicly released and therefore cannot be independently confirmed.
- Internal Pentagon assessments and the detailed rationale for the supply-chain risk designation of Anthropic have not been published in full.
Bottom Line
The episode marks a turning point in how the U.S. government secures commercial A.I.: defense procurement requirements can force firms to choose between contractual limits and operational access, and engineering mitigations are emerging as a middle path. OpenAI’s move to accept Pentagon usage terms while layering in technical constraints may become a model for other vendors seeking government business without abandoning public-facing safety commitments.
Policymakers, Congress and independent auditors will now play an important role in scrutinizing whether technical guardrails are adequate and whether procurement rules strike the right balance between capability and oversight. For industry, the calculus is clear: alignment with government requirements may yield lucrative contracts but will also invite greater regulatory and public scrutiny.
Sources
- The New York Times (news report summarizing events and statements)