OpenAI Reaches Pentagon Agreement After Anthropic Blacklisting

OpenAI said late on Friday, Feb. 27, 2026, that it reached terms with the U.S. Department of Defense to deploy its AI models on the department’s classified network, hours after the Pentagon moved to blacklist rival Anthropic. The deal, announced by CEO Sam Altman on X, follows a day in which Defense Secretary Pete Hegseth labeled Anthropic a “Supply-Chain Risk to National Security” and President Donald Trump ordered federal agencies to stop using Anthropic technology. OpenAI said the DoD accepted specific safety restrictions — including limits on domestic mass surveillance and autonomous use of force — and that the company will provide technical safeguards and personnel to support the deployment. The development marks a sharp turn in relations between the U.S. military and commercial AI labs and raises immediate questions about procurement, safety oversight and industry access.

Key Takeaways

  • Agreement announced Feb. 27, 2026: OpenAI and the Department of Defense agreed to terms allowing OpenAI models on the DoD’s classified network, CEO Sam Altman said.
  • Anthropic designation: Defense Secretary Pete Hegseth labeled Anthropic a “Supply-Chain Risk to National Security,” triggering restrictions on use by DoD contractors.
  • White House action: President Donald Trump directed all federal agencies to “immediately cease” use of Anthropic’s technology the same day.
  • Safety commitments: OpenAI says the contract prohibits domestic mass surveillance and retains human responsibility for decisions on the use of force, including by autonomous systems.
  • Operational safeguards: OpenAI committed to building technical controls and sending personnel to assist with deployment and model safety inside the classified environment.
  • Negotiation gap: Anthropic had sought contractual guarantees against certain military applications; the DoD reportedly wanted authority to use models for all lawful military purposes.
  • Industry impact: The moves could reshape which commercial models U.S. defense contractors can integrate and may prompt legal challenges and policy reviews.

Background

Commercial AI labs have been engaging with U.S. national security customers for several years, with at least one vendor — Anthropic — already running models inside parts of the Department of Defense classified network before recent talks broke down. Those earlier deployments were framed as pilot programs to test capabilities under strict security controls while policymakers grappled with how to balance innovation, readiness and ethical limits.

Tensions have risen because companies and the military have different priorities: labs emphasize safety constraints and limits on how models are used, while defense officials want broad authority to employ tools for lawful missions. This week crystallized those tensions as the Pentagon pressed for contracts that allow use across all authorized scenarios, and at least one lab pressed back, seeking legal and contractual guarantees on specific prohibited uses.

Main Event

On Feb. 27, 2026, CEO Sam Altman posted that OpenAI and the DoD had agreed to terms to place OpenAI models on the department’s classified network. OpenAI characterized the agreement as incorporating key safety principles — explicitly barring domestic mass surveillance and preserving human accountability for force decisions — and said it would supply technical safeguards and trained staff to oversee deployment.

Earlier that day, Secretary Pete Hegseth publicly designated Anthropic as a supply-chain risk, a status typically reserved for entities tied to foreign adversaries and one that requires DoD vendors to certify they do not use Anthropic models. The designation effectively curtails Anthropic’s commercial access to many federal contractors and agencies, while Anthropic said it would challenge the label in court.

The administration’s parallel actions included President Trump directing federal agencies to stop using Anthropic technology immediately. Anthropic had previously negotiated with the DoD over restrictions on how its models would be used — notably seeking prohibitions on fully autonomous weapons and mass surveillance of Americans — but the two sides did not reach agreement.

Analysis & Implications

The Pentagon’s willingness to accept OpenAI’s safety conditions while rejecting those of Anthropic suggests a mix of tactical, legal and confidence-based assessments drove the outcome. Officials have privately criticized Anthropic for what they saw as excessive constraints; whether OpenAI’s concessions or different negotiation dynamics produced the deal is not fully public. The practical implication is that the DoD can proceed with a larger set of commercial AI capabilities while asserting contractual safety guardrails.

For U.S. national security procurement, this episode accelerates a new equilibrium: vendors acceptable to the DoD will likely need to demonstrate both operational suitability and legally binding safety commitments. That may advantage labs that align contractual language to Pentagon requirements without conceding core public safety principles. It also creates incentives for other providers to craft similar terms to remain eligible for classified work.

Internationally, allies watching U.S. procurement decisions may take cues on acceptable safeguards and supplier trustworthiness. If the U.S. military narrows its supplier set based on contractual assurances and perceived reliability, partner countries could mirror those standards — affecting the global commercial market for advanced models and cross-border technology cooperation.

Comparison & Data

Item | Anthropic (status) | OpenAI (status)
DoD classified network presence | Previously deployed; now restricted by designation | Approved for deployment under new agreement
Contractual limits requested | Prohibitions on autonomous weapons, mass surveillance | Same safety principles claimed and accepted by DoD
Regulatory/administrative action | Designated supply-chain risk (Feb. 27, 2026) | Agreement reached (Feb. 27, 2026)

The table summarizes the immediate contrasts: Anthropic’s prior deployment is now constrained by the supply-chain-risk designation, while OpenAI’s models were cleared for classified use under a signed agreement. The distinction turns on both contractual language and the DoD’s assessment of vendor posture toward safety and operational cooperation.

Reactions & Quotes

OpenAI positioned the agreement as consistent with its stated safety priorities and urged the DoD to offer comparable terms to other AI firms.

“Tonight, we reached an agreement with the Department…to deploy our models in their classified network,”

Sam Altman, OpenAI CEO (post on X)

The Pentagon’s designation of Anthropic drew sharp responses from the company and raised questions about the threshold for labeling a supplier a national security risk.

“We are deeply saddened by the decision to designate Anthropic a supply chain risk,”

Anthropic (company statement)

Defense leadership framed the Anthropic designation as a security measure; officials have said the label enforces tighter controls over which vendors serve defense contracts.

“Designated as a Supply-Chain Risk to National Security,”

Pete Hegseth, Secretary of Defense (public designation)

Unconfirmed

  • Why the DoD negotiated a different outcome with OpenAI than with Anthropic: internal deliberations and specific assessment details have not been publicly released.
  • Exact contractual language and enforcement mechanisms for the OpenAI-DoD agreement: full terms have not been published and remain private.
  • Whether other AI companies will be offered comparable terms by the DoD: OpenAI has requested parity, but the Pentagon’s broader policy approach is not yet clarified.

Bottom Line

The Feb. 27, 2026 sequence — the Anthropic designation and the near-simultaneous OpenAI pact — signals that the U.S. defense establishment is trying to square urgent operational needs with public safety commitments. Vendors that can translate safety principles into contract language acceptable to the DoD may gain privileged access to classified deployments, while those that cannot risk exclusion and legal exposure.

For policymakers and industry leaders, the immediate priority will be transparency around how decisions are made and how safety commitments will be monitored and enforced. Observers should watch for published contract terms, any legal challenges from Anthropic, and subsequent guidance from the DoD or the White House that could standardize treatment of commercial AI providers.
