OpenAI amends Pentagon deal as Sam Altman admits it looked ‘sloppy’

Lead: OpenAI said it will revise a hastily arranged contract to supply AI tools to the US Department of War (DoW) after CEO Sam Altman acknowledged the announcement “looked opportunistic and sloppy.” The move, disclosed in early March 2026, followed the rapid replacement of Anthropic as a Pentagon AI supplier and sparked both public concern about potential domestic surveillance uses and dissent among employees. OpenAI has since pledged explicit limits on surveillance applications and on deploying its models to certain defence intelligence agencies. The adjustment aims to calm an online backlash and internal staff objections while preserving a role in classified AI work.

Key Takeaways

  • OpenAI announced it will amend its contract with the US Department of War after criticism of the initial announcement; CEO Sam Altman described the rollout as “opportunistic and sloppy.”
  • The company has more than 900 million ChatGPT users globally; the deal followed Anthropic being dropped as the Pentagon’s AI supplier.
  • OpenAI says it will explicitly prohibit use of its technology for domestic mass surveillance and block deployment by defence intelligence agencies such as the NSA.
  • Nearly 900 employees across Google and OpenAI (796 Google staff, 98 OpenAI staff) signed a letter opposing DoW use of models for mass surveillance or autonomous lethal targeting.
  • The public reaction included calls to abandon ChatGPT and a surge in downloads for Anthropic’s Claude on the Apple App Store, per Sensor Tower analysis.
  • Critics and some former OpenAI staff questioned whether the revised safeguards fully replicate Anthropic’s earlier ethical redlines.
  • Three cabinet-level agencies — State, Treasury and Health and Human Services — moved to stop using Anthropic products after the DoW flagged the company as a supply-chain risk.

Background

The Pentagon announced a new supplier arrangement after removing Anthropic from its roster of AI contractors. Anthropic had publicly argued that using generative AI for domestic mass surveillance was incompatible with democratic values, a stance that set it at odds with the Department of War’s stated operational needs. Shortly after Anthropic’s removal, OpenAI stepped in with an agreement that was announced quickly and without an extended public explanation. That rapid timing, together with the high-profile nature of both firms, intensified scrutiny from civil liberties groups, employees and the public.

OpenAI has grown rapidly since launching ChatGPT and now reports more than 900 million users, which heightened concern about how broadly its models might be used if integrated into government systems. The Snowden revelations of 2013 remain a touchstone for critics worried about large-scale harvesting of communications; commentators invoked that precedent when evaluating the DoW arrangement. Within the company and across the tech sector, staff have mobilised to press their employers to refuse requests that could enable mass surveillance or autonomous lethal systems.

Main Event

The contract announcement and its immediate aftermath unfolded over several days in early March 2026. OpenAI published a short statement describing the terms as having strong guardrails for classified AI deployments. Within hours, employees, privacy advocates and competitors challenged whether those limits were sufficient. CEO Sam Altman posted to employees and on social platforms that the company had moved too quickly to publish the deal and that the communication had come across poorly.

In response to the backlash, OpenAI said it would amend the agreement to explicitly ban the use of its models for domestic mass surveillance and to bar certain defence intelligence agencies, including the National Security Agency, from deploying its technology. The company reiterated an earlier red line that its systems should not be used to directly control autonomous weapons systems. Still, questions remained about enforcement mechanisms, auditing and how classified-use guardrails would be verified.

Public reaction included a grassroots social media campaign urging users to delete ChatGPT and calls for government oversight of AI contracts. Sensor Tower reported that Anthropic’s Claude rose to the top of the Apple App Store charts in downloads, an indicator of user migration in the immediate fallout. Meanwhile, three cabinet-level agencies—State, Treasury and Health and Human Services—moved to phase out Anthropic products after the Department of War labeled Anthropic a supply-chain risk, and President Donald Trump ordered agencies to stop using Anthropic following that decision.

Analysis & Implications

The episode underscores a tension at the intersection of commercial AI deployment and national security procurement. Rapid contracting can achieve operational speed but risks eroding public trust if safeguards and communications are perceived as inadequate. Altman’s admission that the announcement was rushed is unusually candid for a major tech CEO and signals that reputational risk management is becoming a core part of AI governance.

Even with the stated prohibitions, technical and bureaucratic complexity will determine how effectively limits on surveillance and intelligence use are enforced. Contractual clauses can restrict direct deployment by named agencies, but indirect use cases, subcontracting, or downstream integrations present enforcement challenges. Independent auditing, third-party oversight and clear transparency commitments would strengthen the practical effect of any red lines.

The personnel reaction — nearly 900 signatories across Google and OpenAI — shows that internal company culture now shapes public policy debates over AI. Employee activism influences corporate decisions and can alter procurement dynamics by signalling reputational and recruitment costs. For regulators and lawmakers, this raises questions about whether existing procurement frameworks, export controls and privacy laws are fit for the rapid pace of AI development.

Comparison & Data

Metric | OpenAI | Anthropic
Reported users/downloads | ~900 million ChatGPT users | Claude surge to #1 on the Apple App Store (Sensor Tower)
Employee protest signatories | 98 (OpenAI staff) | 796 (Google staff)
Contract status | Amending Pentagon deal (March 2026) | Removed as Pentagon supplier; flagged as supply-chain risk

The table above summarises available public figures tied to the dispute. The user base figure for ChatGPT (approximately 900 million) and the number of staff signatories reflect the data reported in contemporaneous coverage; app-store position shifts are drawn from third-party analytics. These figures illustrate the scale of the reputational and marketplace movements, but they do not capture classified contract terms that remain confidential.

Reactions & Quotes

Company leadership framed the revision as a corrective step intended to clarify limits and restore public trust. Employee and expert voices remained cautious, urging stronger oversight and independent verification.

“We shouldn’t have rushed to get this out on Friday… The issues are super complex, and demand clear communication.”

Sam Altman (message to employees, later reposted publicly)

Several staff and former employees questioned whether the updated terms match the ethical guardrails that Anthropic had set out.

“Using these systems for mass domestic surveillance is incompatible with democratic values.”

Anthropic (official statement)

“OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving.”

Miles Brundage (former OpenAI head of policy research)

Unconfirmed

  • Whether the amended contract will include independent, external audits of classified deployments remains unspecified and unconfirmed.
  • The precise contractual language that will bar NSA or other defence intelligence use has not been published and is therefore unverified.
  • Claims that the deal allows indirect surveillance via subcontractors or downstream integrations are plausible but not confirmed by publicly available contract text.

Bottom Line

This episode illustrates that the technical promise of AI collides with governance, procurement and public trust when national security uses are proposed. OpenAI’s decision to revise the deal and to state explicit prohibitions is a tactical response to reputational and internal pressure, but the substantive effectiveness of those limits will depend on concrete, verifiable enforcement mechanisms and greater transparency.

Policymakers, civil society and the companies themselves now face pressure to develop durable frameworks for classified AI use that reconcile operational needs with civil liberties. The near-term outcome will hinge on the specific contract wording, oversight arrangements, and whether agencies and vendors accept independent verification — details that remain to be published.
