Anthropic CEO refuses Pentagon demand for unrestricted AI use

Lead: Anthropic CEO Dario Amodei said Thursday that the company cannot, in good conscience, accept the Pentagon’s demand for unrestricted use of its Claude AI, escalating a public standoff that could cost Anthropic its Defense Department work. The disagreement followed a Tuesday meeting with Defense Secretary Pete Hegseth, who gave Anthropic until Friday to allow broader military use or risk losing its contract. Anthropic says new draft contract language failed to adequately prevent mass domestic surveillance or fully autonomous weaponization. The company signaled it would help transition services if the dispute is not resolved.

Key takeaways

  • Anthropic CEO Dario Amodei publicly rejected Pentagon language that would allow unrestricted uses of Claude, citing risks to civil liberties and autonomous weapons.
  • The Pentagon gave Anthropic an ultimatum on Tuesday to permit broader use of its model by Friday, warning of contract termination and other measures.
  • Officials threatened to label Anthropic a supply-chain risk or invoke the Defense Production Act (DPA), while Anthropic called those options contradictory.
  • The Pentagon says it will only use AI for lawful purposes and that limiting access could jeopardize military operations, according to spokesman Sean Parnell.
  • Anthropic is the only major AI vendor — others, including Google, OpenAI and xAI, have signed on — to decline to join a new internal military AI network.
  • Sen. Thom Tillis criticized the public handling of the dispute; Sen. Mark Warner urged stronger, binding AI governance for national security contexts.
  • Defense Department leaders have recently signaled a shift in legal culture, with Defense Secretary Hegseth saying military counsel should avoid being a roadblock to operations.
  • Anthropic reiterated it is continuing talks but will prepare to transition to another provider if necessary.

Background

The dispute has roots in months of talks between Anthropic and the Defense Department about how AI models should be used by the military. The Pentagon has been building an internal network to give commanders controlled access to commercial AI capabilities; several major vendors have already signed on. Anthropic contends its policies and model constraints are designed to prevent uses such as domestic mass surveillance and fully autonomous weapon systems, and it says the Department’s latest contract language does not sufficiently guard against those risks.

Defense officials say broader access to AI models is needed to avoid operational risk and to ensure continuity of critical missions. After meeting with Amodei on Tuesday, Secretary Pete Hegseth set a Friday deadline for Anthropic to accept the Department’s terms. Pentagon leaders have also discussed invoking tools such as the Defense Production Act to secure capability or, alternatively, designating vendors as supply-chain risks if they refuse to comply.

Main event

On Thursday Anthropic issued a statement saying it would not accede to the Department’s demands in good conscience, while clarifying it was not walking away from negotiations. The company said the revised contract text “made virtually no progress” on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons. Amodei warned that accepting the language would run counter to the company’s safety policies and ethical commitments.

Pentagon spokesman Sean Parnell pushed back publicly, saying the Department intends to use Anthropic’s model for “all lawful purposes” and that broader access would prevent companies from jeopardizing military operations. Parnell’s post did not delineate the specific operational uses the Department envisions, a gap that has been a central tension in the talks.

Emil Michael, the Defense undersecretary for research and engineering, attacked Amodei on social media, accusing him of seeking to control military decisions. Military officials described the options on the table as escalating: canceling contracts, branding the supplier a supply-chain risk, or — in extremis — invoking the Defense Production Act to compel continued access to the model.

Anthropic argues those escalation options are incoherent: one approach treats the company as a security risk while another treats its model as essential to national security. With the Friday deadline looming, Amodei said Anthropic would help effect a smooth transition to another provider if the Department does not reconsider its position.

Analysis & implications

The clash exposes a broader policy gap between national-security demand for assured capabilities and private-sector caution about civil liberties and weapons use. If the Pentagon secures unqualified access to commercial models, it would set a precedent likely to reshape vendor risk calculus and the commercial AI market. Companies may face a stark choice between lucrative defense contracts and maintaining public-facing safety commitments.

Invoking the Defense Production Act would be legally notable and politically fraught. The DPA can compel production or prioritize contracts, but using it to force operational control of an AI model would be unprecedented and likely spark litigation and congressional scrutiny. Anthropic’s public refusal increases the political stakes and makes closed-door compromise harder.

For the military, restricting access to vendors that enforce safety guardrails could reduce available expertise and slow adoption of AI tools that commanders view as operationally vital. Conversely, if vendors are compelled to accept permissive terms, civil liberties advocates and some lawmakers warn of risks around domestic surveillance and autonomy in weapons systems.

Internationally, a U.S. posture that pressures companies to yield expansive access to military use could influence global norms and procurement practices. Allies and rivals alike will watch whether the U.S. balances operational needs with governance safeguards — a balance that could affect coalition interoperability, export controls and industry willingness to innovate.

Comparison & data

| Vendor          | Reported stance on Pentagon internal AI network   |
|-----------------|---------------------------------------------------|
| Anthropic       | Declined to permit unrestricted use; negotiating  |
| Google          | Reportedly has a contract to provide models       |
| OpenAI          | Reportedly has a contract to provide models       |
| xAI (Elon Musk) | Reportedly has a contract to provide models       |

Reported vendor participation in the Pentagon’s new internal AI network (source: Associated Press reporting on Department of Defense statements).

The table summarizes public reporting that Anthropic is the only major vendor to publicly resist joining the Department’s internal AI network. The other vendors are reported to have contracts or arrangements to supply models; their participation levels and use restrictions vary and are not publicly detailed. This lack of transparency about permitted operational uses is a central driver of the current dispute.

Reactions & quotes

Officials, lawmakers and company leaders reacted quickly to the public escalation, underscoring partisan and cross-institutional tensions.

“We will not let ANY company dictate the terms regarding how we make operational decisions.”

Sean Parnell, Pentagon spokesman

Parnell’s comment framed the Department’s stance as a matter of operational control; it did not provide granular examples of the uses the Department requires. Pentagon officials say ensuring access prevents disruption to critical operations.

“We cannot in good conscience accede to language that makes virtually no progress on preventing mass surveillance or fully autonomous weapons.”

Dario Amodei, Anthropic CEO

Amodei’s statement emphasized Anthropic’s safety policies and the company’s reluctance to allow applications that might enable domestic surveillance or remove human control from weapons. He also repeated that Anthropic remains in negotiations while preparing contingency plans.

“He wants nothing more than to try to personally control the US Military.”

Emil Michael, Defense undersecretary for research and engineering (social media)

Michael’s post signaled frustration inside the Defense Department and suggested some leaders view vendor caution as obstructive to military readiness. The public tone of exchanges prompted criticism from some senators about how the dispute has been handled.

Unconfirmed

  • Whether the Defense Department will actually invoke the Defense Production Act in this case remains unconfirmed and has not been announced by the Department.
  • Specific operational scenarios the Pentagon intends to run on Anthropic’s model have not been publicly detailed by officials.
  • It is not confirmed which alternate vendor(s) the Department would transition to if Anthropic’s contract were terminated.

Bottom line

The public rupture between Anthropic and the Pentagon crystallizes a major policy dilemma: how to reconcile operational military needs with company-imposed guardrails designed to protect civil liberties and prevent autonomous weaponization. Anthropic’s refusal to accept broad, unrestricted use underscores industry resistance to terms that would undermine safety commitments; the Pentagon’s threats illustrate urgency within defense ranks to secure reliable AI access.

How the dispute resolves will have ripple effects across procurement, legal precedent and vendor behavior. If the Department backs down or narrows its demands, it may preserve a path for negotiated governance frameworks. If it escalates through contract termination or DPA actions, the result could produce rapid shifts in vendor strategy, congressional intervention and new legal tests over the limits of civilian control of commercial AI.

Sources

  • Associated Press — news reporting summarizing statements by Anthropic and Pentagon officials (news)
