US military reportedly used Claude in Iran strikes despite Trump’s ban – The Guardian

Lead

The US military reportedly relied on Anthropic’s AI model Claude to support intelligence, targeting and battlefield simulations during the joint US‑Israel strikes on Iran that began on Saturday, according to reporting by the Wall Street Journal and Axios. That use came just hours after President Donald Trump ordered federal agencies to cease using Claude, setting up a public dispute between the White House, the Pentagon and Anthropic. Officials say the tool was embedded in operational workflows, complicating any rapid disentanglement. The episode highlights the tension between political directives and operational dependence on commercial AI systems.

Key Takeaways

  • Multiple outlets (Wall Street Journal, Axios) reported the US military used Anthropic’s Claude in the weekend strikes on Iran for intelligence analysis and targeting support.
  • Donald Trump issued an order on Friday—hours before the strikes—directing all federal agencies to stop using Claude immediately.
  • Reporting indicates Claude was used for battlefield simulations and target selection, underscoring its operational role beyond routine analytics.
  • Anthropic’s terms of use prohibit violent or weapons-development applications; the company publicly objected to earlier military use in a January raid to capture Venezuela’s president, Nicolás Maduro.
  • Defense Secretary Pete Hegseth publicly criticized Anthropic and demanded broader model access, while allowing up to six months of continued service to permit a transition.
  • OpenAI and CEO Sam Altman have reportedly reached an agreement to provide tools on the Pentagon’s classified network as Anthropic’s access wanes.

Background

Commercial AI models have been integrated into a growing array of defense and intelligence functions: from processing large volumes of imagery and signals data to running simulations that compress complex operational scenarios. That integration accelerates decision cycles but also creates technical and contractual dependencies on third‑party providers. The Trump administration’s recent order to sever ties with Anthropic came amid escalating political scrutiny of how US forces use privately developed models.

The controversy has antecedents. In January, US forces reportedly employed Claude in an operation to capture Nicolás Maduro, prompting Anthropic to object that such uses violate its terms of service, which bar violent ends, weapons development and surveillance use cases. Since that episode, relations between the company and US political leadership have deteriorated, turning a technical procurement issue into a political standoff with strategic implications for operational continuity.

Main Event

According to the Wall Street Journal and Axios, US military commands used Claude during the major US‑Israel bombardment of Iranian targets that commenced on Saturday. Sources told those outlets the model was applied to sift intelligence, recommend potential targets and run battlefield projections to model likely effects of strikes. The reporting did not attribute all details to a single official, reflecting a mosaic of on‑the‑record and background briefings.

Hours earlier, on Friday, Donald Trump directed federal agencies to immediately cease using Anthropic’s Claude. In public posts he criticized the company’s leadership and framed the move as a matter of national posture. The timing, immediately before an intense operational period, left military planners confronting a practical dilemma: operational systems already built around Claude could not be unplugged without disrupting missions.

Defense Secretary Pete Hegseth amplified the political dimension by accusing Anthropic of “arrogance and betrayal” and demanding unfettered access to all the company’s models for lawful uses. At the same time, Hegseth acknowledged the logistical realities of migration and authorized up to six months of limited continued service from Anthropic to enable a transition to alternative providers or in‑house systems.

In the wake of the split, OpenAI reportedly stepped in to supply models for the Pentagon’s classified network after CEO Sam Altman reached an agreement with the Defense Department. That arrangement, which OpenAI described as covering use on the classified network, signals how quickly the market reallocates when a key supplier is cut off from government contracts.

Analysis & Implications

Operational dependence on third‑party AI creates a friction point where political decisions, commercial policy and battlefield needs collide. A directive to ban a vendor can be straightforward on paper but practically disruptive if that vendor’s tools are deeply integrated into planning and targeting pipelines. Transitioning away requires validated technical substitutes, classified provisioning, and training time—none of which are instantaneous under combat timelines.

Legal and ethical questions are also foregrounded. Anthropic’s published terms of use disallow violent applications, yet military users reportedly employed the model for strike‑related tasks. This raises questions about governance: who enforces the terms, how contractual safeguards interact with national security exemptions, and what liability or reputational risks accrue to companies and governments when policies and practices diverge.

Geopolitically, the incident could shift how allies and adversaries view reliance on US commercial AI. If the Pentagon increasingly channels work to a small set of firms cleared for classified networks, market concentration and closer public‑private alignment may follow. That dynamic may accelerate procurement of hardened, auditable systems but risks reducing competitive pressures that drive innovation and safety improvements.

Comparison & Data

Aspect | Anthropic/Claude | OpenAI/ChatGPT
Public terms of use | Prohibits violent use (company policy) | Commercial policies with classified‑use arrangements
Reported Pentagon role | Intelligence, targeting, simulations (reported) | Agreed access for classified network (reported)
Transition window | Up to six months authorized | Immediate steps to supply tools (reported)

The table summarizes publicly reported differences in policy and reported government access. It does not quantify model performance or assurance levels; those technical measurements remain classified or proprietary. Still, the contrast shows how contractual terms and classified‑network agreements shape which vendors the military can use in crisis periods.

Reactions & Quotes

Pentagon leadership framed the dispute as both operational and normative. Hegseth’s remarks sought to assert that military needs outweigh private vendor restrictions while also allowing time to move systems.

“Arrogance and betrayal… America’s warfighters will never be held hostage by the ideological whims of Big Tech,”

Pete Hegseth, U.S. Secretary of Defense (via X)

Anthropic has pushed back, citing its terms of use that ban violent applications; company representatives have emphasized contractual and ethical boundaries around how customers may deploy Claude.

“Our terms of service prohibit the use of Claude for violent ends or weapons development,”

Anthropic (company statement)

OpenAI’s CEO positioned his firm as a ready alternative for the Defense Department’s classified needs, signaling rapid vendor substitution in practice.

“We have reached agreement to provide tools for use on the Pentagon’s classified network,”

Sam Altman, CEO of OpenAI (public statement)

Unconfirmed

  • Precise scope of Claude’s contributions: available public reports indicate use for intelligence and simulations, but the full technical role and outputs remain classified or unverified.
  • Whether any specific strike decisions were solely driven by Claude outputs is unconfirmed; reporting attributes the model as an assistive tool rather than the final decision authority.

Bottom Line

The episode exposes a growing governance gap: commercial AI models are now operationally consequential, but political, contractual and ethical controls have not kept pace with their use in sensitive missions. A political order severing ties with a vendor can be legally simple yet practically costly if the vendor’s tools are embedded in mission‑critical systems.

Expect rapid short‑term shifts in supplier relationships as the Pentagon secures alternatives and as vendors clarify or revise terms of service. In the longer term, this incident may spur tighter procurement rules, clearer terms for dual‑use models, and investment in auditable, government‑controlled AI capabilities to reduce single‑vendor operational risk.
