{"id":21898,"date":"2026-03-01T22:05:54","date_gmt":"2026-03-01T22:05:54","guid":{"rendered":"https:\/\/readtrends.com\/en\/us-military-uses-claude-iran-strikes\/"},"modified":"2026-03-01T22:05:54","modified_gmt":"2026-03-01T22:05:54","slug":"us-military-uses-claude-iran-strikes","status":"publish","type":"post","link":"https:\/\/readtrends.com\/en\/us-military-uses-claude-iran-strikes\/","title":{"rendered":"US military reportedly used Claude in Iran strikes despite Trump\u2019s ban &#8211; The Guardian"},"content":{"rendered":"<article>\n<h2>Lead<\/h2>\n<p>The US military reportedly relied on Anthropic\u2019s AI model Claude to support intelligence, targeting and battlefield simulations during the joint US\u2011Israel strikes on Iran that began on Saturday, according to reporting by the Wall Street Journal and Axios. That use came just hours after President Donald Trump ordered federal agencies to cease using Claude, setting up a public dispute between the White House, the Pentagon and Anthropic. Officials say the tool was embedded into operational workflows, complicating any rapid disentanglement. 
The episode highlights tensions between political directives and operational dependence on commercial AI systems.<\/p>\n<h2>Key Takeaways<\/h2>\n<ul>\n<li>Multiple outlets (Wall Street Journal, Axios) reported the US military used Anthropic\u2019s Claude in the weekend strikes on Iran for intelligence analysis and targeting support.<\/li>\n<li>Donald Trump issued an order on Friday\u2014hours before the strikes\u2014directing all federal agencies to stop using Claude immediately.<\/li>\n<li>Reporting indicates Claude was used for battlefield simulations and target selection, underscoring its operational role beyond routine analytics.<\/li>\n<li>Anthropic\u2019s terms of use prohibit violent or weapons-development applications; the company publicly objected to earlier military use in a January raid to capture Venezuela\u2019s president, Nicol\u00e1s Maduro.<\/li>\n<li>Defense Secretary Pete Hegseth publicly criticized Anthropic and demanded broader model access, while allowing up to six months of continued service to permit a transition.<\/li>\n<li>OpenAI and CEO Sam Altman have reportedly reached agreement to provide tools on the Pentagon\u2019s classified network as Anthropic\u2019s access wanes.<\/li>\n<\/ul>\n<h2>Background<\/h2>\n<p>Commercial AI models have been integrated into a growing array of defense and intelligence functions: from processing large volumes of imagery and signals data to running simulations that compress complex operational scenarios. That integration accelerates decision cycles but also creates technical and contractual dependencies on third\u2011party providers. The Trump administration\u2019s recent order to sever ties with Anthropic came amid escalating political scrutiny of how US forces use privately developed models.<\/p>\n<p>The controversy has antecedents. 
In January, US forces reportedly employed Claude in an operation to capture Nicol\u00e1s Maduro, prompting Anthropic to object that such uses violate its terms of service, which bar violent ends, weapons development and surveillance use cases. Since that episode, relations between the company and US political leadership have deteriorated, turning a technical procurement issue into a political standoff with strategic implications for operational continuity.<\/p>\n<h2>Main Event<\/h2>\n<p>According to the Wall Street Journal and Axios, US military commands used Claude during the major US\u2011Israel bombardment of Iranian targets that commenced on Saturday. Sources told those outlets the model was applied to sift intelligence, recommend potential targets and run battlefield projections to model likely effects of strikes. The reporting did not attribute all details to a single official, reflecting a mosaic of on\u2011the\u2011record and background briefings.<\/p>\n<p>Hours earlier, on Friday, Donald Trump directed federal agencies to immediately cease using Anthropic\u2019s Claude. In public posts he criticized the company\u2019s leadership and framed the move as a matter of national posture. The timing\u2014immediately before an intense operational period\u2014left military planners confronting a practical dilemma: operational systems already built around Claude could not be unplugged without disrupting missions.<\/p>\n<p>Defense Secretary Pete Hegseth amplified the political dimension by accusing Anthropic of \u201carrogance and betrayal\u201d and demanding unfettered access to all the company\u2019s models for lawful uses. 
At the same time, Hegseth acknowledged the logistical realities of migration and authorized up to six months of limited continued service from Anthropic to enable a transition to alternative providers or in\u2011house systems.<\/p>\n<p>In the wake of the split, OpenAI reportedly stepped in to supply models for the Pentagon\u2019s classified network after CEO Sam Altman reached an agreement with the Defense Department. That arrangement, which OpenAI described as being for use on the classified network, signals rapid market reallocation when a key supplier is cut off from government contracts.<\/p>\n<h2>Analysis &#038; Implications<\/h2>\n<p>Operational dependence on third\u2011party AI creates a friction point where political decisions, commercial policy and battlefield needs collide. A directive to ban a vendor can be straightforward on paper but practically disruptive if that vendor\u2019s tools are deeply integrated into planning and targeting pipelines. Transitioning away requires validated technical substitutes, classified provisioning, and training time\u2014none of which are instantaneous under combat timelines.<\/p>\n<p>Legal and ethical questions are also foregrounded. Anthropic\u2019s published terms of use disallow violent applications, yet military users have reported employing the model for strike\u2011related tasks. This raises questions about governance: who enforces terms, how contractual safeguards interact with national security exemptions, and what liability or reputational risks accrue to companies and governments when policies and practices diverge.<\/p>\n<p>Geopolitically, the incident could shift how allies and adversaries view reliance on US commercial AI. If the Pentagon increasingly channels work to a small set of firms cleared for classified networks, market concentration and closer public\u2011private alignment may follow. 
That dynamic may accelerate procurement of hardened, auditable systems but risks reducing competitive pressures that drive innovation and safety improvements.<\/p>\n<h2>Comparison &#038; Data<\/h2>\n<figure>\n<table>\n<thead>\n<tr>\n<th>Aspect<\/th>\n<th>Anthropic\/Claude<\/th>\n<th>OpenAI\/ChatGPT<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Public terms of use<\/td>\n<td>Prohibits violent use (company policy)<\/td>\n<td>Commercial policies with classified\u2011use arrangements<\/td>\n<\/tr>\n<tr>\n<td>Reported Pentagon role<\/td>\n<td>Intelligence, targeting, simulations (reported)<\/td>\n<td>Agreed access for classified network (reported)<\/td>\n<\/tr>\n<tr>\n<td>Transition window<\/td>\n<td>Up to six months authorized<\/td>\n<td>Immediate steps to supply tools (reported)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p>The table summarizes publicly reported differences in policy and reported government access. It does not quantify model performance or assurance levels; those technical measurements remain classified or proprietary. Still, the contrast shows how contractual terms and classified\u2011network agreements shape which vendors the military can use in crisis periods.<\/p>\n<h2>Reactions &#038; Quotes<\/h2>\n<p>Pentagon leadership framed the dispute as both operational and normative. Hegseth\u2019s remarks sought to assert that military needs outweigh private vendor restrictions while also allowing time to move systems.<\/p>\n<blockquote>\n<p>&#8220;Arrogance and betrayal&#8230; America\u2019s warfighters will never be held hostage by the ideological whims of Big Tech,&#8221;<\/p>\n<p><cite>Pete Hegseth, U.S. 
Secretary of Defense (via X)<\/cite><\/p><\/blockquote>\n<p>Anthropic has pushed back, citing its terms of use that ban violent applications; company representatives have emphasized contractual and ethical boundaries around how customers may deploy Claude.<\/p>\n<blockquote>\n<p>&#8220;Our terms of service prohibit the use of Claude for violent ends or weapons development,&#8221;<\/p>\n<p><cite>Anthropic (company statement)<\/cite><\/p><\/blockquote>\n<p>OpenAI\u2019s CEO positioned his firm as a ready alternative for the Defense Department\u2019s classified needs, signaling rapid vendor substitution in practice.<\/p>\n<blockquote>\n<p>&#8220;We have reached agreement to provide tools for use on the Pentagon\u2019s classified network,&#8221;<\/p>\n<p><cite>Sam Altman, CEO of OpenAI (public statement)<\/cite><\/p><\/blockquote>\n<aside>\n<details>\n<summary>Explainer: What is Claude and why rules matter<\/summary>\n<p>Claude is a family of large language models developed by Anthropic and designed for tasks such as summarization, reasoning and simulation. Anthropic has published terms of use intended to limit applications the company deems harmful, including uses that facilitate violence or weapons development. In practice, a model can be embedded into many military workflows \u2014 from parsing intercepted communications to running probabilistic battlefield scenarios \u2014 creating grey areas between benign analytics and direct targeting. 
Understanding these distinctions matters for accountability, procurement rules, and operational safety.<\/p>\n<\/details>\n<\/aside>\n<h2>Unconfirmed<\/h2>\n<ul>\n<li>Precise scope of Claude\u2019s contributions: available public reports indicate use for intelligence and simulations, but the full technical role and outputs remain classified or unverified.<\/li>\n<li>Whether any specific strike decisions were solely driven by Claude outputs is unconfirmed; reporting characterizes the model as an assistive tool rather than the final decision authority.<\/li>\n<\/ul>\n<h2>Bottom Line<\/h2>\n<p>The episode exposes a growing governance gap: commercial AI models are now operationally consequential, but political, contractual and ethical controls have not kept pace with their use in sensitive missions. A political order severing ties with a vendor can be legally simple yet practically costly if the vendor\u2019s tools are embedded in mission\u2011critical systems.<\/p>\n<p>Expect rapid short\u2011term shifts in supplier relationships as the Pentagon secures alternatives and as vendors clarify or revise terms of service. In the longer term, this incident may spur tighter procurement rules, clearer terms for dual\u2011use models, and investment in auditable, government\u2011controlled AI capabilities to reduce single\u2011vendor operational risk.<\/p>\n<h2>Sources<\/h2>\n<ul>\n<li><a href=\"https:\/\/www.theguardian.com\/technology\/2026\/mar\/01\/claude-anthropic-iran-strikes-us-military\" target=\"_blank\" rel=\"noopener\">The Guardian<\/a> \u2014 UK news reporting (original article)<\/li>\n<li><a href=\"https:\/\/www.wsj.com\" target=\"_blank\" rel=\"noopener\">Wall Street Journal<\/a> \u2014 U.S. newspaper reporting on military use and sourcing (reporting cited by multiple outlets)<\/li>\n<li><a href=\"https:\/\/www.axios.com\" target=\"_blank\" rel=\"noopener\">Axios<\/a> \u2014 U.S. 
news outlet (corroborating reporting)<\/li>\n<li><a href=\"https:\/\/www.anthropic.com\/policies\/terms\" target=\"_blank\" rel=\"noopener\">Anthropic Terms of Use<\/a> \u2014 Company policy document (official)<\/li>\n<li><a href=\"https:\/\/www.defense.gov\" target=\"_blank\" rel=\"noopener\">U.S. Department of Defense<\/a> \u2014 Official statements and policy context (government)<\/li>\n<\/ul>\n<\/article>\n","protected":false},"excerpt":{"rendered":"<p>Lead The US military reportedly relied on Anthropic\u2019s AI model Claude to support intelligence, targeting and battlefield simulations during the joint US\u2011Israel strikes on Iran that began on Saturday, according to reporting by the Wall Street Journal and Axios. That use came just hours after former president Donald Trump ordered federal agencies to cease using &#8230; <a title=\"US military reportedly used Claude in Iran strikes despite Trump\u2019s ban &#8211; The Guardian\" class=\"read-more\" href=\"https:\/\/readtrends.com\/en\/us-military-uses-claude-iran-strikes\/\" aria-label=\"Read more about US military reportedly used Claude in Iran strikes despite Trump\u2019s ban &#8211; The Guardian\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":21893,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"rank_math_title":"US military used Claude in Iran strikes | TechBrief","rank_math_description":"Reporting says US forces used Anthropic\u2019s Claude for intelligence and targeting in weekend Iran strikes, days after Trump ordered a ban\u2014underscoring operational and policy friction.","rank_math_focus_keyword":"Claude,Anthropic,US military,Iran strikes,Trump 
ban","footnotes":""},"categories":[2],"tags":[],"class_list":["post-21898","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-top-stories"],"_links":{"self":[{"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/posts\/21898","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/comments?post=21898"}],"version-history":[{"count":0,"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/posts\/21898\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/media\/21893"}],"wp:attachment":[{"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/media?parent=21898"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/categories?post=21898"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/readtrends.com\/en\/wp-json\/wp\/v2\/tags?post=21898"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}