Hegseth gives Anthropic deadline to open its AI to the military or risk contract, AP source says

Lead: Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei on Tuesday that the company must allow unrestricted military use of its AI by a Friday deadline or face the possibility of losing a government contract, according to a person familiar with the meeting. The dispute centers on Anthropic’s safety limits for its Claude chatbot and Pentagon demands for tools without built-in operational constraints. Officials suggested the Defense Department could treat Anthropic as a supply-chain risk or invoke the Defense Production Act to broaden military access. The exchange highlights growing friction over how commercial AI will be governed in national-security settings.

Key Takeaways

  • Hegseth delivered an ultimatum to Anthropic’s CEO on Tuesday, giving the company until Friday to allow unrestricted military access to its AI or risk contract consequences, per an anonymous source.
  • Anthropic—maker of the Claude chatbot—is the only one of the four firms awarded Pentagon AI contracts that still limits its model’s use on the department’s new internal network, GenAI.mil.
  • Last summer the Pentagon awarded contracts to Anthropic, Google, OpenAI and xAI, each valued at up to $200 million, for work on secure AI platforms.
  • Anthropic is the only one of the four already authorized to operate on classified networks and partners with firms such as Palantir for that work.
  • CEO Dario Amodei has publicly opposed fully autonomous targeting and domestic surveillance, calling those two prohibitions firm “red lines.”
  • Pentagon officials said they could designate Anthropic a supply-chain risk or deploy the Defense Production Act to expand military use of its products if needed.
  • Public debate over AI in national security has intensified after recent announcements that xAI’s Grok and OpenAI’s models would join GenAI.mil in various roles.
  • Advocates and experts warn Congress may need stronger oversight as the Defense Department rapidly integrates commercial AI into operations.

Background

Since 2023 the Pentagon has accelerated efforts to incorporate commercial foundation models into military workflows, creating GenAI.mil as a secure internal platform. Last summer the Defense Department selected four AI vendors—Anthropic, Google, OpenAI and Elon Musk’s xAI—for contracts valued up to $200 million each to supply models and support. Anthropic’s approval to operate on classified networks set it apart from peers, enabling partnerships with defense contractors like Palantir.

Anthropic was founded in 2021 by former OpenAI researchers and has consistently promoted safety-first development, volunteering to submit some systems to outside review. CEO Dario Amodei has repeatedly warned about scenarios such as fully autonomous weapons and mass surveillance that he says should be off-limits. That stance has put Anthropic at odds with officials who argue military operations require tools without preprogrammed ethical constraints.

Main Event

Sources say Defense Secretary Hegseth met with Amodei on Tuesday in Washington and set a deadline for the company to remove or relax ethical restrictions preventing certain military uses. The person familiar with the meeting and a senior Pentagon official, both speaking anonymously, described the tone as cordial but firm. Officials told company leaders they could face a supply-chain designation or that the department might use the Defense Production Act to grant broader access to Anthropic’s technology.

Anthropic’s leadership has held fast to two explicit prohibitions: no systems enabling fully autonomous targeting and no tools designed for domestic surveillance of U.S. citizens. Amodei has framed those limits as necessary safety guardrails. Pentagon officials counter that lawful military orders and the need for unencumbered capabilities make such constraints problematic for some mission sets.

The dispute follows recent moves by other AI firms to integrate more tightly with the Pentagon. xAI’s Grok has been added to GenAI.mil and OpenAI confirmed in February it would supply a customized ChatGPT for unclassified uses. Hegseth has publicly criticized what he calls ideological restraints in the military, saying in January that he would not accept models that “won’t allow you to fight wars.”

Analysis & Implications

The confrontation between Anthropic and the Defense Department illustrates a core tension in modernizing military technology: balancing operational flexibility against ethical and civil‑liberties safeguards. If the Pentagon can compel companies to loosen safety controls, private-sector incentives to build and publish safety research could erode. That may accelerate deployment but could heighten risks tied to misuse, errors, or mission creep.

Designating a vendor as a supply‑chain risk or invoking the Defense Production Act would be significant precedents. Both measures could effectively transfer decision-making about acceptable uses from developers to the government, narrowing corporate negotiating leverage in future procurements. Such steps would likely prompt legal, congressional and industry scrutiny on proportionality and oversight.

Internationally, U.S. policy choices will be watched by allies and competitors. If Washington prioritizes unrestricted military access to advanced commercial models, partner nations may follow suit or seek alternative suppliers aligned with their own ethical frameworks. Conversely, strict corporate limits could slow adoption of promising tools that analysts argue could enhance intelligence, logistics and defensive capabilities.

Comparison & Data

Vendor      | GenAI.mil Status | Classified Network Access | Contract Value (max)
Anthropic   | Selected         | Approved                  | $200 million
Google      | Selected         | Operating unclassified    | $200 million
OpenAI      | Selected         | Operating unclassified    | $200 million
xAI (Grok)  | Selected         | Operating unclassified    | $200 million

The table summarizes the Pentagon’s four summer selections and their current disposition as reported publicly: Anthropic is the only vendor cleared for classified environments to date, while the other three operate in unclassified settings. The uniform cap of up to $200 million per contract reflects the department’s procurement parameters for these initial integrations.

Reactions & Quotes

“Our models should not be directed to carry out fully autonomous targeting or be built for domestic surveillance,”

Dario Amodei, Anthropic CEO (paraphrased)

Amodei has repeatedly framed those two prohibitions as essential safety commitments and has warned about societal harms from unchecked AI deployment.

“The Department of Defense needs tools without ideological constraints that limit lawful military applications,”

Senior Pentagon official (paraphrased)

Pentagon spokespeople and officials argue that the military must be able to rely on systems that can execute lawful orders without embedded restrictions that could impede operations.

“Congress must step in if technology adoption outpaces the law; the DoD does not have a blank check,”

Amos Toh, Brennan Center senior counsel (paraphrased)

Legal and civil‑liberties groups have voiced concerns about potential domestic surveillance and the adequacy of statutory oversight as the department scales AI use.

Unconfirmed

  • The exact legal pathway the Pentagon would use—whether a formal supply‑chain designation or invocation of the Defense Production Act—has not been publicly confirmed by officials.
  • The specific wording and scope of the Friday deadline delivered to Anthropic’s CEO have not been released and remain known only to participants who spoke anonymously.
  • Any internal Pentagon assessment asserting Anthropic would be unable to comply with lawful orders if limits remain has not been made public and is therefore unverified.

Bottom Line

The clash between Anthropic’s stated safety restraints and the Pentagon’s demand for unencumbered operational AI underscores a pivotal policy choice: whether the U.S. will prioritize speed and maximal capability for military AI or preserve corporate and civil‑liberties safeguards. The outcome could reshape how commercial AI firms engage with national‑security customers and influence the design of future models.

In the near term, watch for formal moves by the Defense Department, congressional questions, and possible legal challenges. Longer term, this dispute may prompt clearer statutory rules governing government access to commercial AI, setting norms that will affect industry conduct and international expectations.
