After Blacklisting Anthropic, Trump’s Military Still Used Claude AI in Iran Strike

This report details the escalating dispute between the Trump administration and Anthropic, and the reported continued use of Claude AI in U.S. military operations despite an official phase-out order.

According to people familiar with the situation who spoke to the Wall Street Journal, the U.S. military used artificial intelligence software from San Francisco-based startup Anthropic in a significant airstrike against Iran just hours after President Donald Trump ordered federal agencies to stop using the company’s AI systems.

U.S. Military Use of Anthropic’s Claude AI After Trump Order

These individuals told the Journal that Anthropic’s Claude AI tool is used by commands worldwide, including U.S. Central Command (Centcom) in the Middle East, for activities such as intelligence assessments, target identification, and war simulation. Centcom declined to discuss the specific tools used in its ongoing operations against Iran.

Even as ties between Anthropic and the Pentagon have sharply deteriorated, the use of Claude in such high-stakes missions shows how deeply the model is already embedded in U.S. military operations, according to the Journal’s sources.

Defense Department and Anthropic Contract Dispute

For months, the Defense Department and Anthropic have been at odds over the terms for using the company’s AI models, a dispute that produced the current deadlock. Last week, the Trump administration directed all government agencies to stop working with Anthropic.

The administration also instructed the Pentagon to classify the company as a security concern and a risk to its defense supply chain. According to the Journal, the order came after contract negotiations in which Anthropic declined to grant the Pentagon the authority to use Claude in any lawful scenario it might require.

⚔️ AI Military Contract Conflict

  • Company: Anthropic
  • AI Tool: Claude AI
  • Issue: Refusal to allow unrestricted lawful military use
  • Action: Pentagon directed to classify as supply chain risk
  • Phase-Out: Six-month removal plan from government systems
  • Core Dispute: Ethics vs national security deployment

Alternative AI Contracts and Pentagon Strategy

The conflict has prompted the Defense Department to secure alternative contracts for AI technologies from other developers. Although the Pentagon has agreements with the makers of Elon Musk’s xAI models and OpenAI’s ChatGPT for sensitive settings, military officials and AI specialists warn that it may take months to fully replace Claude across all platforms.

Along with OpenAI, Google, and xAI, Anthropic was one of a select few large AI labs to win multiyear Pentagon contracts, worth up to $200 million each, to provide cutting-edge AI capabilities. Anthropic partnered with Palantir and Amazon Web Services to make Claude the only model authorized for use in classified military and intelligence workflows.

🛰️ Claude AI in Pentagon Systems

  • Deployment Since: 2024
  • Usage: Intelligence workflows & classified operations
  • Notable Operation: Venezuela raid capturing Nicolás Maduro
  • Partners: Palantir & Amazon Web Services
  • Contract Value: Up to $200 million
  • Status: Gradual six-month phase-out ordered

Escalating Tensions Over AI Ethics and Military Use

Earlier this year, the model’s integration into Pentagon systems, including intelligence workflows and operations such as the January raid in Venezuela that captured President Nicolás Maduro, attracted attention and underscored its operational importance in high-security settings.

Tensions have escalated in recent weeks, after Defense Secretary Pete Hegseth gave Anthropic a deadline to permit unrestricted use of its AI technologies for any “lawful” military purpose. Anthropic CEO Dario Amodei publicly rebuffed demands to remove safeguards against particular uses, framing them as moral red lines the company would not cross, even at the cost of government contracts.

Trump’s order followed days of public and private back-and-forth between Hegseth and Amodei.

In recent months, the company had grown concerned that the government might use its AI technologies, including Claude, for “totally autonomous weaponry” and “bulk surveillance.” The Pentagon and Hegseth have demanded that Anthropic consent to “any authorized usage” of its tools and technology.

Public Statements and Social Media Announcements

Hegseth and Trump both announced their decisions against Anthropic on social media. The defense secretary stated on X that Anthropic would be “immediately” classified as a supply chain risk and that any company working with the military would be barred from “any commercial activity with Anthropic.”

Anthropic stated on Friday night that it had not received any direct communication “on the status of our conversations” from the military or the White House.

Being designated a supply chain risk, however, “would both be legally unsound and create a hazardous precedent for any American company that negotiates with the government,” the company said.

The company added: “We will not back down from our stance on fully autonomous weapons or widespread domestic surveillance, no matter how much pressure or punishment we receive from the Department of War.” Trump has renamed the Defense Department the Department of War.

Trump plans to phase Anthropic’s tools out of all government operations over the next six months. As for Anthropic’s other clients, the company said only businesses holding military contracts would be affected; those businesses may have to stop using Anthropic for department-related work.

Anthropic has previously said it would “try to enable a smooth transition to another vendor” if the U.S. Department of Defense decided to stop using the company’s products.

Anthropic, Trump warned in a disparaging post on his Truth Social platform, had to “get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with serious civil and criminal ramifications to follow.”

Anthropic was the first frontier AI startup to have its tools deployed in government organizations performing classified work, and the U.S. military and government have used them since 2024.

Frequently Asked Questions

1. Why did Donald Trump order agencies to drop Anthropic?

Donald Trump ordered federal agencies to stop using Anthropic’s AI systems following disputes over how its products could be applied. Anthropic rejected the Pentagon’s demand for unrestricted access for “any authorized military reason.”

2. How did Claude AI contribute to the attack on Iran?

During the U.S. military operation against Iran, Anthropic’s Claude AI was reportedly used for intelligence assessments, target identification, and war simulation.

3. Why did Anthropic reject the terms offered by the Pentagon?

Anthropic’s CEO, Dario Amodei, rejected calls to permit unrestricted military use of its AI, citing ethical concerns about mass surveillance and fully autonomous weapons.

4. Will government systems stop using Anthropic’s AI right away?

No. Anthropic’s tools will reportedly be phased out over the next six months, since replacing such deeply integrated AI systems may take time.

5. Do other AI firms collaborate with the Pentagon?

Yes. The Pentagon has contracts with firms such as OpenAI and xAI to supply AI tools for classified and defense-related operations.

Conclusion

The situation illustrates the growing conflict between tech companies’ ethical limits and the government’s push for unrestricted deployment of AI.

The reported use of Claude AI in a major strike on Iran highlights how deeply advanced AI technologies are already embedded in U.S. military operations, even as President Trump has ordered agencies to sever ties with Anthropic.

How defense partnerships with AI companies evolve in the coming months will likely hinge on concerns about national security, ethics, and technological dependence.

Disclaimer: This article is based on publicly reported information and statements from involved parties. Developments may evolve as official confirmations or further details emerge.


About the Author

I’m Gourav Kumar Singh, a graduate by education and a blogger by passion. Since starting my blogging journey in 2020, I have worked in digital marketing and content creation.
