Thairath Online

Ethics of Big Tech: Anthropic vs. the Military — From Venezuela to Iran, AI Pressured into Warfare

Tech companies · 02 Mar 2026 11:52 GMT+7



Last week, the U.S. technology and security sectors were shaken by a conflict between Anthropic, developer of the AI model "Claude," and the U.S. Department of Defense (DoD). The dispute escalated into a national question of who should control the world's most powerful AI in military applications: the developers themselves, or a government able to compel deployment through legal pressure.

Here is a summary timeline of the events.

Summer 2025: Beginning of cooperation.

  • Anthropic, along with other major AI companies including Google, OpenAI, and xAI, secured contracts from the DoD worth up to $200 million each.
  • Anthropic became the first tech company approved to deploy its AI model, "Claude," within the Pentagon's classified military networks. Military access was facilitated through their partner, intelligence analytics firm Palantir.

January 2026: Operation to capture Nicolás Maduro.

  • The U.S. military used the Claude model in an operation to arrest former Venezuelan President Nicolás Maduro in Caracas, conducted under the contract with Palantir.
  • Claude was utilized to analyze intelligence, support target selection, and simulate operational scenarios.
  • After this event, Anthropic immediately expressed concerns, emphasizing the company's "red lines": prohibiting the use of AI for two key purposes—fully autonomous weapons that can decide to attack targets without human control, and mass domestic surveillance of citizens.

February 2026: Tensions reach a breaking point.

  • (24 Feb) Pete Hegseth, the Secretary of Defense, summoned Dario Amodei, Anthropic's CEO, to a meeting at the Pentagon.
  • Amodei explained the company's usage restrictions, but Defense officials countered that the dispute was not about autonomous weapons or domestic surveillance.
  • The Pentagon argued that the DoD should not be constrained by "vendor rules," insisting Anthropic must allow military use of AI in "all lawful use cases," asserting the company does not have the right to dictate how the military uses the technology.
  • A deadline was set for Anthropic to permit military use of its AI in all applications by the evening of Friday, 27 Feb. The DoD warned that failure to comply would trigger drastic measures, including invoking the Defense Production Act to compel Anthropic's executives to grant the Pentagon unrestricted access to its AI technologies on national security grounds, and blacklisting: Hegseth threatened to remove Anthropic from the supply chain and designate it a "supply chain risk affecting national security," a label usually reserved for companies from rival nations.
  • On the same day, President Donald Trump ordered all federal agencies to "cease using" Anthropic's technology immediately. Contractors for the DoD were required to certify they no longer used Anthropic's models.
  • Trump attacked Anthropic on social media, calling it a "radical left AI company with no understanding of the real world."
  • Hegseth accused Anthropic of showing "arrogance and betrayal" toward the military.
  • Meanwhile, Anthropic's Claude app surged to number one on the U.S. Apple free apps chart late Saturday, surpassing ChatGPT and Gemini after refusing to allow its model's use for mass surveillance or autonomous weapons development.

OpenAI takes over; Claude still used in Iran.

However, hours after Anthropic was cut off, reports emerged that OpenAI announced an agreement with the DoD. CEO Sam Altman stated the company's model would continue to be used in the Pentagon's classified military networks, emphasizing that OpenAI maintains similar "red lines" as Anthropic—prohibiting mass surveillance and requiring human accountability for lethal force.

Meanwhile, multiple media outlets reported that U.S. Central Command continues to use Anthropic's AI despite Trump's recent ban, including during operations targeting Iran.

During U.S. and Israeli strikes on Iran, Claude was employed for intelligence analysis, target selection, and battlefield simulation. This episode shows that AI systems are more deeply embedded in military operational planning than many realize, and that they cannot be discontinued by a single order: extracting them from complex systems takes time.

Ethics of technology use for security: what happens next?

This issue has sparked broad ethical debate. Anthropic has clearly stated that AI differs from traditional weapons or military technology because of its ability to learn, analyze, and make decisions with potentially widespread and rapid impacts.

Anthropic positions itself as an AI company committed to safety and ethics, while the Trump administration has aggressively pursued military AI development by every available means. The DoD's 2023 policy stated that AI systems could select and attack targets without direct human intervention if properly vetted. This raised Anthropic's concern that if its model were used in secret military systems, the public might remain unaware until incidents occur.

The company does not reject security use entirely but believes current models are unprepared for high-stakes tasks like split-second attack decisions. Another key issue is AI use in citizen surveillance—while current laws permit some data collection, AI can elevate this to continuous behavioral analysis, cross-database pattern detection, and automated risk assessments on a broad scale.

Many question why the DoD accepted OpenAI's terms, which resemble Anthropic's, while rejecting Anthropic's outright; some officials had previously criticized Anthropic as overly cautious about safety.

This is not a conflict either side can easily exit. If Anthropic is permanently cut off from government contracts, it could severely impact the business. But if the DoD loses access to the most advanced models, security gaps may emerge while waiting for other players to catch up.

This conflict reflects structural tensions between AI companies' safety values—rejecting violence or surveillance without human oversight—and governments' and militaries' desire to leverage cutting-edge technology for strategic advantage.

It raises a major question about the balance in an era where AI becomes critical infrastructure for security, human safety, human rights, and future technology governance. Who should set ethical boundaries? This dispute may become a new paradigm in Big Tech–military relations, marking the beginning of a new power dynamic between the AI industry and governments worldwide.
